Spectral segmentation based dimension reduction for hyperspectral image classification

ABSTRACT Hyperspectral images (HSIs), produced by one of the most prominent technologies for observing the Earth, contain a wide range of information. However, classification using the original high-dimensional HSI data cube faces significant challenges, notably its high computational cost. As a result, dimensionality reduction is indispensable. This paper introduces a dimension reduction method comprising both feature extraction and feature selection to obtain feature subsets. Minimum Noise Fraction (MNF) is a popular feature extraction method for HSI but requires high computational capability. We propose a segmented MNF that divides the complete HSI into groups utilising normalised cross cumulative residual entropy (nCCRE). An nCCRE-based feature selection is also employed to improve the quality of the chosen features using the max-relevancy min-redundancy measure. The support vector machine (SVM) classifier is applied to two real HSIs to evaluate the efficiency of the extracted subsets.


Introduction
Because of the extraordinary improvement of hyperspectral remote sensors, very narrow and continuous spectral channels can now capture wavelength ranges from 0.4 μm to 2.5 μm (Richards and Jia 1999). As a result, Hyperspectral Images (HSIs) have excellent spectral resolution, for example, 0.01 μm for the Indian Pines HSI captured by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor (Campbell and Wynne 2011), allowing researchers to investigate ground objects (Chen et al. 2015). Here, each spectral image is termed a feature for classification, as each contains an individual response of the ground surface (Islam et al. 2019, Lei et al. 2022).
Since HSI is a rich source of information, classification using the original image faces some significant challenges (Jia et al. 2013). Input HSI bands are highly correlated, and hyperspectral sensors capture the spectral image over a continuous range, so some spectral channels may contain unusual information about the earth's surface, creating noisy images (Tarabalka et al. 2009, 2010). Also, the computational cost for such an HSI data cube is excessive (Sellami and Tabbone 2022), and the classification accuracy of this high-dimensional data cube may not be satisfactory because the training samples are insufficient and unbalanced (Lillesand et al. 2015). As a result, as the number of bands increases, the classification accuracy gradually decreases, known as the 'Hughes phenomenon' or the curse of dimensionality (Hughes 1968). Therefore, reducing high-dimensional HSI data to an informative subset of features is essential to improve classification accuracy. Different feature reduction methods can be employed to mitigate the dimensionality effect (Sellami and Farah 2019, Islam et al. 2020b, Uddin et al. 2020, Yuan et al. 2022).
Atmospheric effects add noise to the HSI, which needs to be adjusted (Islam et al. 2020a; Uddin et al. 2020). Although Principal Component Analysis (PCA) is a popular feature extraction method, PCA does not accurately reflect the noise ratio of HSI data (Chang and Du 1999, Rodarmel and Shan 2002, Luo et al. 2016). On the other hand, Minimum Noise Fraction (MNF) is the most commonly used feature extraction method for noisy data, extracting features depending on image quality.
However, for high-dimensional data like HSI, the MNF transformation needs an extensive amount of computational power (Lixin et al. 2015). Moreover, MNF may not find the local characteristics of the data because it considers only its global aspects (Islam et al. 2020a). Additionally, the top features are ranked solely by SNR, which may not always extract useful information for classification tasks. For these reasons, Lixin et al. (2015) proposed a segmented minimum noise fraction (SMNF) using a correlation matrix image. Here, the whole HSI is first partitioned into segments utilising the band-to-band correlation matrix image. Then, MNF is applied to each section individually. As a result, the computational cost is significantly reduced.
Let X be the input vector of a hyperspectral image, X ∈ R^p, where p denotes the number of spectral bands in the image. MNF maps X to a lower-dimensional feature space Z ∈ R^k, with k significantly less than p, via a linear transformation

Z = W^T X   (1)

where W is the p × k MNF transformation matrix. In this case, the MNF produces a linear projection using (1) and provides a new feature space, Z, in which the top-ranked few components are the most capable of reconstructing the original image. It has also been observed that the linear transformation given by (1) is time-consuming, as it requires p × p multiplications and p × (p − 1) additions per pixel (Lixin et al. 2015). As a result, the transformation of hyperspectral data is computationally inefficient. Additionally, the transformation is biased towards the high-variance bands. Moreover, correlation is a linear similarity measure, applicable only when the data is linear (Sevilla et al. 2005, Hossain et al. 2011); it cannot effectively handle nonlinear data. On the other hand, the information-based CCRE measure reflects the authentic relevancy of nonlinear data (Hasan et al. 2012, Hossain et al. 2017), which motivates the proposed method. Because CCRE is not bound to a specific range, a band-to-band normalised CCRE (nCCRE) is measured in the proposed approach. The nCCRE measure generates a 2D matrix image, which is used to partition the complete HSI data into segments. Here, MNF is applied to each segment for effective feature extraction with minimum computational cost. Moreover, a Spectrally Segmented MNF (SSMNF) is investigated for comparison with the proposed method, similar to the Spectrally-Segmented-PCA (SSPCA) proposed by Tsai et al. (2007), where segmentation is done using the spectral wavelength. To accomplish the SSMNF goal, the dataset is divided into bands captured at visible wavelengths (VIS: typically 0.4-0.7 μm), near-infrared wavelengths (NIR: typically 0.7-1.1 μm), and short-wavelength infrared (SWIR: typically 1.1-3.0 μm), which can also be broken down into SWIR-1 (typically 1.1-1.35 μm) and SWIR-2 (typically 1.35-3.0 μm). Thus, the implementation steps are identical to those for SMNF, except that the segmentation criteria are based on the spectral wavelength regions of the HSI.
Additionally, feature selection is performed using nCCRE, applied between the MNF-extracted images and the available class labels, following the minimum redundancy and maximum relevance (mRMR) criteria. As a result, a subset of informative features is selected for classification. For classification, the kernel Support Vector Machine (KSVM) is applied. The proposed method is compared with conventional approaches and found to outperform them on performance measures such as overall classification accuracy, average accuracy, F1 score, and Kappa.
The rest of the paper is arranged as follows: Section 2 describes relevant work. The proposed nCCRE-based segmentation technique is then presented in Section 3, followed by the proposed feature extraction (segMNF) and the nCCRE-based feature selection that improves feature quality. Section 4 reports extensive experimentation on two genuine HSI datasets with the proposed feature reduction strategy. Section 5 closes the paper and summarises the findings.

Minimum Noise Fraction (MNF) for HSI
MNF can estimate the most informative features of an HSI. MNF represents the superposition of two PCAs (Green et al. 1988). The MNF transformation consists of two separate steps: (a) decorrelate and rescale the noise in the data using the noise covariance matrix (noise whitening), so that the noise has unit variance and no correlations between bands; and (b) apply a standard PCA transform to the noise-whitened data. MNF is appropriate because it uses the SNR rather than the global variance to measure the relevancy of features.
Let the input hyperspectral image be X = [x_1, x_2, ..., x_p]^T, where p represents the number of image bands and S is the number of pixels in each band. Because of atmospheric effects, the HSI signal contains noise, so X = M + N, where M and N are the noiseless original image and the noise, respectively. Therefore, the covariance of the HSI data is

Σ = Σ_M + Σ_N   (2)
where Σ_M and Σ_N are the covariances of the signal and the noise, respectively. The linear MNF transformation is defined as Z = W^T X, where the matrix W is the eigenvector matrix of Σ⁻¹Σ_N, i.e. Σ⁻¹Σ_N W = WΛ. The components are arranged according to their SNR values in the MNF transformation, so the first few MNF components contain the informative, less noisy features for classification.
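The two-step transform above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the noise covariance is assumed to be known (in practice it must be estimated, for example from differences of neighbouring pixels), and the data are assumed to be centred.

```python
import numpy as np

def mnf_transform(X, noise_cov, k):
    """Minimal MNF sketch: noise-whiten the data, then apply PCA.

    X         : (N, p) array of N pixels with p spectral bands (centred)
    noise_cov : (p, p) estimated noise covariance (Sigma_N)
    k         : number of MNF components to keep
    """
    # Step (a): whiten the noise so it has unit variance and no band correlation
    evals, evecs = np.linalg.eigh(noise_cov)
    whitener = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
    Xw = X @ whitener
    # Step (b): standard PCA on the noise-whitened data
    cov_w = np.cov(Xw, rowvar=False)
    w_evals, w_evecs = np.linalg.eigh(cov_w)
    order = np.argsort(w_evals)[::-1]        # largest variance = largest SNR first
    W = whitener @ w_evecs[:, order[:k]]     # overall p x k projection matrix
    return X @ W, W

# Toy example: 500 pixels, 6 bands, low-rank signal plus isotropic noise
rng = np.random.default_rng(0)
signal = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 6))
X = signal + 0.1 * rng.normal(size=(500, 6))
Z, W = mnf_transform(X - X.mean(0), 0.01 * np.eye(6), k=2)
print(Z.shape)  # (500, 2)
```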

Segmented Minimum Noise Fraction (SMNF)
When data sets contain a small number of discrete spectral bands, as multispectral images do, the traditional MNF is feasible and produces acceptable results for extracting suitable features (Xue et al. 2021). In the case of HSIs, applying conventional MNF to entire datasets may result in biased classification results, as well as exponential increases in computational cost and processing time (Lixin et al. 2015). Additionally, it has been observed that adjacent image bands in hyperspectral images are highly correlated compared to bands further apart, and that highly correlated image bands appear in blocks (Datta et al. 2014). Furthermore, MNF extracts the HSI data based on the global statistics of the images but fails to extract the local information (Zabalza et al. 2014). By avoiding the low-correlated bands between the highly correlated blocks, SMNF modifies the conventional MNF (Lixin et al. 2015).
Let x_n be the spectral vector of a pixel in the data matrix, defined as x_n = [x_n1, x_n2, x_n3, ..., x_nF], where n ∈ [1, S]. The full data matrix is then subdivided into k subgroups based on the correlation between the image bands. In this case, n_i denotes the i-th subgroup formed after HSI segmentation, and d_i represents the number of consecutive bands in that subgroup. Each subgroup's covariance matrix is computed separately, and eigendecomposition is then performed on each of these covariance matrices. The final projection matrix is found by merging the individual projection matrices of all subgroups n_i across the entire dataset.

Spectrally Segmented Minimum Noise Fraction (SSMNF)
As previously stated, SMNF obtains improved classification performance by extracting the local characteristics of the entire HSI through the extraction of global structures from each highly correlated subgroup dataset. However, this correlation-based segmentation method is inefficient for detecting plant classes, as it frequently results in the loss of critical information about the HSI (Tsai et al. 2007). As a result, Spectrally Segmented MNF (SSMNF) was introduced, inspired by classical SMNF, to extract the most precise spatial data from HSI for classifying plant coverage. Similar to SMNF, SSMNF applies traditional MNF to subgroup datasets segmented by spectral wavelength region. To accomplish the SSMNF goal, the dataset is divided into bands captured at visible wavelengths (VIS: typically 0.4-0.7 μm), near-infrared wavelengths (NIR: typically 0.7-1.1 μm), and short-wavelength infrared (SWIR: typically 1.1-3.0 μm), which can also be broken down into SWIR-1 (typically 1.1-1.35 μm) and SWIR-2 (typically 1.35-3.0 μm). Thus, the implementation steps are identical to those for SMNF, except that the segmentation criteria are based on the HSI's spectral wavelength regions.
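The wavelength-region grouping described above can be sketched as follows. The region boundaries are the typical values quoted in the text; the band-centre wavelengths in the example are invented purely for illustration.

```python
def ssmnf_groups(wavelengths_um):
    """Assign each band index to a spectral region (VIS/NIR/SWIR-1/SWIR-2),
    using the typical wavelength boundaries quoted in the text."""
    regions = {"VIS": (0.4, 0.7), "NIR": (0.7, 1.1),
               "SWIR-1": (1.1, 1.35), "SWIR-2": (1.35, 3.0)}
    groups = {name: [] for name in regions}
    for idx, w in enumerate(wavelengths_um):
        for name, (lo, hi) in regions.items():
            if lo <= w < hi:
                groups[name].append(idx)
                break
    return groups

# Hypothetical band-centre wavelengths (micrometres)
waves = [0.45, 0.65, 0.9, 1.2, 1.6, 2.2]
print(ssmnf_groups(waves))
```

MNF would then be run on each non-empty group separately, exactly as in SMNF.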

Cross Cumulative Residual Entropy (CCRE) for HSI
Cross Cumulative Residual Entropy (CCRE) is a popular similarity measurement tool (Wang and Vemuri 2007). CCRE can be used to measure the similarity of two images, using the cumulative residual distribution instead of the probability distribution (Rao et al. 2004). The CCRE of two images I and J is given by

C(I, J) = Σ_{u=0}^{L} Σ_v G(u, v) log [ G(u, v) / ( G_I(u) P_J(v) ) ]

where L represents the largest pixel value of the images, G(u, v) is the joint cumulative residual distribution, G_I(u) is the marginal cumulative residual distribution of I, and P_J(v) is the marginal probability of J. CCRE is primarily used here to find the relevancy between the HSI image bands.
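As an illustration, the CCRE definition above can be evaluated empirically for two discretised bands. The binning scheme and bin count here are our own assumptions, not the paper's.

```python
import numpy as np

def ccre(I, J, bins=32):
    """Empirical CCRE sketch between two image bands.

    Follows C(I, J) = sum_{u,v} G(u, v) * log( G(u, v) / (G_I(u) * P_J(v)) ),
    with G(u, v) = P(I > u, J = v) the joint cumulative residual distribution.
    Discretisation into `bins` levels is an assumption for illustration.
    """
    I = np.digitize(I.ravel(), np.histogram_bin_edges(I, bins))
    J = np.digitize(J.ravel(), np.histogram_bin_edges(J, bins))
    n = I.size
    total = 0.0
    for v in np.unique(J):
        pj = np.mean(J == v)                     # marginal probability of J
        for u in np.unique(I):
            g = np.sum((I > u) & (J == v)) / n   # joint cumulative residual
            gi = np.mean(I > u)                  # marginal cumulative residual
            if g > 0 and gi > 0 and pj > 0:
                total += g * np.log(g / (gi * pj))
    return total

rng = np.random.default_rng(0)
a = rng.normal(size=2000)
b = rng.normal(size=2000)
print(ccre(a, a) > ccre(a, b))  # a band is most relevant to itself
```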

Motivation
As described in the introduction, the MNF transformation requires considerable computational time when dealing with high-dimensional HSI data. Additionally, MNF may miss local characteristics of the data because it focuses on global aspects by considering the whole image. A segmented minimum noise fraction (SMNF) based on a correlation matrix image was proposed to address these issues: the entire HSI is partitioned into segments using the band-to-band correlation matrix image, and MNF is then applied individually to each section, significantly reducing the computational cost of the algorithm. However, correlation is a linear similarity measure that can be used only with linear data and is inefficient with nonlinear data, so correlation is not an efficient similarity measurement tool here. On the other hand, cross cumulative residual entropy (CCRE) can measure similarity even when the data is nonlinear. As a result, the information-based CCRE measure reflects the original relevance of the data in the proposed method. Because CCRE is not restricted to a specific range, the proposed approach measures band-to-band normalised CCRE (nCCRE). nCCRE produces a two-dimensional matrix image, with which the entire HSI data set is partitioned into several segments. MNF is applied to each segment in order to extract valuable features at a low computational cost.

Contributions
For the proposed method, the main contributions are summarised as follows.
(i) We introduce an entropy-based band segmentation method, where the nCCRE matrix image effectively divides the complete HSI.
(ii) A hybrid feature reduction method is proposed to ensure both spectral and spatial attributes, including feature extraction and selection.
(iii) A normalised CCRE-based feature selection is applied to improve the quality of the chosen features by utilising the max-relevancy min-redundancy measure.
The proposed dimension reduction method encompasses three main steps: (i) a CCRE measure-based band segmentation, (ii) feature transformation through the implementation of the proposed band segmentation using classical MNF, and (iii) feature selection using normalised CCRE based on max-relevancy min-redundancy measures on the transformed features.The following section describes the proposed dimension reduction methodology in detail.

Proposed normalised CCRE (nCCRE) based band segmentation
The CCRE value is not bounded to a precise range, which makes it difficult to measure the actual relationship between the relevant image bands of the HSI (Hossain et al. 2013). Therefore, to normalise the relevancy measure, the CCRE value is mapped to the range [0, 1]; Ĉ(I, J) denotes the resulting nCCRE between the image bands I and J. For the segmentation of the entire HSI, we calculate the band-to-band nCCRE measure and create an nCCRE matrix image. For instance, Figure 1(a) plots the nCCRE matrix in image notation for the Indian Pines HSI dataset, and Figure 1(b) shows the average nCCRE within its four subgroups (four diagonal blocks). The segmentation is based on the nCCRE values that exceed a user-defined threshold; for the AVIRIS data, the threshold value is 0.6. The user-defined threshold determines the total number of segments: if the threshold increases, the total number of groups also increases, and uncorrelated bands become grouped. The experiment demonstrates that the nCCRE values of adjacent HSI image bands are higher than those of image bands further apart (Paul et al. 2015). The attached supplementary files illustrate the detailed band segmentation for the proposed method (nCCRE-based segmentation) and the baseline approaches SMNF and SSMNF in Tables S1, S2 and S3, respectively.
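A minimal sketch of the threshold-based segmentation: given a band-to-band similarity matrix (the nCCRE matrix image in the paper), contiguous segments are cut wherever the similarity between adjacent bands drops below the user-defined threshold. The adjacent-band edge rule below is a simplification of the diagonal search described in the text.

```python
import numpy as np

def segment_bands(sim, threshold=0.6):
    """Split p bands into contiguous segments where adjacent-band similarity
    (e.g. nCCRE) stays above a user-defined threshold."""
    p = sim.shape[0]
    boundaries = [0]
    for b in range(1, p):
        if sim[b - 1, b] < threshold:  # edge along the matrix diagonal
            boundaries.append(b)
    boundaries.append(p)
    return [list(range(boundaries[i], boundaries[i + 1]))
            for i in range(len(boundaries) - 1)]

# Toy similarity matrix with two highly similar diagonal blocks
sim = np.full((6, 6), 0.2)
sim[:3, :3] = 0.9
sim[3:, 3:] = 0.9
print(segment_bands(sim, threshold=0.6))  # [[0, 1, 2], [3, 4, 5]]
```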

Proposed Segmented Minimum Noise Fraction (segMNF)
Although the traditional MNF is feasible and conveys acceptable results in extracting suitable features when the multispectral dataset has few discrete bands (Rodarmel and Shan 2002), for hyperspectral imagery, applying conventional MNF to the entire dataset may generate biased results in addition to an exponential increase in computational cost and processing time (Lixin et al. 2015). Moreover, the adjacent image bands of an HSI are highly correlated compared to the bands further apart, and the highly correlated image bands appear in segments. Furthermore, MNF extracts the HSI data considering its global characteristics and fails to extract the local information. Thus, SMNF modifies the application of conventional MNF by avoiding the low-correlated bands between the highly correlated blocks.

Algorithm 1. SegMNF
Step (i). Begin {X: the complete HSI dataset of size P × S_all, where P is the total number of spectral bands, called features, and S_all is the total number of pixels in the HSI}
Step (ii). Compute the band-to-band nCCRE matrix for generating the subgroups of X
Step (iii). Divide X into k subgroups based on the nCCRE matrix image
Step (iv). For each subgroup dataset, calculate the projection matrix through the MNF transformation
Step (v). Merge the individual projection matrices consecutively to construct the overall projection matrix

The correlation-based HSI segmentation used in SMNF may still be infeasible due to the nonlinear relationships among the bands. We therefore propose an nCCRE-based band segmentation mechanism to deal with both linear and nonlinear relationships between the image bands. The proposed segMNF extracts local data characteristics efficiently rather than relying on global HSI statistics. SegMNF also reduces the computational cost of traditional MNF, lowering the overall computational cost of HSI classification. Algorithm 1 summarises the concept of the proposed segMNF method (Figure 2).
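Algorithm 1 can be sketched as follows. This is a simplified stand-in, not the authors' code: it assumes isotropic noise, in which case the per-segment MNF reduces to a PCA of rescaled data, and it keeps a fixed number of components per segment.

```python
import numpy as np

def seg_mnf(X, segments, k_per_seg, noise_var=1.0):
    """Sketch of Algorithm 1: apply an MNF-style transform to each band
    segment and concatenate the per-segment components.

    Isotropic noise (variance `noise_var`) is assumed, so the noise-whitening
    step reduces to a simple rescaling before the PCA step.
    """
    pieces = []
    for seg in segments:
        Xs = X[:, seg] - X[:, seg].mean(0)             # centre the segment
        cov = np.cov(Xs / np.sqrt(noise_var), rowvar=False)
        evals, evecs = np.linalg.eigh(cov)
        order = np.argsort(evals)[::-1][:k_per_seg]    # top components first
        pieces.append(Xs @ evecs[:, order])
    return np.hstack(pieces)                           # merged projection output

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
Z = seg_mnf(X, segments=[[0, 1, 2, 3], [4, 5, 6, 7]], k_per_seg=2)
print(Z.shape)  # (200, 4)
```

Each small eigendecomposition replaces one large one, which is where the computational saving over full MNF comes from.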

Hybrid proposed dimension reduction using normalised CCRE based feature selection over the feature extracted data
To select the subset of relevant features, we measure the nCCRE between the new features Z_i generated from segMNF and the available training class labels C. Thus, the most informative feature is calculated and assigned to the feature subspace S (Peng et al. 2005):

V = argmax_i C(Z_i, C)   (6)

where V represents the first feature, selected for classification and assigned to S. In this way, we can sort the MNF components, and the first few components may be the informative features for classification. However, the features selected using Equation (6) may have some redundancy. The objective is to maximise the relevance and minimise the redundancy among the selected features. The (k + 1)-th feature can therefore be selected using a greedy approach and assigned to the already selected subset S_k, so the chosen model of subspace detection can be defined as

G(Z_i, k) = C(Z_i, C) − (1/k) Σ_{Z_j ∈ S_k} C(Z_i, Z_j)   (7)

However, the CCRE value in the above equation is not bounded to a specific range. Therefore, the value G(Z_i, k) is difficult to use directly, as it may be affected by the entropies of the two variables. Instead, the normalised CCRE value can be used in Equation (7); the normalised CCRE between an MNF component Z and the class label C is denoted Ĉ(Z, C) (Equation (8)). Here, we use the normalised CCRE in Equation (7), and as a result the proposed subset detection method is defined as

Ĝ(Z_i, k) = Ĉ(Z_i, C) − (1/k) Σ_{Z_j ∈ S_k} Ĉ(Z_i, Z_j)   (9)

Using Equation (9), the highest value of the difference may be negative, and the selected features would then differ from the already selected features, which is unacceptable. Therefore, Ĝ(Z_i, k) is required to be positive, i.e. Ĝ(Z_i, k) > 0, in this analysis.
It is possible to select destructive features using the method given in (9): the highest difference value may arise from two small values, in which case the chosen features are weakly associated with the target. To avoid this difficulty, a user-defined threshold T is introduced, discarding any feature Z_i with Ĉ(Z_i, C) < T. The threshold T is applied in the preprocessing step to reduce the search space for the greedy approach and to remove noisy features. The proposed hybrid feature reduction method is summarised in the following algorithm, which outputs the set S containing the selected features.
Step (i). Begin {Z: the segMNF-transformed features; C: the training class labels}
Step (ii). Calculate Ĉ(Z_i, C) and apply the threshold T to eliminate noisy features Z_i if Ĉ(Z_i, C) < T
Step (iii). Initialise the feature subspace to null, S_0 = {Φ}
Step (iv). Select the first feature Z_j using Equation (6) together with Equation (8), and set S_1 = S_0 ∪ {Z_j}
Step (v). For selecting the remaining features, do
Step (vi). Utilise Equation (9) and update S
Step (vii). Output S as the subspace of useful features
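The selection steps above can be sketched generically. The similarity function is a parameter: the paper uses nCCRE, while the toy example below substitutes absolute correlation purely so the sketch is self-contained.

```python
import numpy as np

def mrmr_select(features, labels, sim, n_select, threshold=0.0):
    """Greedy max-relevance min-redundancy selection sketch.

    `sim(a, b)` is any bounded similarity measure (nCCRE in the paper);
    features with relevance below `threshold` are discarded up front.
    """
    relevance = np.array([sim(f, labels) for f in features.T])
    candidates = [i for i in range(features.shape[1])
                  if relevance[i] >= threshold]          # drop noisy features
    selected = [max(candidates, key=lambda i: relevance[i])]
    while len(selected) < n_select:
        remaining = [i for i in candidates if i not in selected]
        def score(i):  # relevance minus mean redundancy to the chosen set
            red = np.mean([sim(features[:, i], features[:, j])
                           for j in selected])
            return relevance[i] - red
        selected.append(max(remaining, key=score))
    return selected

rng = np.random.default_rng(0)
c1, c2 = rng.normal(size=500), rng.normal(size=500)
features = np.column_stack([c1, c1.copy(), c2])  # feature 1 duplicates feature 0
labels = c1 + c2
sim = lambda a, b: abs(np.corrcoef(a, b)[0, 1])
print(mrmr_select(features, labels, sim, n_select=2))  # skips the redundant copy
```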

Remote sensing data sets
We employed two publicly available real HSI data sets, widely used for HSI classification, in the experimental analysis, covering both mixed agricultural and urban scenes. Figure S1 is a visual depiction of the two datasets. The AVIRIS Indian Pines (IP) data consist of 220 image bands captured by the NASA AVIRIS sensor in June 1992 (Williams et al. 2017). The data set comprises 145 × 145 pixels, with sixteen classes represented in the ground truth image (Soelaiman et al. 2009). Additionally, the spectral resolution of the data is 0.1 µm. We have not used "Grass/Pasture mowed" and "Oats" in the experiment due to inadequate training data. The HYDICE data consist of 191 image bands with a spatial size of 1280 × 307 pixels, collected over the Washington DC Mall by the Hyperspectral Digital Imagery Collection Experiment (HYDICE) sensor in 1995 (Huang et al. 2016). The ground truth image consists of seven classes in the scene (Huang et al. 2016). We have not used "Paths" due to inadequate training data in this analysis.

Feature extraction and feature selection results
For feature extraction, the complete HSIs are first divided into segments using the nCCRE matrix image. Then, the usual MNF method is applied to each segment. Finally, the individual projection matrices are merged for further processing. Here, the segmentation is based on the nCCRE values that exceed the user-defined threshold along the diagonal direction. For example, in the AVIRIS data, the threshold value is 0.6, and edges are searched for in the nCCRE matrix image along the diagonal direction where the mean nCCRE exceeds the threshold. The optimum threshold value was found using a trial-and-error scheme. The user-defined threshold determines the total number of segments: if the threshold increases, the total number of groups also increases, and uncorrelated bands become grouped. For the AVIRIS and HYDICE data, the detailed segmentation using the proposed method and the correlation-matrix-image-based SMNF is illustrated in Tables S1 and S2. Table S3 also depicts the region-based band segmentation of SSMNF.
After that, we applied the traditional MNF method to each segment. The MNF generates new features utilising the transformation rules, and these newly created features are used in feature selection to improve the feature subset. As explained in the previous section, feature selection is performed on the newly created features using nCCRE. There is a possibility of choosing a noisy feature using Equation (9): two weak MNF components can generate a high difference value, and the programme would then erroneously choose an unusable component that is poorly related to the ground truth image. For that reason, we used a user-defined threshold T to avoid less informative features in classification. The benefit of T is that it rejects the noisy elements in the preprocessing step, so there is less chance of selecting a noisy feature. As a result, the order of the selected features is identified.
The proposed method has been compared with traditional methods such as MNF, SMNF, SSMNF, CCRE, and nCCRE, and with hybrid methods such as MNF-CCRE, SMNF-CCRE, and SSMNF-CCRE, to analyse its robustness. For all studied and proposed techniques, the order of the ranked features is listed in Table S5. Table S3 shows the notation for all investigated and proposed methods in the attached supplementary file.

Parameter tuning for classification
We classified the HSI using the kernel SVM classifier with the proposed reduced features for the performance analysis. Here, we used the Gaussian (RBF) kernel and a 10-fold cross-validation scheme in the SVM classifier to select the best cost parameter C and kernel width γ (Hsu et al. 2003). After applying cross-validation to the proposed method, we found C = 2.2 and γ = 1.2 for the AVIRIS data and C = 2.1 and γ = 1.4 for the HYDICE data as the best parameters. For both the AVIRIS and HYDICE data, the first 10 features are selected for classification. The complete parameter tuning of the SVM classifier for the proposed and studied methods is presented in Tables S6 and S7, respectively, for the two HSI data sets.
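The parameter search can be reproduced in outline with scikit-learn; the synthetic dataset, grid values, and fold count below are illustrative, not the paper's actual data or search space.

```python
# Hedged sketch: RBF-kernel SVM with 10-fold cross-validation over C and gamma.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Stand-in for the 10 selected features of labelled HSI pixels
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)

grid = GridSearchCV(SVC(kernel="rbf"),
                    param_grid={"C": [0.5, 1.0, 2.1, 2.2],
                                "gamma": [0.5, 1.2, 1.4]},
                    cv=10)                 # 10-fold cross-validation
grid.fit(X, y)
print(grid.best_params_)
```

On the real data, the same search reportedly yields C = 2.2, γ = 1.2 (AVIRIS) and C = 2.1, γ = 1.4 (HYDICE).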
For classification, both the training and testing data are picked based on the ground-truth image (Figure S1). We used all the available samples of the AVIRIS image in this analysis: almost 12% for training and 88% for testing. On the other hand, we used a total of 5435 pixels of the HYDICE image in the experiment: almost 30% for training and 70% for testing.

Performance evaluation metrics
In this study, widely used quality indexes, i.e. the overall accuracy (OA), the average accuracy (AA), the Kappa coefficient, and the F1 score, are used to evaluate the performance of the proposed method. OA measures the percentage of all correctly classified pixels, calculated as follows:

OA = ( Σ_{i=1}^{C} A_ii / B ) × 100

In this equation, C represents the number of classes, and A represents the confusion matrix, obtained by comparing the classification map with the ground truth image. The number of samples belonging to class i and labelled as class i is represented by A_ii, whereas the total number of test samples is represented by B.
AA represents the average percentage of correctly classified pixels over the classes, calculated as follows:

AA = (1/C) Σ_{i=1}^{C} ( A_ii / Σ_{j=1}^{C} A_ij ) × 100

The Kappa coefficient estimates the percentage of correctly classified pixels corrected by the number of agreements that would be expected purely by chance.
One of the most widely used metrics for evaluating the effectiveness of a classifier is the F1 score, calculated as follows:

F1 = 2 × (Precision × Recall) / (Precision + Recall)

Here, Precision and Recall are calculated as

Precision = TP / (TP + FP),  Recall = TP / (TP + FN)

where TP, FP, and FN denote true positives, false positives, and false negatives, respectively. We calculated the precision and recall for multiclass classification.
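All four metrics can be computed directly from a confusion matrix, as in this sketch. The example matrix is made up, and macro-averaging over classes is assumed for the multiclass F1.

```python
import numpy as np

def classification_metrics(A):
    """OA, AA, Kappa, and macro F1 from a confusion matrix A (rows = true class)."""
    B = A.sum()                                          # total test samples
    oa = np.trace(A) / B                                 # overall accuracy
    recall = np.diag(A) / A.sum(axis=1)                  # per-class accuracy
    aa = recall.mean()                                   # average accuracy
    pe = (A.sum(axis=0) * A.sum(axis=1)).sum() / B**2    # chance agreement
    kappa = (oa - pe) / (1 - pe)
    precision = np.diag(A) / A.sum(axis=0)
    f1 = np.mean(2 * precision * recall / (precision + recall))  # macro F1
    return oa, aa, kappa, f1

# Hypothetical 3-class confusion matrix
A = np.array([[50, 2, 0], [3, 45, 2], [0, 5, 43]])
oa, aa, kappa, f1 = classification_metrics(A)
print(round(oa, 3))  # 0.92
```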

Overall classification accuracy
We assessed the robustness of the proposed approach in terms of the overall classification accuracy, applying the selected features (Table S5) to measure it. The robustness of the proposed method is assessed by comparing it with the studied MNF, SMNF, SSMNF, CCRE, and nCCRE methods, as well as hybrid methods such as MNF-CCRE, SMNF-CCRE, and SSMNF-CCRE. The overall classification accuracy of the proposed and studied techniques is depicted in Figure 3 for the two data sets as a function of the number of ranked features. Using the first 10 features of the original data without feature reduction, the overall classification accuracy of the AVIRIS data is 65.98%, which motivates the feature reduction method for the classification task. The overall classification accuracy of the conventional and hybrid methods using the first 10 features is measured and listed in Table 1. The proposed segMNF-nCCRE method achieves a classification accuracy of 95.66%, which is higher than the studied techniques.
For the HYDICE data set, the overall classification accuracy of the conventional and hybrid studied methods using the first 10 features is also measured and listed in Table 2. The proposed segMNF-nCCRE method achieves a classification accuracy of 98.56%, again higher than the studied techniques. We also measured the other classification performance parameters, reported in Table 2, which show that the proposed segMNF-nCCRE method outperforms the studied methods on all performance measures.
For a more thorough performance evaluation, the proposed method is evaluated with varying proportions of training samples, 10% and 30%, for each dataset. Table S8 shows the OA, AA, and Kappa of the proposed and studied methods under the different classification conditions. The table shows that the OA, AA, and Kappa values of all dimension reduction methods improve as the proportion of training samples increases. The highest OA value under each classification condition is marked in bold, and the proposed methods again outperform the studied methods.

Feature space analysis
The robustness of the proposed method is also evaluated using feature space analysis.Figure 4 shows the feature space of AVIRIS data using the standard MNF, SMNF, SSMNF, segMNF, segMNF-CCRE, and the proposed method.
For ease of visualisation, only eight classes are shown in the feature space. From Figure 4(a), we can see that almost all class labels overlap, whereas Figures 4(b) and 4(c) show some overlap between the class labels. On the other hand, for the proposed method depicted in Figures 4(d) and 4(e), the classes are more separable than in the studied approaches. Additionally, Figure 4(f) shows the benefit of feature selection over the extracted data, with classes more separable than in Figures 4(d) and 4(e).
Figure 5 also shows the feature space of the conventional MNF, SMNF, SSMNF, segMNF, segMNF-CCRE, and proposed methods of the HYDICE data.The result also demonstrates that the proposed feature reduction approach can separate the classes more efficiently than the studied techniques.

Computational time
We calculated the total computational time to measure the efficiency of the segMNF method compared with the classical MNF method (Table 3). The proposed method was tested on a personal computer equipped with an Intel Core i5 3.2 GHz CPU and 8 GB of RAM, running the Microsoft Windows 10 operating system. We found an improvement in computational cost for both datasets, which implies that the proposed method is more efficient than the studied technique.

Conclusion
Since HSI is a high-dimensional data cube, feature reduction is necessary for efficient classification, including decreasing the total computational time. We have improved the SMNF method by using the nCCRE matrix image instead of the correlation matrix image. After partitioning the complete HSI using the nCCRE matrix image, MNF is applied for efficient feature extraction; in this way, MNF extracts the local characteristics of the HSI data and the total computational cost is reduced. The nCCRE matrix image improves data partitioning and is also used to select the optimal set of features after feature extraction. The experimental analysis shows that the nCCRE measure enhances the quality of the chosen features through a minimum-redundancy scheme. The classification performance and other assessments demonstrate the improvement of the proposed method over the studied techniques.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Figure 2 .
Figure 2. The working architectural diagram of the proposed segMNF, which is segmented using a normalised CCRE matrix image.