CRCOED: Collaborative Representation-Based Classification using Odd Even Decomposition for Hyperspectral Remote Sensing Imagery



Procedia Computer Science 143 (2018) 458–465

8th International Conference on Advances in Computing and Communication (ICACC-2018)

Monika Sharma*, Mantosh Biswas

Department of Computer Engineering, National Institute of Technology Kurukshetra, Haryana, 136119, India

* Corresponding author. Tel.: +91-8527463456. E-mail address: [email protected]

Abstract

In the modern era, exploiting texture data is of great interest for the classification of hyperspectral imagery (HSI). However, it is a very challenging and time-consuming task to obtain appropriate training samples that represent the most discriminative features. This paper proposes an improved collaborative representation-based classification that uses the odd-even decomposition theorem, termed CRCOED. The proposed method augments the training samples by means of odd-even decomposition. In addition, every test sample is characterized as a linear combination of the training samples from the whole training set; the sample is then reconstructed with the related contribution from each class. The main purpose of the proposed method is to produce a parity-symmetric representation of the sample for accurate and robust classification. Experimental results on several HSI data sets reveal that this technique is more effective than recent methods.

© 2018 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/). Selection and peer-review under responsibility of the scientific committee of the 8th International Conference on Advances in Computing and Communication (ICACC-2018).
doi: 10.1016/j.procs.2018.10.418

Keywords: Hyperspectral Image Classification; Collaborative Representation Classification; Sparse Representation Classification

1. Introduction

Hyperspectral images are acquired by spaceborne or airborne hyperspectral sensors and generally comprise hundreds or thousands of spectral bands [1-2]. Every spectral band signifies a fine wavelength interval of the electromagnetic spectrum.



In recent times, the highly informative band information contained in hyperspectral imagery has made HSI a hot topic that has assisted a number of application fields [3-4] (e.g., agriculture, biomedicine, process monitoring and quality control, military, and environmental safety). Image classification is one of the most challenging tasks in HSI: depending on the spatial and spectral characteristics, each pixel of a hyperspectral image is assigned a unique label so that the scene is characterized into numerous land-cover classes. However, designing a novel HSI classification algorithm is challenging because of several issues, such as the limited number of available training samples and the high dimensionality of HSI data (hundreds or thousands of spectral bands), which usually gives rise to the curse of dimensionality, i.e. the Hughes phenomenon [5]. Many HSI classification techniques merely utilize spectral signatures while disregarding the many useful spatial signatures at adjacent positions; their performance can be further enhanced by also exploiting spatial signatures. Hence, the HSI classification problem must not only focus on the spectral signatures but should also take into account the spatial signatures of neighboring pixels.

Several algorithms have been suggested to tackle the HSI classification problem, including K-nearest neighbors [6], neural networks (NN) [7], sparse representation-based classification (SRC) [8], random forest [9], support vector machines [10], and so on. Even so, designing an efficient HSI classification technique is still a challenging task due to several unsettled aspects, such as the very high dimensionality of the spectral signatures and the inadequate quantity of training samples. The high dimensionality of HSI data and the limited quantity of labeled samples frequently introduce the curse of dimensionality, also known as the Hughes phenomenon [5]. Hence, the small size of the training set is one essential issue that seriously degrades the performance of existing state-of-the-art HSI classification techniques.

Recently, representation-based classifiers [11-14] have been applied in various areas, such as human action recognition, face recognition, and hyperspectral remote sensing. The main idea behind sparse representation-based classification is that a test sample is represented as a linear combination of all training vectors subject to a sparseness constraint; the representation is recovered by L1-norm minimization, and the final label is assigned to the class that yields the smallest reconstruction error. Collaborative representation-based classification [15-17] has also been investigated for hyperspectral remote sensing imagery. For HSI classification, a collaborative representation-based classifier (CRC) entitled nearest regularized subspace [15] was proposed. CRC entails that the collaborative coefficients of certain training samples are near zero, and the l2-norm is used to measure the representation coefficients. The basic idea behind the word "collaboration" is that all atoms in the dictionary collaboratively represent a particular pixel. CRC and SRC are widely used because they are quite easy to implement and make no assumption about the data density distribution.
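As a concrete illustration of this reconstruction-error decision rule, a minimal CRC-style classifier in the spirit of [15, 24] can be sketched as follows; the helper name, the regularization weight lam, and the column layout of the dictionary D are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def crc_classify(D, class_ids, y, lam=1e-2):
    """Basic collaborative-representation rule: represent the test pixel y over
    the whole dictionary D with an l2-regularized least-squares fit, then assign
    the class whose columns give the smallest reconstruction error."""
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    residuals = {c: np.linalg.norm(y - D[:, class_ids == c] @ alpha[class_ids == c])
                 for c in np.unique(class_ids)}
    return min(residuals, key=residuals.get)
```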
However, in this basic form these classifiers ignore the spatial information and consider only the spectral signature. In this paper, we propose an improved collaborative representation-based classification using an extended dictionary for HSI. Experimental results on two real HSI data sets validate that the proposed classification technique is more effective than existing techniques.

The remainder of this article is organized as follows. Related work is discussed in Section 2. Section 3 presents the proposed technique in detail. The two HSI data sets and the experimental results are presented in Section 4. Finally, Section 5 concludes the paper with a summary of the proposed work.

2. Related Work

In general, the spatial distribution of surface materials is locally consistent and carries rich structural information that can be combined with spectral information to enhance the performance of HSI classification algorithms. In recent spectral-spatial HSI classification techniques, the key goal is to retrieve valuable spatial information that efficiently reveals the spatial signatures of the materials. In the recent state of the art, it is generally observed that spatial features have been given lower priority than spectral information for HSI classification, and numerous spatial structural descriptors have been designed to exploit the spatial information. In [22], the authors modeled the spatial information by averaging the values of neighboring pixels in small windows for all the


extracted bands. Such information was then merged with the spectral information, and hence the classification accuracy was better than that of purely spectral techniques. In [23], the authors introduced an effective image enhancement technique for exploiting the spatial information, which further uses a graph-based framework to combine this information with the spectral features. On the other hand, some of the most significant findings rely on the joint representation model (JRM), the key goal of which is to exploit spatial-spectral features by treating the test sample as a group of its nearest pixels in the image (including the test pixel itself). Inspired by sparse representations, the authors in [16] used a joint sparse model to include the spatial features, wherein neighboring pixels were simultaneously represented by the training pixels and the coefficient matrix was solved with the Simultaneous Orthogonal Matching Pursuit (SOMP) scheme. In [24], the authors showed that the collaborative representation classifier attains comparable performance with considerably lower complexity than conventional sparse-representation techniques. Inspired by such outcomes, Li et al. [25] presented a nonlocal joint collaborative representation technique for the HSI classification problem, wherein a locally adaptive dictionary learning technique was used to prune the training samples. In [26], the authors introduced a joint within-class collaborative representation (JCR) framework, wherein neighboring pixels were linearly estimated from the labeled training samples, thus attaining improved performance in comparison with the conventional nearest regularized subspace technique.

3. Proposed Method

Conventional representation-based classification methods, including CRC and SRC, have achieved good classification performance; however, they perform poorly when samples of the same class differ greatly from one another. A training sample randomly chosen from a data set is a particular instance of its class and cannot fully reflect the statistical variation within that class. Even when the training samples are informative and useful, this leads to uncertainty in the data sets, which prompts us to build a collaborative representation framework based on parity symmetry.

Some notation is first introduced to keep this article self-contained; it is used throughout the following subsections. Consider a data set Z with C classes, m training samples and n test samples. The m training samples are denoted by the m-column matrix A = [a_1, a_2, ..., a_m] and the n test samples by the n-column matrix B = [b_1, b_2, ..., b_n]. According to the odd-even decomposition theorem, f(x) = f_e(x) + f_o(x); the symmetrical transformation of f(x) is f_r(x), and assuming f_r(x) = f(-x) gives f_o = (f - f_r)/2 and f_e = (f + f_r)/2. Applied to the training data, the original training matrix A can be decomposed as A = A_e + A_o, where A_e = (A + A_r)/2 is the even image matrix and A_o = (A - A_r)/2 is the odd image matrix; the original m-column matrix A and its symmetrically transformed (column-reversed) matrix A_r are given in equations (1) and (2). Every test sample in B is then approximately represented by a combination of the odd and even training matrices, so the odd-even decomposition doubles the training set to a 2m-column vector matrix.

A = \begin{bmatrix} A[1,1] & A[1,2] & \cdots & A[1,m] \\ \vdots & \vdots & & \vdots \\ A[r,1] & A[r,2] & \cdots & A[r,m] \end{bmatrix}    (1)

A_r = \begin{bmatrix} A[1,m] & A[1,m-1] & \cdots & A[1,1] \\ \vdots & \vdots & & \vdots \\ A[r,m] & A[r,m-1] & \cdots & A[r,1] \end{bmatrix}    (2)
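As a concrete illustration, the following minimal NumPy sketch builds the even and odd matrices of equations (1) and (2) and stacks them into the doubled (2m-column) dictionary described above. The function names, the assumption that samples are stored as columns, and the use of NumPy are ours, not the paper's.

```python
import numpy as np

def odd_even_decompose(A):
    """Split a training matrix A (rows = bands, columns = samples) into its
    even and odd parts, A = Ae + Ao, using the column-reversed matrix Ar."""
    Ar = A[:, ::-1]          # symmetrically transformed matrix, Eq. (2)
    Ae = (A + Ar) / 2.0      # even image matrix
    Ao = (A - Ar) / 2.0      # odd image matrix
    return Ae, Ao

def extended_dictionary(A):
    """Stack the even and odd parts side by side, doubling the m training
    columns to a 2m-column dictionary (an illustrative sketch)."""
    Ae, Ao = odd_even_decompose(A)
    return np.hstack([Ae, Ao])
```

By construction A_e + A_o reproduces A exactly, so no spectral information is lost by the decomposition.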

Although the training samples offered by a data set are all available for classification, every training sample has a different influence on classification performance, and with excessive samples some of them may become isolated: samples from unrelated classes that make strong contributions to the representation of a test sample lead to misclassification. Hence, such samples, which differ considerably from the test sample, need to be discarded, and the remaining samples are used to perform robust and accurate classification. An iterative elimination procedure therefore removes redundant samples until the predefined number of remaining classes is reached. In each iteration the distance between the test sample and the different classes is determined, after which the posterior probability of the test sample is calculated and used to improve the classification result. Figure 1 illustrates the proposed classification algorithm. The collaborative coefficient α identifies the contribution of the ith training sample to the reconstruction of a test sample B. The posterior probability of the testing sample, P(a_i | B), is calculated using

P(a_i \mid B) \propto \frac{dis_i}{\sum_{j=1}^{C} dis_j}    (3)

Algorithm:
1. Initialize the training samples A = A_e + A_o, where A_e and A_o denote the even and odd training matrices.
2. Calculate the collaborative coefficient α = (A^T A + µI)^{-1} A^T B.
3. Calculate the residual of every class: u_c = ||b - A_c α_c||_2^2 / ||α_c||_2^2.
4. Remove the class with the highest deviation from the testing sample, given by arg max_c {u_c}.
5. Update the training matrix A as A - A_c and repeat from step 2 until the predefined termination condition is satisfied.
6. Find the K nearest training samples from the remaining classes and then perform the classification.
7. Assuming that the neighbors of the cth class (c = 1, 2, ..., C) are A_s, ..., A_t, reconstruct the sample g_c = β_s A_s + ... + β_t A_t.
8. Compute the deviation of the reconstructed g_c: D_c = ||b - g_c||, where c = 1, 2, ..., C.
9. Estimate the label of b as Estimate(b) = arg min_c {D_c}.

Fig. 1. Algorithm of the proposed CRCOED classification.

Here, a_i denotes the event that test sample B belongs to the ith class of the extended training set, P(a_i | B) represents the probability of the ith class with respect to the testing sample B, dis_i denotes the Euclidean distance between the testing sample and the mean of the ith class, and C is the number of classes. Clearly, a class that is extremely distant from the testing sample has a small posterior probability; we therefore set P(a_i | B) to zero for such classes, as they have no impact on the classification.
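The decision rule of Fig. 1 can be summarized in the following minimal sketch, assuming that A is already the extended (even plus odd) dictionary with one class label per column. The parameter names mu, keep_classes and k, the termination rule, the Euclidean neighbour selection, and the least-squares fit for the β coefficients are our assumptions where the paper leaves these details unspecified.

```python
import numpy as np

def crcoed_label(A, b, class_ids, mu=1e-2, keep_classes=3, k=5):
    """Sketch of the CRCOED decision rule (Fig. 1). A: (bands x 2m) extended
    dictionary, b: (bands,) test pixel, class_ids: (2m,) class of each column.
    mu, keep_classes and k are illustrative choices, not values from the paper."""
    classes = list(np.unique(class_ids))
    while len(classes) > keep_classes:
        # Step 2: collaborative coefficients, alpha = (A^T A + mu*I)^-1 A^T b
        cols = np.isin(class_ids, classes)
        Ac, ids = A[:, cols], class_ids[cols]
        alpha = np.linalg.solve(Ac.T @ Ac + mu * np.eye(Ac.shape[1]), Ac.T @ b)
        # Step 3: regularized residual u_c of every remaining class
        resid = {c: np.linalg.norm(b - Ac[:, ids == c] @ alpha[ids == c]) ** 2
                    / (np.linalg.norm(alpha[ids == c]) ** 2 + 1e-12)
                 for c in classes}
        # Steps 4-5: drop the class with the largest residual, then repeat
        classes.remove(max(resid, key=resid.get))
    # Steps 6-9: reconstruct b from the k nearest columns of each surviving
    # class and assign the class with the smallest deviation D_c
    best, best_dev = None, np.inf
    for c in classes:
        Nc = A[:, class_ids == c]
        nearest = Nc[:, np.argsort(np.linalg.norm(Nc - b[:, None], axis=0))[:k]]
        beta = np.linalg.lstsq(nearest, b, rcond=None)[0]
        dev = np.linalg.norm(b - nearest @ beta)
        if dev < best_dev:
            best, best_dev = c, dev
    return best
```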

4. Experiments and Analysis

For the experimental analysis we used two hyperspectral data sets [18]. The first data set was acquired by the Reflective Optics System Imaging Spectrometer (ROSIS) sensor.


The image covers the University of Pavia, Italy, with a size of 610 × 340 × 103, where 610 × 340 is the spatial extent and 103 is the number of spectral bands. It has a spectral coverage ranging from 0.53 to 0.96 µm and a spatial resolution of 1.3 m. The Pavia University data set has 42,776 labeled samples in nine classes, referenced via the ground truth map. Fig. 2 (a) and (b) show the Pavia University ground truth and the Pavia University image in band 90, respectively. Additional information on the number of labeled samples is summarized in Table 1.


Fig. 2. (a) Pavia University Ground Truth; (b) Pavia University image in band 90; (c) Indian Pines Ground Truth; (d) Indian Pines image in band 65.

Table 1. Nine classes and their available labeled samples for the University of Pavia data set

#   Class                   Samples
1   Asphalt                 6631
2   Meadows                 18649
3   Gravel                  2099
4   Trees                   3064
5   Painted Metal Sheets    1345
6   Bare Soil               5029
7   Bitumen                 1330
8   Self-Blocking Bricks    3682
9   Shadows                 947
    Total                   42776

The second hyperspectral data set used in the experiments was collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over the Indian Pines test site in northwest Indiana. The Indian Pines image is a classification scene of 145 × 145 pixels and 220 spectral bands covering the 0.4 to 2.45 µm region of the visible and infrared spectrum with a spatial resolution of 20 m. The Indian Pines data set has 10,249 labeled samples, the scene is classified into 16 thematic land-cover classes, and the number of labeled samples per class is shown in Table 2. Fig. 2 (c) and (d) show the Indian Pines ground truth and the Indian Pines image in band 65, respectively.

Experimental results for the University of Pavia data set and the Indian Pines data set are presented in Tables 3 and 4, which give the overall classification accuracy and the kappa coefficient for the different classification methods. The classification results obtained with CRCOED are considerably better than those of CRT [19], SRC [20] and CRC [21]. Fig. 3 illustrates the impact of different training-set sizes for the Pavia University and Indian Pines data sets: for the Pavia University data set the training ratio is varied from 0.06 to 0.10, and for the Indian Pines data set from 0.10 to 0.16. The overall accuracy and kappa index obtained for the University of Pavia are 95.08% and 0.9528, respectively, which is better than CRT, CRC and SRC, as shown in Table 3. For the Indian Pines data set the overall accuracy and kappa index are 93.85% and 0.9395, again better than the existing state-of-the-art techniques, as shown in Table 4.
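The overall accuracy (OA) and kappa index reported in Tables 3 and 4 follow the standard confusion-matrix definitions; the small helper below (our own sketch, not code from the paper) shows how both can be computed from true and predicted label vectors.

```python
import numpy as np

def overall_accuracy_and_kappa(y_true, y_pred):
    """Compute overall accuracy and Cohen's kappa from label vectors."""
    labels = np.unique(np.concatenate([y_true, y_pred]))
    index = {c: i for i, c in enumerate(labels)}
    cm = np.zeros((labels.size, labels.size), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[index[t], index[p]] += 1                      # confusion matrix
    n = cm.sum()
    oa = np.trace(cm) / n                                # observed agreement
    pe = (cm.sum(axis=1) @ cm.sum(axis=0)) / n ** 2      # chance agreement
    return oa, (oa - pe) / (1 - pe)
```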


Table 2. Sixteen classes and their available labeled samples for the Indian Pines data set

#    Class                          Samples
1    Alfalfa                        46
2    Corn-notill                    1428
3    Corn-mintill                   830
4    Corn                           237
5    Grass-pasture                  483
6    Grass-trees                    730
7    Grass-pasture-mowed            28
8    Hay-windrowed                  478
9    Oats                           20
10   Soybean-notill                 972
11   Soybean-mintill                2455
12   Soybean-clean                  593
13   Wheat                          205
14   Woods                          1265
15   Buildings-Grass-Trees-Drives   386
16   Stone-Steel-Towers             93
     Total                          10249

Table 3. Classification accuracy (%) per class, overall accuracy (OA) and kappa index of the different classification methods for the University of Pavia data set

#   Class                   CRT     SRC     CRC     CRCOED
1   Asphalt                 75.49   88.45   90.02   95.41
2   Meadows                 79.81   87.28   79.85   96.55
3   Gravel                  77.54   71.56   90.76   92.85
4   Trees                   80.51   79.64   86.42   95.41
5   Painted Metal Sheets    62.52   80.52   89.85   97.26
6   Bare Soil               76.75   88.95   85.75   92.85
7   Bitumen                 72.25   79.80   92.54   93.86
8   Self-Blocking Bricks    75.41   79.85   86.98   95.25
9   Shadows                 71.85   85.65   93.10   96.35
    OA                      74.68   82.41   88.36   95.08
    K-index                 0.7532  0.8453  0.8958  0.9528

Table 4. Classification accuracy (%) per class, overall accuracy (OA) and kappa index of the different classification methods for the Indian Pines data set

#    Class                          CRT     SRC     CRC     CRCOED
1    Alfalfa                        76.67   81.78   89.82   95.35
2    Corn-notill                    77.09   80.67   85.30   95.20
3    Corn-mintill                   71.12   81.56   89.88   94.52
4    Corn                           72.30   83.80   87.34   93.50
5    Grass-pasture                  75.87   85.47   89.53   92.83
6    Grass-trees                    80.54   78.98   85.29   95.73
7    Grass-pasture-mowed            84.31   82.36   90.00   92.15
8    Hay-windrowed                  84.50   85.64   91.31   95.08
9    Oats                           77.65   80.54   86.75   92.67
10   Soybean-notill                 72.56   82.90   85.61   90.78
11   Soybean-mintill                75.53   86.50   93.26   91.56
12   Soybean-clean                  79.36   81.57   92.18   93.46
13   Wheat                          73.91   79.75   89.38   92.30
14   Woods                          81.75   82.59   89.74   95.56
15   Buildings-Grass-Trees-Drives   76.59   72.48   90.43   96.73
16   Stone-Steel-Towers             78.55   77.54   84.85   94.18
     OA                             77.39   81.50   88.79   93.85
     K-index                        0.7809  0.8264  0.8902  0.9395

Fig. 3. Overall accuracy (%) and kappa versus the ratio of training samples to total labeled samples for CRT, SRC, CRC and CRCOED: (a) Pavia University overall accuracy; (b) Pavia University kappa; (c) Indian Pines overall accuracy; (d) Indian Pines kappa.

5. Concluding Remarks

In this article, we proposed an improved CRC method based on the odd-even decomposition theorem, which offers a strong collaborative representation for hyperspectral image classification. The strength of the method is that it effectively makes use of a parity-symmetric scheme. The experimental outcomes confirm that CRCOED achieves higher overall classification accuracy and kappa index than CRT, SRC and CRC, and that the representations produced by our technique are effective for sensing and retrieving the spatial features of HSI. The evaluation on these HSI data sets shows that the proposed framework is more effective than recent HSI classification approaches.

References

[1] Kramer, H. J. (2002). Observation of the Earth and its Environment: Survey of Missions and Sensors. Springer Science & Business Media.
[2] Campbell, J. B., & Wynne, R. H. (2011). Introduction to Remote Sensing. Guilford Press.
[3] Bioucas-Dias, J. M., Plaza, A., Camps-Valls, G., Scheunders, P., Nasrabadi, N., & Chanussot, J. (2013). Hyperspectral remote sensing data analysis and future challenges. IEEE Geoscience and Remote Sensing Magazine, 1(2), 6-36.
[4] Du, Q., Zhang, L., Zhang, B., Tong, X., Du, P., & Chanussot, J. (2013). Foreword to the special issue on hyperspectral remote sensing: Theory, methods, and applications. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 6(2), 459-465.
[5] Hughes, G. (1968). On the mean accuracy of statistical pattern recognizers. IEEE Transactions on Information Theory, 14(1), 55-63.
[6] Ma, L., Crawford, M. M., & Tian, J. (2010). Local manifold learning-based k-nearest-neighbor for hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing, 48(11), 4099-4109.
[7] Tang, J., Deng, C., & Huang, G. B. (2016). Extreme learning machine for multilayer perceptron. IEEE Transactions on Neural Networks and Learning Systems, 27(4), 809-821.
[8] Sun, W., Yang, G., Du, B., Zhang, L., & Zhang, L. (2017). A sparse and low-rank near-isometric linear embedding method for feature extraction in hyperspectral imagery classification. IEEE Transactions on Geoscience and Remote Sensing, 55(7), 4032-4046.
[9] Ham, J., Chen, Y., Crawford, M. M., & Ghosh, J. (2005). Investigation of the random forest framework for classification of hyperspectral data. IEEE Transactions on Geoscience and Remote Sensing, 43(3), 492-501.
[10] Chang, C. C., & Lin, C. J. (2011). LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3), 27.


[11] Ul Haq, Q. S., Tao, L., Sun, F., & Yang, S. (2012). A fast and robust sparse approach for hyperspectral data classification using a few labeled samples. IEEE Transactions on Geoscience and Remote Sensing, 50(6), 2287-2302.
[12] Wang, J., Lu, C., Wang, M., Li, P., Yan, S., & Hu, X. (2014). Robust face recognition via adaptive sparse representation. IEEE Transactions on Cybernetics, 44(12), 2368-2378.
[13] Wright, J., Yang, A. Y., Ganesh, A., Sastry, S. S., & Ma, Y. (2009). Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2), 210-227.
[14] Shrivastava, A., Patel, V. M., & Chellappa, R. (2014). Multiple kernel learning for sparse representation-based classification. IEEE Transactions on Image Processing, 23(7), 3013-3024.
[15] Li, W., Tramel, E. W., Prasad, S., & Fowler, J. E. (2014). Nearest regularized subspace for hyperspectral classification. IEEE Transactions on Geoscience and Remote Sensing, 52(1), 477-489.
[16] Chen, Y., Nasrabadi, N. M., & Tran, T. D. (2011). Hyperspectral image classification using dictionary-based sparse representation. IEEE Transactions on Geoscience and Remote Sensing, 49(10), 3973-3985.
[17] Iordache, M. D., Bioucas-Dias, J. M., & Plaza, A. (2014). Collaborative sparse regression for hyperspectral unmixing. IEEE Transactions on Geoscience and Remote Sensing, 52(1), 341-354.
[18] Hyperspectral Remote Sensing Scenes. Available at: http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes
[19] Li, W., Du, Q., & Xiong, M. (2015). Kernel collaborative representation with Tikhonov regularization for hyperspectral image classification. IEEE Geoscience and Remote Sensing Letters, 12(1), 48-52.
[20] Li, W., Du, Q., Zhang, F., & Hu, W. (2015). Collaborative-representation-based nearest neighbor classifier for hyperspectral imagery. IEEE Geoscience and Remote Sensing Letters, 12(2), 389-393.
[21] Ma, H., Gou, J., Wang, X., Ke, J., & Zeng, S. (2017). Sparse coefficient-based k-nearest neighbor classification. IEEE Access, 5, 16618-16634.
[22] Camps-Valls, G., Gomez-Chova, L., Muñoz-Marí, J., Vila-Francés, J., & Calpe-Maravilla, J. (2006). Composite kernels for hyperspectral image classification. IEEE Geoscience and Remote Sensing Letters, 3(1), 93-97.
[23] Velasco-Forero, S., & Manian, V. (2009). Improving hyperspectral image classification using spatial preprocessing. IEEE Geoscience and Remote Sensing Letters, 6(2), 297-301.
[24] Zhang, L., Yang, M., & Feng, X. (2011). Sparse representation or collaborative representation: Which helps face recognition? In Proceedings of the 2011 IEEE International Conference on Computer Vision (ICCV) (pp. 471-478). IEEE.
[25] Li, J., Zhang, H., Huang, Y., & Zhang, L. (2014). Hyperspectral image classification by nonlocal joint collaborative representation with a locally adaptive dictionary. IEEE Transactions on Geoscience and Remote Sensing, 52(6), 3707-3719.
[26] Li, W., & Du, Q. (2014). Joint within-class collaborative representation for hyperspectral image classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 7(6), 2200-2208.