<!DOCTYPE html>
<html lang="en">

<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>MDL4OW Open-Set Hyperspectral Image Classification. Few-shot Hyperspectral Image Classification With Unknown Classes Using Multitask Deep Learning.</title>
<link rel="stylesheet" href="https://skrisliu.com/css/font.css">
<link rel="stylesheet" href="https://skrisliu.com/css/style.css">
<style>
.highlight2 {
  padding: 1rem;
  background-color: #e5e7eb;
}
</style>
</head>
18+
<body>

<div class="content">

<h2 class="content-title">
MDL4OW: Few-shot Hyperspectral Image Classification With Unknown Classes Using Multitask Deep Learning
</h2>

<h4>Open-Set Hyperspectral Image Classification</h4>
29+
<p class="content-meta">Source code and annotations for:</p>
<p class="highlight2">Shengjie Liu, Qian Shi, and Liangpei Zhang. Few-shot Hyperspectral Image Classification With Unknown Classes Using Multitask Deep Learning. IEEE TGRS, 2020. <a href="https://doi.org/10.1109/TGRS.2020.3018879" target="_blank">doi:10.1109/TGRS.2020.3018879</a></p>

<p class="content-meta">Contact: skrisliu AT gmail.com</p>
36+
<p class="content-meta" style="font-size: 1.1em; text-align: left; margin: 1.5em 0;">
Code and annotations are released here; they are also available at <a href="https://github.com/skrisliu/MDL4OW" target="_blank">https://github.com/skrisliu/MDL4OW</a>.
</p>
41+
<hr>
<h2>Overview</h2>
<h3>Ordinary: roads, houses, helicopters, and trucks are misclassified</h3>
<p>
Below is an ordinary closed-set classification. If you are familiar with these hyperspectral scenes, you will notice that some materials are not represented in the training samples. For example, in the upper image (Salinas Valley), the roads and houses between the farmlands cannot be assigned to any of the known classes. Still, a conventional deep learning model has to pick one of the known labels, because it is never taught to identify unknown instances.
</p>
52+
<p>
<a href="im/mdl4ow1.png" target="_blank">
<img src="im/mdl4ow1.png" alt="ordinary classification" width="50%">
</a>
</p>
58+
<h3>What we do: mask out the unknown in black</h3>
<p>
Using multitask deep learning, we give the model the ability to identify the unknown: such pixels are masked in black.<br>
In the upper image (Salinas Valley), the roads and houses between the farmlands are successfully identified.<br>
In the lower image (University of Pavia campus), the helicopters and trucks are successfully identified.
</p>
67+
<p>
<a href="im/mdl4ow2.png" target="_blank">
<img src="im/mdl4ow2.png" alt="MDL4OW result" width="50%">
</a>
</p>
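
<p>
Conceptually, the masking step amounts to the minimal numpy sketch below; the array names and the random stand-in data are illustrative, not the repository's actual variables.
</p>
<pre class="highlight2">
import numpy as np

# Illustrative stand-ins: a predicted class map, and a boolean mask that is
# True wherever the open-set model rejects a pixel as unknown.
labels = np.random.randint(0, 16, size=(512, 217))   # e.g. Salinas: 16 classes
unknown_mask = np.random.rand(512, 217) > 0.9

colormap = np.random.randint(0, 256, size=(16, 3), dtype=np.uint8)
rgb = colormap[labels]        # (imx, imy, 3) color-coded label map
rgb[unknown_mask] = 0         # mask the unknown pixels in black
</pre>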
73+
<hr>
<h3>Key packages</h3>
<pre class="highlight2">
tensorflow-gpu==1.9
keras==2.1.6
libmr
</pre>
<p>Tested on Python 3.6, Windows 10.</p>
<p>Anaconda and Spyder are recommended.</p>
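
<p>
One possible environment setup, assuming Anaconda and installation via pip (the pinned versions are those listed above):
</p>
<pre class="highlight2">
conda create -n mdl4ow python=3.6
conda activate mdl4ow
pip install tensorflow-gpu==1.9 keras==2.1.6 libmr
</pre>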
83+
<hr>
<h2>How to use</h2>
<h4>Hyperspectral satellite images</h4>
<p>The input image is an array of shape <code>imx * imy * channel</code>.</p>
<p>The satellite images are standard benchmark scenes, available here: <a href="http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes" target="_blank">http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes</a></p>
<p>The above data are in MATLAB format; a numpy version (recommended) is available here:<br>
<a href="https://drive.google.com/file/d/1cEpTuP-trfRuphKWqKHjAaJhek5sqI3C/view?usp=sharing" target="_blank">https://drive.google.com/file/d/1cEpTuP-trfRuphKWqKHjAaJhek5sqI3C/view?usp=sharing</a>
</p>
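
<p>
A minimal loading sketch, assuming the archive contains the image cube and ground truth as <code>.npy</code> files (the file names below are an assumption; adjust them to the downloaded archive):
</p>
<pre class="highlight2">
import numpy as np

im = np.load('salinas_im.npy')   # hyperspectral cube, shape (imx, imy, channel)
gt = np.load('salinas_gt.npy')   # ground truth labels, shape (imx, imy)

print(im.shape, gt.shape)        # e.g. (512, 217, 204) and (512, 217) for Salinas
</pre>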
95+
<h4>Quick usage</h4>
<p class="highlight2">python demo_salinas.py</p>
98+
<h4>Arguments</h4>
<div class="highlight2">
<p><strong>Command-line arguments</strong> (an example invocation follows the list):</p>
<ul>
  <li>
    <code>--nos</code>: Number of training samples per class<br>
    <small>20 for few-shot learning, 200 for many-shot learning</small>
  </li>
  <li>
    <code>--key</code>: Dataset name<br>
    <small>Options: <code>'salinas'</code>, <code>'paviaU'</code>, <code>'indian'</code></small>
  </li>
  <li>
    <code>--gt</code>: Path to the ground truth file
  </li>
  <li>
    <code>--closs</code>: Classification loss weight<br>
    <small>Default: <code>50</code> (equivalent to 0.5 on a normalized scale)</small>
  </li>
  <li>
    <code>--patience</code>: Early stopping patience<br>
    <small>Stop training if the loss does not decrease for this many consecutive epochs</small>
  </li>
  <li>
    <code>--output</code>: Directory path for output files<br>
    <small>Includes the trained model, prediction probabilities, predicted labels, and reconstruction loss</small>
  </li>
  <li>
    <code>--showmap</code>: Save the classification map as an image<br>
    <small>When enabled, generates and saves the predicted label map visualization</small>
  </li>
</ul>
</div>
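
<p>
For example, a few-shot run on Salinas might look like the following (the flag names are from the list above; the values are illustrative):
</p>
<pre class="highlight2">
python demo_salinas.py --nos 20 --key salinas --closs 50 --patience 20 --output ./results --showmap
</pre>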
133+
<hr>
<h3>Evaluation code updated on 18 May 2021</h3>
<p>When using the evaluation code <code>z20210518a_readoa.py</code>, change the parameter <code>mode</code> for different settings. The inputs are the output files from the training script.</p>
138+
<h4>Mode</h4>
<div class="highlight2">
<p><strong>Mode selection</strong> (a conceptual sketch of mode 4 follows the list):</p>
<ul>
  <li><code>mode == 0</code>: Closed-set classification</li>
  <li><code>mode == 1</code>: MDL4OW (multitask deep learning for the open world, the proposed method)</li>
  <li><code>mode == 2</code>: MDL4OW/C (with calibration)</li>
  <li><code>mode == 3</code>: Closed-set with probability output</li>
  <li><code>mode == 4</code>: Softmax with threshold</li>
  <li><code>mode == 5</code>: OpenMax (open-set recognition baseline)</li>
</ul>
</div>
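
<p>
As a reference point, mode 4 (softmax with threshold) can be understood with the numpy sketch below: pixels whose maximum softmax probability falls under a threshold are declared unknown. This is a conceptual illustration with random stand-in scores, not the script's exact implementation.
</p>
<pre class="highlight2">
import numpy as np

prob = np.random.rand(512 * 217, 16)        # stand-in for the saved prediction probabilities
prob /= prob.sum(axis=1, keepdims=True)     # normalize rows to softmax-like scores

threshold = 0.5                             # illustrative rejection threshold
# Keep the argmax class where the model is confident; otherwise label as unknown (-1).
pred = np.where(prob.max(axis=1) >= threshold, prob.argmax(axis=1), -1)
</pre>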
151+
</div>

</body>

</html>