An implementation of the neural network described in "Convolution Based Spectral Partitioning Architecture for Hyperspectral Image Classification", published at the IEEE International Geoscience and Remote Sensing Symposium (IGARSS) 2019.
If you have any questions related to the paper or the code, feel free to contact Ringo Chu (ringo.chu.16@ucl.ac.uk).
This is a neural network architecture that uses 3D convolutions with partitioning along the spectral (λ) dimension to classify unlabelled pixels in hyperspectral images, achieving state-of-the-art classification performance on the labelled Indian Pines and Salinas scene datasets.
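To give a rough feel for the idea, here is a minimal, illustrative Keras sketch of a 3D CNN that splits the spectral (λ) axis into partitions, runs a small convolutional branch on each, and merges them for per-pixel classification. The partition count, layer sizes and kernel shapes are assumptions for illustration only, not the exact architecture from the paper.

# Minimal illustrative sketch (NOT the exact architecture from the paper):
# partition the spectral axis, run a small 3D-CNN branch per partition,
# then concatenate the branches and classify the centre pixel.
from tensorflow.keras import layers, models

def build_spectral_partition_cnn(patch_size=5, num_bands=200,
                                 num_partitions=4, num_classes=16):
    # Input: a spatial patch around the target pixel with all spectral bands,
    # shaped (height, width, bands, 1) for 3D convolution.
    inputs = layers.Input(shape=(patch_size, patch_size, num_bands, 1))

    bands_per_part = num_bands // num_partitions
    branches = []
    for i in range(num_partitions):
        # Slice one spectral partition along the lambda (band) axis.
        part = layers.Lambda(
            lambda x, i=i: x[:, :, :, i * bands_per_part:(i + 1) * bands_per_part, :]
        )(inputs)
        # Small 3D-CNN branch per partition (filter counts are illustrative).
        x = layers.Conv3D(8, (3, 3, 7), padding='same', activation='relu')(part)
        x = layers.Conv3D(16, (3, 3, 5), padding='same', activation='relu')(x)
        branches.append(layers.GlobalAveragePooling3D()(x))

    merged = layers.Concatenate()(branches)
    outputs = layers.Dense(num_classes, activation='softmax')(merged)
    return models.Model(inputs, outputs)

model = build_spectral_partition_cnn()
model.summary()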
We recommend creating a Python virtual environment and then issuing the command
pip install -r requirement.txt
in your command prompt/terminal to install all required dependencies.
If you use Anaconda, you can instead create the virtual environment with this command: conda env create --file SpecPatConv3D.yml
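For example, a typical pip-based setup on Linux/macOS looks like this (the environment name venv is just an example):
python3 -m venv venv
source venv/bin/activate
pip install -r requirement.txt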
The code will soon be updated to TensorFlow 2.0, and a PyTorch implementation will also be provided. Hold on tight and stay tuned!
All programs are tested under Ubuntu, macOS and Windows 10. Windows users will need to download the dataset manually, following the instructions below.
Acquire the dataset (do this step if you're using Windows)
Download the hyperspectral scenes (e.g. Indian Pines, Salinas) manually and place the downloaded files in the data directory.
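As a quick sanity check, the snippet below verifies that the files are present in data/ and readable with SciPy. The file names are assumptions based on the commonly distributed .mat versions of these scenes; adjust them to whatever you actually downloaded.

# Sanity check: confirm the dataset files exist in data/ and can be loaded.
# File names below are assumptions based on the standard .mat distributions;
# change them to match the files you downloaded.
import os
from scipy.io import loadmat

DATA_DIR = 'data'
EXPECTED = ['Indian_pines_corrected.mat', 'Indian_pines_gt.mat']

for name in EXPECTED:
    path = os.path.join(DATA_DIR, name)
    if not os.path.exists(path):
        print('Missing:', path)
        continue
    mat = loadmat(path)
    keys = [k for k in mat if not k.startswith('__')]  # skip .mat metadata keys
    print(name, '-> arrays:', keys)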
Preprocess and prepare dataset
python preprocess.py --data Indian_pines --train_ratio 0.15 --validation_ratio 0.05
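For reference, here is a minimal sketch of what a per-class 15% / 5% split of the labelled pixels could look like. It only illustrates the train/validation ratios used above; it is not the repository's preprocess.py.

# Illustrative per-class split of labelled pixels into train/validation/test
# using the ratios from the command above (15% train, 5% validation).
import numpy as np

def split_labelled_pixels(labels, train_ratio=0.15, val_ratio=0.05, seed=0):
    rng = np.random.default_rng(seed)
    train_idx, val_idx, test_idx = [], [], []
    for cls in np.unique(labels):
        if cls == 0:  # class 0 is typically the unlabelled background
            continue
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        n_train = int(len(idx) * train_ratio)
        n_val = int(len(idx) * val_ratio)
        train_idx.extend(idx[:n_train])
        val_idx.extend(idx[n_train:n_train + n_val])
        test_idx.extend(idx[n_train + n_val:])
    return np.array(train_idx), np.array(val_idx), np.array(test_idx)

# Example usage with a flattened ground-truth map:
# train, val, test = split_labelled_pixels(gt.ravel())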
Train the model
python train.py --data Indian_pines --epoch 650
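For orientation, training for 650 epochs could look roughly like the Keras-style sketch below, reusing the illustrative build_spectral_partition_cnn model defined earlier and toy data in place of the preprocessed patches. The optimizer, batch size and data shapes are assumptions, not what train.py does.

# Rough sketch of a training call; toy random data stands in for the
# preprocessed patches/labels that preprocess.py would produce.
import numpy as np

X_train = np.random.rand(64, 5, 5, 200, 1).astype('float32')  # (N, patch, patch, bands, 1)
y_train = np.random.randint(0, 16, size=64)                    # 16 class labels

model = build_spectral_partition_cnn(patch_size=5, num_bands=200, num_classes=16)
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, y_train, epochs=650, batch_size=16)  # 650 epochs, as in the command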
Evaluate the model
python evaluate.py --data Indian_pines
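Hyperspectral classification results are commonly reported as overall accuracy, average (per-class) accuracy and Cohen's kappa. The generic sketch below computes them with scikit-learn; it is not the repository's evaluate.py.

# Generic metrics commonly reported for hyperspectral image classification.
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

def report(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred)
    overall_acc = np.trace(cm) / cm.sum()            # fraction of correct pixels
    per_class_acc = np.diag(cm) / cm.sum(axis=1)     # recall per class
    print('Overall accuracy :', round(float(overall_acc), 4))
    print('Average accuracy :', round(float(per_class_acc.mean()), 4))
    print("Cohen's kappa    :", round(cohen_kappa_score(y_true, y_pred), 4))

# Example usage:
# report(y_test, model.predict(X_test).argmax(axis=1))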
If you find the paper helpful, please consider citing us. ❤
@inproceedings{igarss19chu,
  author    = {Ringo S.W. Chu and Ho-Chung Ng and Xiwen Wang and Wayne Luk},
  title     = {Convolution Based Spectral Partitioning Architecture for Hyperspectral Image Classification},
  booktitle = {{IEEE International Geoscience and Remote Sensing Symposium (IGARSS)}},
  year      = {2019}
}