Project author: ai-fast-track

Project description:
Time Series package for fastai v2
Primary language: Jupyter Notebook
Project address: git://github.com/ai-fast-track/timeseries.git
Created: 2020-02-04T18:06:18Z
Project community: https://github.com/ai-fast-track/timeseries

License: Apache License 2.0

timeseries package for fastai2

timeseries is a Timeseries Classification and Regression package for fastai2.

Open In Colab

timeseries package documentation

Installation

There are many ways to install the timeseries package. Since timeseries is built on fastai2, there are also different ways to install fastai2. We show two different ways to install them and explain the motivation behind each one.

Method 1: Editable Version

1A - Installing fastai2

Important: If you have not already installed fastai2, install it by following the steps described there.

1B - Installing timeseries on a local machine

Note: Installing an editable version of a package means installing it from its corresponding GitHub repository onto your local machine. By doing so, you can pull the latest version whenever a new version is pushed.
To install the timeseries editable package, follow the instructions below:

  git clone https://github.com/ai-fast-track/timeseries.git
  cd timeseries
  pip install -e .

Method 2: Non-Editable Version

Note: Every time you run !pip install git+https://..., you install the latest version of the package stored on GitHub.

Important: As both fastai2 and timeseries are still under development, this is an easy way to use them in Google Colab or on any other online platform. You can also use it on your local machine.

2A - Installing fastai2 from its GitHub repository

  # Run this cell to install the latest version of fastai2 shared on GitHub
  !pip install git+https://github.com/fastai/fastai2.git

  # Run this cell to install the latest version of fastcore shared on GitHub
  !pip install git+https://github.com/fastai/fastcore.git

2B - Installing timeseries from its GitHub repository

  # Run this cell to install the latest version of timeseries shared on GitHub
  !pip install git+https://github.com/ai-fast-track/timeseries.git

Usage

  %reload_ext autoreload
  %autoreload 2
  %matplotlib inline
  from fastai2.basics import *
  from timeseries.all import *

Tutorial on timeseries package for fastai2

Example: NATOPS dataset

Description

The data is generated by sensors on the hands, elbows, wrists, and thumbs, and consists of the x, y, z coordinates of each of the eight locations. The channel order is as follows:

Right Arm vs Left Arm time series for the 'Not clear' command (#3) (see picture above)


Channels (24)

Hand tip               Elbow                  Wrist                  Thumb
0. Hand tip left, X    6. Elbow left, X       12. Wrist left, X      18. Thumb left, X
1. Hand tip left, Y    7. Elbow left, Y       13. Wrist left, Y      19. Thumb left, Y
2. Hand tip left, Z    8. Elbow left, Z       14. Wrist left, Z      20. Thumb left, Z
3. Hand tip right, X   9. Elbow right, X      15. Wrist right, X     21. Thumb right, X
4. Hand tip right, Y   10. Elbow right, Y     16. Wrist right, Y     22. Thumb right, Y
5. Hand tip right, Z   11. Elbow right, Z     17. Wrist right, Z     23. Thumb right, Z
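
Since each location contributes three consecutive channels (X, Y, Z), the channel lists used with the `chs` argument later in this tutorial can be read straight off the table above; a minimal sketch:

  # Channel indices per the table above: left arm, right arm, and X-only views
  left_arm  = [0, 1, 2, 6, 7, 8, 12, 13, 14, 18, 19, 20]
  right_arm = [3, 4, 5, 9, 10, 11, 15, 16, 17, 21, 22, 23]
  x_coords  = list(range(0, 24, 3))  # only the X coordinate of every location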

Classes (6)

The six classes are separate actions, with the following meaning:

1: I have command 2: All clear 3: Not clear 4: Spread wings 5: Fold wings 6: Lock wings

Downloading and unzipping a time series dataset

  dsname = 'NATOPS' # 'NATOPS', 'LSST', 'Wine', 'Epilepsy', 'HandMovementDirection'

  # url = 'http://www.timeseriesclassification.com/Downloads/NATOPS.zip'
  path = unzip_data(URLs_TS.NATOPS)
  path

  Path('/home/farid/.fastai/data/NATOPS')

Why do I have to concatenate train and test data?

Both the train and test datasets contain 180 samples each. We concatenate them into one big dataset and then split it into train and valid sets using our own split percentage (20%, 30%, or whatever number you see fit; see the splitter sketch below).

  fname_train = f'{dsname}_TRAIN.arff'
  fname_test = f'{dsname}_TEST.arff'
  fnames = [path/fname_train, path/fname_test]
  fnames

  [Path('/home/farid/.fastai/data/NATOPS/NATOPS_TRAIN.arff'),
   Path('/home/farid/.fastai/data/NATOPS/NATOPS_TEST.arff')]
  data = TSData.from_arff(fnames)
  print(data)

  TSData:
   Datasets names (concatenated): ['NATOPS_TRAIN', 'NATOPS_TEST']
   Filenames: [Path('/home/farid/.fastai/data/NATOPS/NATOPS_TRAIN.arff'), Path('/home/farid/.fastai/data/NATOPS/NATOPS_TEST.arff')]
   Data shape: (360, 24, 51)
   Targets shape: (360,)
   Nb Samples: 360
   Nb Channels: 24
   Sequence Length: 51
  items = data.get_items()

  idx = 1
  x1, y1 = data.x[idx], data.y[idx]
  y1

  '3.0'
  # You can select any channels to display by supplying a list of channel indices to the `chs` argument
  # LEFT ARM
  # show_timeseries(x1, title=y1, chs=[0,1,2,6,7,8,12,13,14,18,19,20])

  # RIGHT ARM
  # show_timeseries(x1, title=y1, chs=[3,4,5,9,10,11,15,16,17,21,22,23])

  # show_timeseries(x1, title=y1, chs=range(0,24,3)) # Only the x axis coordinates
  seed = 42
  splits = RandomSplitter(seed=seed)(range_of(items)) # by default, 80% of the items go to the train split and 20% to the valid split
  splits

  ((#288) [304,281,114,329,115,130,338,294,94,310...],
   (#72) [222,27,96,253,274,35,160,172,302,146...])
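
If you prefer a different split percentage, as discussed above, RandomSplitter also accepts a valid_pct argument; a minimal sketch holding out 30% instead of the default 20%:

  # Hold out 30% of the 360 samples for validation
  splits_30 = RandomSplitter(valid_pct=0.3, seed=seed)(range_of(items))
  len(splits_30[0]), len(splits_30[1])  # expected: (252, 108)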

Using Datasets class

Creating a Datasets object

  lbl_dict = dict([
      ('1.0', 'I have command'),
      ('2.0', 'All clear'),
      ('3.0', 'Not clear'),
      ('4.0', 'Spread wings'),
      ('5.0', 'Fold wings'),
      ('6.0', 'Lock wings')]
  )

  tfms = [[ItemGetter(0), ToTensorTS()], [ItemGetter(1), lbl_dict.get, Categorize()]]
  # Create a dataset
  ds = Datasets(items, tfms, splits=splits)

  ax = show_at(ds, 2, figsize=(1,1))

  Not clear


Creating a DataLoaders object

1st method: using the Datasets object

  bs = 128
  # Normalize at batch time
  tfm_norm = Normalize(scale_subtype='per_sample_per_channel', scale_range=(0, 1)) # per_sample, per_sample_per_channel
  # tfm_norm = Standardize(scale_subtype='per_sample')
  batch_tfms = [tfm_norm]
  dls1 = ds.dataloaders(bs=bs, val_bs=bs * 2, after_batch=batch_tfms, num_workers=0, device=default_device())

  dls1.show_batch(max_n=9, chs=range(0,12,3))
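
To check that the batch-time Normalize transform really produced the requested (0, 1) range, you can pull a single batch with the standard fastai2 one_batch call; a quick sketch:

  # Grab one training batch and inspect its geometry and value range
  xb, yb = dls1.one_batch()
  xb.shape, xb.min().item(), xb.max().item()  # expected: (128, 24, 51) with values in [0, 1]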


Using DataBlock class

2nd method: using DataBlock and DataBlock.get_items()

  tsdb = DataBlock(blocks=(TSBlock, CategoryBlock),
                   get_items=get_ts_items,
                   get_x=ItemGetter(0),
                   get_y=Pipeline([ItemGetter(1), lbl_dict.get]),
                   splitter=RandomSplitter(seed=seed),
                   batch_tfms=batch_tfms)

  tsdb.summary(fnames)
  Setting-up type transforms pipelines
  Collecting items from [Path('/home/farid/.fastai/data/NATOPS/NATOPS_TRAIN.arff'), Path('/home/farid/.fastai/data/NATOPS/NATOPS_TEST.arff')]
  Found 360 items
  2 datasets of sizes 288,72
  Setting up Pipeline: ItemGetter -> ToTensorTS
  Setting up Pipeline: ItemGetter -> dict.get -> Categorize

  Building one sample
    Pipeline: ItemGetter -> ToTensorTS
      starting from
        ([[-0.540579 -0.54101  -0.540603 ... -0.56305  -0.566314 -0.553712]
          [-1.539567 -1.540042 -1.538992 ... -1.532014 -1.534645 -1.536015]
          [-0.608539 -0.604609 -0.607679 ... -0.593769 -0.592854 -0.599014]
          ...
          [ 0.454542  0.449924  0.453195 ...  0.480281  0.45537   0.457275]
          [-1.411445 -1.363464 -1.390869 ... -1.468123 -1.368706 -1.386574]
          [-0.473406 -0.453322 -0.463813 ... -0.440582 -0.427211 -0.435581]], 2.0)
      applying ItemGetter gives
        [[-0.540579 -0.54101  -0.540603 ... -0.56305  -0.566314 -0.553712]
         [-1.539567 -1.540042 -1.538992 ... -1.532014 -1.534645 -1.536015]
         [-0.608539 -0.604609 -0.607679 ... -0.593769 -0.592854 -0.599014]
         ...
         [ 0.454542  0.449924  0.453195 ...  0.480281  0.45537   0.457275]
         [-1.411445 -1.363464 -1.390869 ... -1.468123 -1.368706 -1.386574]
         [-0.473406 -0.453322 -0.463813 ... -0.440582 -0.427211 -0.435581]]
      applying ToTensorTS gives
        TensorTS of size 24x51
    Pipeline: ItemGetter -> dict.get -> Categorize
      starting from
        ([[-0.540579 -0.54101  -0.540603 ... -0.56305  -0.566314 -0.553712]
          [-1.539567 -1.540042 -1.538992 ... -1.532014 -1.534645 -1.536015]
          [-0.608539 -0.604609 -0.607679 ... -0.593769 -0.592854 -0.599014]
          ...
          [ 0.454542  0.449924  0.453195 ...  0.480281  0.45537   0.457275]
          [-1.411445 -1.363464 -1.390869 ... -1.468123 -1.368706 -1.386574]
          [-0.473406 -0.453322 -0.463813 ... -0.440582 -0.427211 -0.435581]], 2.0)
      applying ItemGetter gives
        2.0
      applying dict.get gives
        All clear
      applying Categorize gives
        TensorCategory(0)

  Final sample: (TensorTS([[-0.5406, -0.5410, -0.5406, ..., -0.5630, -0.5663, -0.5537],
                           [-1.5396, -1.5400, -1.5390, ..., -1.5320, -1.5346, -1.5360],
                           [-0.6085, -0.6046, -0.6077, ..., -0.5938, -0.5929, -0.5990],
                           ...,
                           [ 0.4545,  0.4499,  0.4532, ...,  0.4803,  0.4554,  0.4573],
                           [-1.4114, -1.3635, -1.3909, ..., -1.4681, -1.3687, -1.3866],
                           [-0.4734, -0.4533, -0.4638, ..., -0.4406, -0.4272, -0.4356]]), TensorCategory(0))

  Setting up after_item: Pipeline: ToTensor
  Setting up before_batch: Pipeline:
  Setting up after_batch: Pipeline: Normalize

  Building one batch
  Applying item_tfms to the first sample:
    Pipeline: ToTensor
      starting from
        (TensorTS of size 24x51, TensorCategory(0))
      applying ToTensor gives
        (TensorTS of size 24x51, TensorCategory(0))
  Adding the next 3 samples
  No before_batch transform to apply
  Collating items in a batch
  Applying batch_tfms to the batch built
    Pipeline: Normalize
      starting from
        (TensorTS of size 4x24x51, TensorCategory([0, 3, 1, 3]))
      applying Normalize gives
        (TensorTS of size 4x24x51, TensorCategory([0, 3, 1, 3]))
  # num_workers=0 is needed on Microsoft Windows
  dls2 = tsdb.dataloaders(fnames, num_workers=0, device=default_device())

  dls2.show_batch(max_n=9, chs=range(0,12,3))


3rd method: using DataBlock and passing an items object to DataBlock.dataloaders()

  # getters = [ItemGetter(0), ItemGetter(1)]
  tsdb = DataBlock(blocks=(TSBlock, CategoryBlock),
                   get_x=ItemGetter(0),
                   get_y=Pipeline([ItemGetter(1), lbl_dict.get]),
                   splitter=RandomSplitter(seed=seed))

  dls3 = tsdb.dataloaders(data.get_items(), batch_tfms=batch_tfms, num_workers=0, device=default_device())

  dls3.show_batch(max_n=9, chs=range(0,12,3))


4th method: using the TSDataLoaders class and TSDataLoaders.from_files()

  dls4 = TSDataLoaders.from_files(fnames=fnames, path=path, batch_tfms=batch_tfms, lbl_dict=lbl_dict, num_workers=0, device=default_device())

  dls4.show_batch(max_n=9, chs=range(0,12,3))
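
All four methods should be interchangeable; a small sanity sketch (assuming the dls1 ... dls4 objects created above are still in scope) comparing the batch geometry each one produces:

  # Each factory should yield batches shaped (batch size, n_channels, seq_len);
  # only the batch size differs, since dls1 was built with bs=128
  for i, dls in enumerate((dls1, dls2, dls3, dls4), start=1):
      xb, yb = dls.one_batch()
      print(f'dls{i}: x {tuple(xb.shape)}  y {tuple(yb.shape)}')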


Training a Model

  # Number of channels (i.e. dimensions in ARFF and TS files jargon)
  c_in = get_n_channels(dls2.train) # data.n_channels
  # Number of classes
  c_out = dls2.c
  c_in, c_out

  (24, 6)

Creating a model

  model = inception_time(c_in, c_out).to(device=default_device())
  model
  Sequential(
    (0): SequentialEx(
      (layers): ModuleList(
        (0): InceptionModule(
          (convs): ModuleList(
            (0): Conv1d(24, 32, kernel_size=(39,), stride=(1,), padding=(19,), bias=False)
            (1): Conv1d(24, 32, kernel_size=(19,), stride=(1,), padding=(9,), bias=False)
            (2): Conv1d(24, 32, kernel_size=(9,), stride=(1,), padding=(4,), bias=False)
          )
          (maxpool_bottleneck): Sequential(
            (0): MaxPool1d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
            (1): Conv1d(24, 32, kernel_size=(1,), stride=(1,), bias=False)
          )
          (bn_relu): Sequential(
            (0): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU()
          )
        )
      )
    )
    (1): SequentialEx(
      (layers): ModuleList(
        (0): InceptionModule(
          (bottleneck): Conv1d(128, 32, kernel_size=(1,), stride=(1,))
          (convs): ModuleList(
            (0): Conv1d(32, 32, kernel_size=(39,), stride=(1,), padding=(19,), bias=False)
            (1): Conv1d(32, 32, kernel_size=(19,), stride=(1,), padding=(9,), bias=False)
            (2): Conv1d(32, 32, kernel_size=(9,), stride=(1,), padding=(4,), bias=False)
          )
          (maxpool_bottleneck): Sequential(
            (0): MaxPool1d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
            (1): Conv1d(128, 32, kernel_size=(1,), stride=(1,), bias=False)
          )
          (bn_relu): Sequential(
            (0): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU()
          )
        )
      )
    )
    (2): SequentialEx(
      (layers): ModuleList(
        (0): InceptionModule(
          (bottleneck): Conv1d(128, 32, kernel_size=(1,), stride=(1,))
          (convs): ModuleList(
            (0): Conv1d(32, 32, kernel_size=(39,), stride=(1,), padding=(19,), bias=False)
            (1): Conv1d(32, 32, kernel_size=(19,), stride=(1,), padding=(9,), bias=False)
            (2): Conv1d(32, 32, kernel_size=(9,), stride=(1,), padding=(4,), bias=False)
          )
          (maxpool_bottleneck): Sequential(
            (0): MaxPool1d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
            (1): Conv1d(128, 32, kernel_size=(1,), stride=(1,), bias=False)
          )
          (bn_relu): Sequential(
            (0): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU()
          )
        )
        (1): Shortcut(
          (act_fn): ReLU(inplace=True)
          (conv): Conv1d(128, 128, kernel_size=(1,), stride=(1,), bias=False)
          (bn): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
    )
    (3): SequentialEx(
      (layers): ModuleList(
        (0): InceptionModule(
          (bottleneck): Conv1d(128, 32, kernel_size=(1,), stride=(1,))
          (convs): ModuleList(
            (0): Conv1d(32, 32, kernel_size=(39,), stride=(1,), padding=(19,), bias=False)
            (1): Conv1d(32, 32, kernel_size=(19,), stride=(1,), padding=(9,), bias=False)
            (2): Conv1d(32, 32, kernel_size=(9,), stride=(1,), padding=(4,), bias=False)
          )
          (maxpool_bottleneck): Sequential(
            (0): MaxPool1d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
            (1): Conv1d(128, 32, kernel_size=(1,), stride=(1,), bias=False)
          )
          (bn_relu): Sequential(
            (0): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU()
          )
        )
      )
    )
    (4): SequentialEx(
      (layers): ModuleList(
        (0): InceptionModule(
          (bottleneck): Conv1d(128, 32, kernel_size=(1,), stride=(1,))
          (convs): ModuleList(
            (0): Conv1d(32, 32, kernel_size=(39,), stride=(1,), padding=(19,), bias=False)
            (1): Conv1d(32, 32, kernel_size=(19,), stride=(1,), padding=(9,), bias=False)
            (2): Conv1d(32, 32, kernel_size=(9,), stride=(1,), padding=(4,), bias=False)
          )
          (maxpool_bottleneck): Sequential(
            (0): MaxPool1d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
            (1): Conv1d(128, 32, kernel_size=(1,), stride=(1,), bias=False)
          )
          (bn_relu): Sequential(
            (0): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU()
          )
        )
      )
    )
    (5): SequentialEx(
      (layers): ModuleList(
        (0): InceptionModule(
          (bottleneck): Conv1d(128, 32, kernel_size=(1,), stride=(1,))
          (convs): ModuleList(
            (0): Conv1d(32, 32, kernel_size=(39,), stride=(1,), padding=(19,), bias=False)
            (1): Conv1d(32, 32, kernel_size=(19,), stride=(1,), padding=(9,), bias=False)
            (2): Conv1d(32, 32, kernel_size=(9,), stride=(1,), padding=(4,), bias=False)
          )
          (maxpool_bottleneck): Sequential(
            (0): MaxPool1d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=False)
            (1): Conv1d(128, 32, kernel_size=(1,), stride=(1,), bias=False)
          )
          (bn_relu): Sequential(
            (0): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU()
          )
        )
        (1): Shortcut(
          (act_fn): ReLU(inplace=True)
          (conv): Conv1d(128, 128, kernel_size=(1,), stride=(1,), bias=False)
          (bn): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
    )
    (6): AdaptiveConcatPool1d(
      (ap): AdaptiveAvgPool1d(output_size=1)
      (mp): AdaptiveMaxPool1d(output_size=1)
    )
    (7): Flatten(full=False)
    (8): Linear(in_features=256, out_features=6, bias=True)
  )
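
Before training, a quick forward pass on random data confirms the expected geometry; a minimal sketch (torch and default_device come in through the fastai2 star-imports):

  # A dummy batch: 2 series, c_in (24) channels, 51 time steps
  xb = torch.randn(2, c_in, 51).to(default_device())
  model(xb).shape  # expected: torch.Size([2, 6]), one score per class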

Creating a Learner object

  # opt_func = partial(Adam, lr=3e-3, wd=0.01)
  # Or use Ranger (Lookahead + RAdam)
  def opt_func(p, lr=slice(3e-3)): return Lookahead(RAdam(p, lr=lr, mom=0.95, wd=0.01))

  # Learner
  loss_func = LabelSmoothingCrossEntropy()
  learn = Learner(dls2, model, opt_func=opt_func, loss_func=loss_func, metrics=accuracy)
  print(learn.summary())
  Sequential (Input shape: ['64 x 24 x 51'])
  ================================================================
  Layer (type)         Output Shape         Param #    Trainable
  ================================================================
  Conv1d               64 x 32 x 51         29,952     True
  ________________________________________________________________
  Conv1d               64 x 32 x 51         14,592     True
  ________________________________________________________________
  Conv1d               64 x 32 x 51         6,912      True
  ________________________________________________________________
  MaxPool1d            64 x 24 x 51         0          False
  ________________________________________________________________
  Conv1d               64 x 32 x 51         768        True
  ________________________________________________________________
  BatchNorm1d          64 x 128 x 51        256        True
  ________________________________________________________________
  ReLU                 64 x 128 x 51        0          False
  ________________________________________________________________
  Conv1d               64 x 32 x 51         4,128      True
  ________________________________________________________________
  Conv1d               64 x 32 x 51         39,936     True
  ________________________________________________________________
  Conv1d               64 x 32 x 51         19,456     True
  ________________________________________________________________
  Conv1d               64 x 32 x 51         9,216      True
  ________________________________________________________________
  MaxPool1d            64 x 128 x 51        0          False
  ________________________________________________________________
  Conv1d               64 x 32 x 51         4,096      True
  ________________________________________________________________
  BatchNorm1d          64 x 128 x 51        256        True
  ________________________________________________________________
  ReLU                 64 x 128 x 51        0          False
  ________________________________________________________________
  Conv1d               64 x 32 x 51         4,128      True
  ________________________________________________________________
  Conv1d               64 x 32 x 51         39,936     True
  ________________________________________________________________
  Conv1d               64 x 32 x 51         19,456     True
  ________________________________________________________________
  Conv1d               64 x 32 x 51         9,216      True
  ________________________________________________________________
  MaxPool1d            64 x 128 x 51        0          False
  ________________________________________________________________
  Conv1d               64 x 32 x 51         4,096      True
  ________________________________________________________________
  BatchNorm1d          64 x 128 x 51        256        True
  ________________________________________________________________
  ReLU                 64 x 128 x 51        0          False
  ________________________________________________________________
  ReLU                 64 x 128 x 51        0          False
  ________________________________________________________________
  Conv1d               64 x 128 x 51        16,384     True
  ________________________________________________________________
  BatchNorm1d          64 x 128 x 51        256        True
  ________________________________________________________________
  Conv1d               64 x 32 x 51         4,128      True
  ________________________________________________________________
  Conv1d               64 x 32 x 51         39,936     True
  ________________________________________________________________
  Conv1d               64 x 32 x 51         19,456     True
  ________________________________________________________________
  Conv1d               64 x 32 x 51         9,216      True
  ________________________________________________________________
  MaxPool1d            64 x 128 x 51        0          False
  ________________________________________________________________
  Conv1d               64 x 32 x 51         4,096      True
  ________________________________________________________________
  BatchNorm1d          64 x 128 x 51        256        True
  ________________________________________________________________
  ReLU                 64 x 128 x 51        0          False
  ________________________________________________________________
  Conv1d               64 x 32 x 51         4,128      True
  ________________________________________________________________
  Conv1d               64 x 32 x 51         39,936     True
  ________________________________________________________________
  Conv1d               64 x 32 x 51         19,456     True
  ________________________________________________________________
  Conv1d               64 x 32 x 51         9,216      True
  ________________________________________________________________
  MaxPool1d            64 x 128 x 51        0          False
  ________________________________________________________________
  Conv1d               64 x 32 x 51         4,096      True
  ________________________________________________________________
  BatchNorm1d          64 x 128 x 51        256        True
  ________________________________________________________________
  ReLU                 64 x 128 x 51        0          False
  ________________________________________________________________
  Conv1d               64 x 32 x 51         4,128      True
  ________________________________________________________________
  Conv1d               64 x 32 x 51         39,936     True
  ________________________________________________________________
  Conv1d               64 x 32 x 51         19,456     True
  ________________________________________________________________
  Conv1d               64 x 32 x 51         9,216      True
  ________________________________________________________________
  MaxPool1d            64 x 128 x 51        0          False
  ________________________________________________________________
  Conv1d               64 x 32 x 51         4,096      True
  ________________________________________________________________
  BatchNorm1d          64 x 128 x 51        256        True
  ________________________________________________________________
  ReLU                 64 x 128 x 51        0          False
  ________________________________________________________________
  ReLU                 64 x 128 x 51        0          False
  ________________________________________________________________
  Conv1d               64 x 128 x 51        16,384     True
  ________________________________________________________________
  BatchNorm1d          64 x 128 x 51        256        True
  ________________________________________________________________
  AdaptiveAvgPool1d    64 x 128 x 1         0          False
  ________________________________________________________________
  AdaptiveMaxPool1d    64 x 128 x 1         0          False
  ________________________________________________________________
  Flatten              64 x 256             0          False
  ________________________________________________________________
  Linear               64 x 6               1,542      True
  ________________________________________________________________

  Total params: 472,742
  Total trainable params: 472,742
  Total non-trainable params: 0
  Optimizer used: <function opt_func at 0x7fb11c99f400>
  Loss function: LabelSmoothingCrossEntropy()
  Callbacks:
    - TrainEvalCallback
    - Recorder
    - ProgressCallback

LR find

  lr_min, lr_steep = learn.lr_find()
  lr_min, lr_steep

  (0.00831763744354248, 0.0006918309954926372)


Train

  learn.fit_one_cycle(25, lr_max=1e-3)

epoch train_loss valid_loss accuracy time
0 3.001498 1.795478 0.222222 00:01
1 2.909164 1.799713 0.222222 00:01
2 2.758937 1.805732 0.222222 00:01
3 2.552927 1.810526 0.222222 00:01
4 2.272452 1.817920 0.180556 00:02
5 1.995428 1.829209 0.111111 00:02
6 1.776214 1.749636 0.222222 00:01
7 1.597963 1.653429 0.347222 00:02
8 1.453098 1.463801 0.444444 00:02
9 1.337819 1.185544 0.666667 00:01
10 1.241440 0.982497 0.777778 00:02
11 1.160481 0.845832 0.819444 00:02
12 1.089517 0.751684 0.833333 00:02
13 1.026505 0.733695 0.833333 00:02
14 0.973174 0.693617 0.861111 00:02
15 0.926334 0.686428 0.805556 00:02
16 0.884449 0.684725 0.875000 00:02
17 0.848235 0.659447 0.833333 00:02
18 0.814864 0.654701 0.847222 00:02
19 0.784517 0.654098 0.875000 00:02
20 0.757529 0.648219 0.875000 00:02
21 0.732877 0.649778 0.861111 00:02
22 0.710833 0.644054 0.875000 00:02
23 0.691595 0.641094 0.875000 00:02
24 0.674118 0.639970 0.861111 00:02
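
Once training has converged, the usual fastai2 Learner helpers can be used to double-check and persist the result; a sketch (the checkpoint name is arbitrary):

  # Re-run the validation loop to confirm the final (valid_loss, accuracy) pair
  learn.validate()
  # Save the trained weights under an arbitrary checkpoint name
  learn.save('natops-inception-stage1')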

Plotting the loss function

  learn.recorder.plot_loss()


Showing the results

  learn.show_results(max_n=9, chs=range(0,12,3))


Showing the confusion matrix

  interp = ClassificationInterpretation.from_learner(learn)
  interp.plot_confusion_matrix(figsize=(10,8))

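ClassificationInterpretation also exposes most_confused, which lists the largest off-diagonal cells of the matrix above; a minimal sketch:

  # (actual, predicted, count) tuples for the most frequently confused classes
  interp.most_confused(min_val=2)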