Project author: hyk1996

Project description:
A simple PyTorch baseline for the pedestrian attribute recognition task, evaluated on the Market-1501 and DukeMTMC-reID datasets.
Language: Python
Project URL: git://github.com/hyk1996/Person-Attribute-Recognition-MarketDuke.git


Person-Attribute-Recognition-MarketDuke

A simple baseline implemented in PyTorch for the pedestrian attribute recognition task, evaluated on the Market-1501-attribute and DukeMTMC-reID-attribute datasets.

Dataset

You can get the Market-1501-attribute and DukeMTMC-reID-attribute annotations from here. You also need to download the Market-1501 and DukeMTMC-reID datasets.

Then create a folder named 'attribute' under your dataset path, and put the corresponding annotations into it.

For example,

├── dataset
│   ├── DukeMTMC-reID
│   │   ├── bounding_box_test
│   │   ├── bounding_box_train
│   │   ├── query
│   │   └── attribute
│   │       └── duke_attribute.mat

Model

Trained models are provided. You may download them from Google Drive or Baidu Drive (access code: jpks).

After downloading, move the checkpoints folder to your project's root directory.

Dependencies

  • Python 3.5
  • PyTorch >= 0.4.1
  • torchvision >= 0.2.1
  • matplotlib, sklearn, prettytable (optional)

Usage

python3 train.py --data-path ~/dataset --dataset [market | duke] --model resnet50 [--use-id]
python3 test.py --data-path ~/dataset --dataset [market | duke] --model resnet50 [--print-table]
python3 inference.py test_sample/test_market.jpg [--dataset market] [--model resnet50]

Result

We use a binary classification setting (each attribute is treated as an independent binary classification problem), with a classification threshold of 0.5.

Note that the precision, recall, and f1 score are shown as '-' for some ill-defined cases (where the denominator is zero).
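The evaluation protocol can be sketched as follows. This is illustrative code, not the repo's test.py; `None` stands for the '-' printed in the ill-defined cases.

```python
import numpy as np

def attribute_metrics(scores, labels, threshold=0.5):
    """Metrics for ONE attribute; scores and labels are 1-D arrays."""
    preds = (np.asarray(scores) >= threshold).astype(int)
    labels = np.asarray(labels)
    tp = int(np.sum((preds == 1) & (labels == 1)))
    fp = int(np.sum((preds == 1) & (labels == 0)))
    fn = int(np.sum((preds == 0) & (labels == 1)))
    accuracy = float(np.mean(preds == labels))
    # precision/recall are ill-defined when their denominator is zero;
    # those cases appear as '-' in the result tables.
    precision = tp / (tp + fp) if tp + fp else None
    recall = tp / (tp + fn) if tp + fn else None
    if precision is None or recall is None:
        f1 = None
    elif precision + recall == 0:
        f1 = 0.0
    else:
        f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```

Per-attribute results are averaged over all attributes to produce the summary numbers below the tables.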

Market-1501:

+------------+----------+-----------+--------+----------+
| attribute  | accuracy | precision | recall | f1 score |
+------------+----------+-----------+--------+----------+
| young      | 0.998    | 0.533     | 0.267  | 0.356    |
| teenager   | 0.892    | 0.927     | 0.951  | 0.939    |
| adult      | 0.895    | 0.582     | 0.450  | 0.508    |
| old        | 0.992    | 0.037     | 0.012  | 0.019    |
| backpack   | 0.883    | 0.828     | 0.672  | 0.742    |
| bag        | 0.790    | 0.608     | 0.378  | 0.467    |
| handbag    | 0.893    | 0.254     | 0.065  | 0.104    |
| clothes    | 0.946    | 0.956     | 0.984  | 0.970    |
| down       | 0.945    | 0.968     | 0.949  | 0.959    |
| up         | 0.936    | 0.938     | 0.998  | 0.967    |
| hair       | 0.877    | 0.871     | 0.773  | 0.819    |
| hat        | 0.982    | 0.812     | 0.505  | 0.623    |
| gender     | 0.919    | 0.947     | 0.864  | 0.903    |
| upblack    | 0.954    | 0.859     | 0.790  | 0.823    |
| upwhite    | 0.926    | 0.846     | 0.882  | 0.863    |
| upred      | 0.974    | 0.904     | 0.840  | 0.871    |
| uppurple   | 0.985    | 0.703     | 0.815  | 0.755    |
| upyellow   | 0.976    | 0.895     | 0.836  | 0.865    |
| upgray     | 0.909    | 0.852     | 0.391  | 0.537    |
| upblue     | 0.946    | 0.868     | 0.420  | 0.566    |
| upgreen    | 0.966    | 0.790     | 0.713  | 0.750    |
| downblack  | 0.879    | 0.815     | 0.889  | 0.850    |
| downwhite  | 0.956    | 0.608     | 0.550  | 0.578    |
| downpink   | 0.989    | 0.795     | 0.782  | 0.788    |
| downpurple | 1.000    | -         | -      | -        |
| downyellow | 0.999    | 0.000     | 0.000  | 0.000    |
| downgray   | 0.878    | 0.756     | 0.443  | 0.559    |
| downblue   | 0.861    | 0.762     | 0.446  | 0.563    |
| downgreen  | 0.978    | 0.766     | 0.295  | 0.426    |
| downbrown  | 0.958    | 0.754     | 0.590  | 0.662    |
+------------+----------+-----------+--------+----------+
Average accuracy: 0.9361
Average f1 score: 0.6492
DukeMTMC-reID:

+-----------+----------+-----------+--------+----------+
| attribute | accuracy | precision | recall | f1 score |
+-----------+----------+-----------+--------+----------+
| backpack  | 0.829    | 0.794     | 0.926  | 0.855    |
| bag       | 0.836    | 0.496     | 0.287  | 0.364    |
| handbag   | 0.935    | 0.469     | 0.073  | 0.126    |
| boots     | 0.905    | 0.784     | 0.791  | 0.787    |
| gender    | 0.858    | 0.806     | 0.828  | 0.817    |
| hat       | 0.898    | 0.883     | 0.680  | 0.768    |
| shoes     | 0.916    | 0.756     | 0.414  | 0.535    |
| top       | 0.893    | 0.590     | 0.381  | 0.463    |
| upblack   | 0.821    | 0.827     | 0.903  | 0.864    |
| upwhite   | 0.959    | 0.750     | 0.509  | 0.606    |
| upred     | 0.973    | 0.745     | 0.649  | 0.694    |
| uppurple  | 0.995    | 0.258     | 0.123  | 0.167    |
| upgray    | 0.900    | 0.611     | 0.333  | 0.432    |
| upblue    | 0.943    | 0.766     | 0.519  | 0.619    |
| upgreen   | 0.975    | 0.463     | 0.403  | 0.431    |
| upbrown   | 0.980    | 0.481     | 0.328  | 0.390    |
| downblack | 0.787    | 0.740     | 0.807  | 0.772    |
| downwhite | 0.945    | 0.771     | 0.395  | 0.522    |
| downred   | 0.991    | 0.739     | 0.645  | 0.689    |
| downgray  | 0.927    | 0.471     | 0.238  | 0.317    |
| downblue  | 0.807    | 0.741     | 0.669  | 0.703    |
| downgreen | 0.997    | -         | -      | -        |
| downbrown | 0.979    | 0.871     | 0.594  | 0.706    |
+-----------+----------+-----------+--------+----------+
Average accuracy: 0.9152
Average f1 score: 0.5739

Inference

>> python inference.py test_sample/test_market.jpg --dataset market
age: teenager
carrying backpack: no
carrying bag: no
carrying handbag: no
type of lower-body clothing: dress
length of lower-body clothing: short
sleeve length: short sleeve
hair length: long hair
wearing hat: no
gender: female
color of upper-body clothing: white
color of lower-body clothing: white

>> python inference.py test_sample/test_duke.jpg --dataset duke
carrying backpack: no
carrying bag: yes
carrying handbag: no
wearing boots: no
gender: male
wearing hat: no
color of shoes: dark
length of upper-body clothing: short upper body clothing
color of upper-body clothing: black
color of lower-body clothing: blue

Update

20-06-03: Added identity loss for joint optimization; adjusted the learning rate for better performance.

20-06-03: Updated test.py, resolved the issue of ill-defined metrics.

19-09-16: Updated inference.py, fixed the error caused by missing data-transform.

19-09-06: Updated test.py, added F1 score for evaluation.

19-09-03: Added inference.py, thanks @ViswanathaReddyGajjala.

19-08-23: Released trained models.

19-01-09: Fixed the error caused by an update of the Market and Duke attribute datasets.

FAQ

1. Why is the attribute order in import_Market1501Attribute.py different for the train and test data?

The label order in import_Market1501Attribute.py is consistent with the attribute order of the dataset.

You can load market_attribute.mat in MATLAB and print "market_attribute.train" or "market_attribute.test" to see these orders.
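If you prefer Python over MATLAB, the same order can be read with scipy. This is a sketch that assumes the .mat layout described above (a top-level struct with train/test sub-structs and an image_index field); the exact structure may differ.

```python
import scipy.io as sio

def attribute_order(mat_path, split="train", key="market_attribute"):
    """Return the attribute names in the order stored in the .mat file."""
    data = sio.loadmat(mat_path, squeeze_me=True, struct_as_record=False)
    struct = getattr(data[key], split)
    # field names other than the image index are the attributes, in file order
    return [f for f in struct._fieldnames if f != "image_index"]
```

Calling it with split="train" and split="test" shows the two orders the question refers to.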

2. Why do predictions on the Market-1501 dataset have 30 attributes instead of 27?

This repo treats attribute prediction as multiple binary classification problems, but some attributes have more than two categories.

For example, the attribute 'age' in Market-1501 has four categories: young (1), teenager (2), adult (3), and old (4). So it is split into four binary attributes: 'young', 'teenager', 'adult', and 'old'.

That is why the predictions for Market-1501 have 30 attributes.
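The splitting described above amounts to a one-hot expansion. The category names and codes below come from the dataset; the helper itself is illustrative.

```python
import numpy as np

def expand_attribute(values, num_categories):
    """Expand labels coded 1..K into K binary columns, one per category."""
    values = np.asarray(values)
    return np.stack([(values == k + 1).astype(int)
                     for k in range(num_categories)], axis=1)

# 'age' in Market-1501: young (1), teenager (2), adult (3), old (4)
binary = expand_attribute([2, 2, 3], 4)  # rows: images; cols: young..old
```

Applying this to 'age' turns the 27 annotated attributes into the 30 binary predictions in the result table.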

Reference

[1] Lin Y, Zheng L, Zheng Z, et al. Improving person re-identification by attribute and identity learning[J]. Pattern Recognition, 2019.