Project author: akshay-gupta123

Project description:
A Python toolbox to create adversarial examples that fool neural networks in PyTorch.
Primary language: Python
Project address: git://github.com/akshay-gupta123/moorkh.git
Created: 2021-06-04T12:19:27Z
Project community: https://github.com/akshay-gupta123/moorkh

License: MIT License



moorkh: Adversarial Attacks in PyTorch

moorkh is a PyTorch library for generating adversarial examples, with full support for batches of images in all attacks.

About the name

The name moorkh is a Hindi word meaning "fool" in English, which is what we make neural networks by generating adversarial examples. Of course, the same examples can also be used to make the networks more robust.

Usage

Installation

  • pip install moorkh, or
  • git clone https://github.com/akshay-gupta123/moorkh

Then wrap your model so that normalization happens inside it, and run an attack:

    import moorkh

    norm_layer = moorkh.Normalize(mean, std)
    model = nn.Sequential(
        norm_layer,
        model
    )
    model.eval()
    attack = moorkh.FGSM(model)
    adversarial_images = attack(images, labels)
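For intuition, FGSM (the Fast Gradient Sign Method used above) is a one-step attack: it nudges every input component by a budget eps in the direction of the sign of the loss gradient. The sketch below is not moorkh's implementation; it is a minimal, self-contained NumPy illustration on a toy logistic-regression "model" where the gradient can be written analytically (all names here, `fgsm`, `w`, `b`, `eps`, are made up for the example):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM for a logistic model p = sigmoid(w @ x + b).

    For binary cross-entropy loss, dL/dz = p - y, so the gradient
    with respect to the input is dL/dx = (p - y) * w.
    Returns x_adv = x + eps * sign(dL/dx).
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy example: a correctly classified point gets pushed across the boundary.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])               # w @ x + b = 0.8 > 0 -> class 1
x_adv = fgsm(x, y=1, w=w, b=b, eps=0.5)

print(sigmoid(w @ x + b) > 0.5)        # original input: predicted class 1
print(sigmoid(w @ x_adv + b) > 0.5)    # adversarial input: prediction flips
```

The same idea scales to deep networks, where moorkh's attacks obtain the input gradient via backpropagation instead of a closed-form expression.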

Implemented Attacks

To-Do’s

  • Adding more attacks
  • Writing documentation
  • Adding demo notebooks
  • Adding summaries of the implemented papers (for my own understanding)

Contribution

This library was developed as part of my learning; if you find any bug, feel free to create a PR. All kinds of contributions are always welcome!

References