A Python toolbox to create adversarial examples that fool neural networks in PyTorch.
moorkh is a PyTorch library for generating adversarial examples, with full support for batches of images in all attacks.
The name moorkh is a Hindi word meaning "fool" in English, which is what we make of neural networks by generating adversarial examples. Of course, the same examples can also be used to make networks more robust.
pip install moorkh
or
git clone https://github.com/akshay-gupta123/moorkh
import torch.nn as nn
import moorkh

# Wrap the normalization step into the model itself so that attacks
# operate on raw images in the [0, 1] range.
norm_layer = moorkh.Normalize(mean, std)
model = nn.Sequential(
    norm_layer,
    model
)
model.eval()

# Build an attack and generate adversarial examples for a batch.
attack = moorkh.FGSM(model)
adversarial_images = attack(images, labels)
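To verify that the generated batch actually fools the model, you can compare predictions before and after the attack. This is a minimal sketch that assumes the model, images, labels, and adversarial_images from the snippet above:

import torch

# Compare clean and adversarial predictions on the same batch.
with torch.no_grad():
    clean_pred = model(images).argmax(dim=1)
    adv_pred = model(adversarial_images).argmax(dim=1)

clean_acc = (clean_pred == labels).float().mean().item()
adv_acc = (adv_pred == labels).float().mean().item()
print(f"accuracy: clean {clean_acc:.3f} -> adversarial {adv_acc:.3f}")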
EXPLAINING AND HARNESSING ADVERSARIAL EXAMPLES: FGSM (sketched after this list)
ADVERSARIAL EXAMPLES IN THE PHYSICAL WORLD: IFGSM
ON THE LIMITATION OF CONVOLUTIONAL NEURAL NETWORKS IN RECOGNIZING NEGATIVE IMAGES: Semantic
ADDING NOISE: Noise
TOWARDS DEEP LEARNING MODELS RESISTANT TO ADVERSARIAL ATTACKS: PGD, PGDL2
ENSEMBLE ADVERSARIAL TRAINING: ATTACKS AND DEFENSES: RFGSM
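For intuition, here is what the FGSM attack from the first paper computes, as a plain-PyTorch sketch. The eps default of 8/255 and the [0, 1] clipping range are common conventions assumed here, not necessarily moorkh's defaults:

import torch
import torch.nn.functional as F

def fgsm(model, images, labels, eps=8 / 255):
    # x_adv = x + eps * sign(grad_x loss(model(x), y)), clipped to [0, 1].
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    # Step in the direction that maximally increases the loss.
    adv = images + eps * grad.sign()
    return adv.clamp(0, 1).detach()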
This library was developed as a part of my learning; if you find any bugs, feel free to create a PR. All kinds of contributions are always welcome!