Project author: LafLaurine

Project description :
Deepfake detection algorithms
Primary language: JavaScript
Repository: git://github.com/LafLaurine/imac2-projetTUT.git
Created: 2019-12-16T21:19:30Z
Project community: https://github.com/LafLaurine/imac2-projetTUT

License: MIT License


Deepfake detection tools integrated into the WeVerify InVID plugin

[InVIDPlugin image]

About The Project

The purpose of this project is to detect deepfake videos using several existing methods (MesoNet, CapsuleForensics) and to integrate them into the WeVerify InVID plugin project.

This repository includes sources that can be run with Docker tools to train neural networks and/or use them to detect deepfakes.

It can be used as a standalone or with the WeVerify InVID plugin.

Built With

  • Python - Python is a programming language that lets you work quickly and integrate systems more effectively
  • Docker - Docker is open-source software for running applications in software containers
  • docker-compose - A tool for defining and running multi-container Docker applications
  • Flask - Open-source web development framework in Python
  • Maven - Apache Maven is a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project’s build, reporting and documentation from a central piece of information
  • SpringBoot - Spring Boot makes it easy to create stand-alone, production-grade Spring based Applications that you can “just run”
  • React - A JavaScript library for building user interfaces

    Getting Started

    To get a local copy running, follow these steps.

    Prerequisites

    This project has been tested on Ubuntu 18.04 and 19.04. It should also run on Windows.

    Docker

    Make sure Docker is installed in your environment :

    1. docker --version

    If not, install it by following these instructions :

  • For Ubuntu 18.04 : How to install Docker on Ubuntu 18.04
  • For Windows : Get started with Docker for Windows
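
    If you prefer the package repositories to the guides above, a minimal sketch for Ubuntu (this assumes the docker.io package shipped by Ubuntu is recent enough for this project) :

    1. sudo apt-get update
    2. sudo apt-get install docker.io
    3. sudo systemctl enable --now docker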

    Docker-compose

  • For Ubuntu :
    You will also need to install the docker-compose tool.

    1. sudo curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
    2. sudo chmod +x /usr/local/bin/docker-compose
    3. docker-compose --version

    You can get the latest version of docker-compose from the documentation : Install docker-compose

  • For Windows : Install Docker compose

    Unfortunately, because of the current situation, the Docker images could not be updated

    DockerHub

    JDK 1.8 or above

  • For Ubuntu : https://tecadmin.net/install-oracle-java-8-ubuntu-via-ppa/
  • For Windows : https://www.oracle.com/java/technologies/javase-jdk8-downloads.html
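
    Alternatively, on Ubuntu you can install OpenJDK instead of Oracle Java (an assumption on our side : the project only states JDK 1.8 or above, so OpenJDK 8 should qualify) :

    1. sudo apt-get install openjdk-8-jdk
    2. java -version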

    Maven 3.2 or above

  • For Ubuntu : https://linuxize.com/post/how-to-install-apache-maven-on-ubuntu-18-04/
  • For Windows : https://maven.apache.org/guides/getting-started/windows-prerequisites.html
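
    On Ubuntu, the distribution package is usually enough (assuming it meets the Maven 3.2 minimum stated above) :

    1. sudo apt-get install maven
    2. mvn -version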

    Plugin

    NodeJS

    You need to install Node.js and run npm install in this folder after your first download to get the dependencies.
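
    For example (assuming Node.js and npm are already installed) :

    1. node --version
    2. npm install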

    Services

    Quick overview of all implemented services. Each service works on its own. A quick reachability check is shown after this list.

  • extraction_dir : extract faces from multiple videos that are in a directory / Works on : localhost:8080/extraction/dir
  • extraction_video : extract faces from a video / Works on : localhost:8080/extraction/video
  • mesonet_test : check that the MesoNet neural network is well trained / Works on : localhost:8080/mesonet_test
  • mesonet_analyse : once faces are extracted, check whether a video is a deepfake or not / Works on : localhost:8080/mesonet_analyse
  • mesonet_training : train the MesoNet neural network / Works on : localhost:8080/mesonet_training
  • capsule_forensics_test : check that the CapsuleForensics neural network is well trained / Works on : localhost:8080/capsule_forensics_test
  • capsule_forensics_analyse : once faces are extracted, check whether a video is a deepfake or not / Works on : localhost:8080/capsule_forensics_analyse
  • capsule_forensics_training : train the CapsuleForensics neural network / Works on : localhost:8080/capsule_forensics_training
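
    Once a service is up (see Installation below), you can check that its endpoint answers. This is only a reachability sketch : the exact request each service expects is described in the usage sections below.

    1. curl http://localhost:8080/extraction/video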

    If you go to fakedetection/src/main/docker/extraction you’ll find a file named display_faces_capture.py ; it doesn’t have a service because its only role is demonstration. You will need a webcam in order to use this application.
    You can run it with either of the following :

    1. python3 display_faces_capture.py --method DNN
    2. python3 display_faces_capture.py --method DNN_TRACKING

    This application lets you see how faces are extracted : while it is running, you can press r and l on your keyboard to show the landmarks and the rectangle around your face, which are used to extract faces.

    Installation

    Standalone

  1. Clone the repository
    1. git clone https://github.com/laflaurine/imac2-projetTUT.git
    2. cd imac2-projetTUT
  2. Move into the fakedetection/ directory where you can find the Spring Boot project
    1. cd fakedetection
  3. Run the project by using the following command:
    1. sudo mvn compile
    2. sudo mvn package
    3. sudo mvn install
    4. java -jar target/fakedetection-0.0.1-SNAPSHOT.jar
  4. Move into the fakedetection/src/main/docker directory where you can find the docker-compose.yml file
    1. cd fakedetection/src/main/docker
  5. You can either run all services at once :

    1. sudo docker-compose up

    Or run them one by one (if you want to run a service with your own options rather than the defaults) :

    1. sudo docker-compose up name_of_the_service
  6. Each service runs on its own endpoint : the console shows which port it is running on, or you can copy the endpoint from the list above. Example : go to http://localhost:8080/extraction/dir

    For detailed information on how to run each service, please refer to the example usages below.

    Plugin

  1. Clone the repository

    1. git clone https://github.com/laflaurine/imac2-projetTUT.git
    2. cd imac2-projetTUT
  2. You will need a .env file containing :

    1. REACT_APP_ELK_URL=<ELK-URL>/twinttweets/_search
    2. REACT_APP_TWINT_WRAPPER_URL=<TWINT-WRAPPER-URL>
    3. REACT_APP_FACEBOOK_APP_ID=<REACT_ID>
    4. REACT_APP_TRANSLATION_GITHUB=https://raw.githubusercontent.com/AFP-Medialab/InVID-Translations/react/
    5. REACT_APP_KEYFRAME_TOKEN=<yourKeyframeToken>
    6. REACT_APP_MY_WEB_HOOK_URL=<yourSlackAppUrlHook>
    7. REACT_APP_GOOGLE_ANALYTICS_KEY=<yourGoogleAnaliticsToken>
    8. REACT_APP_MAP_TOKEN=<MAP_TOKEN>
    9. REACT_APP_AUTH_BASE_URL=<TWINT-WRAPPER-URL>
  3. Run npm run build to build the app for production into the build folder.

  4. Run npm start to run the app in development mode. It will run on port 3000.
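
    For reference, the two commands above, run in the same folder where you ran npm install :

    1. npm run build
    2. npm start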

  5. Use the extension
    For Chrome :

  • In the Chrome menu, go to More tools then click Extensions
  • Activate the Developer mode toggle
  • Then click the Load Unpacked button
  • Select the dev or build folder you generated earlier.

    For Firefox :

  • In the Firefox menu, click on Add-ons
  • Then click on the gear button ⚙⌄
  • Then click on Debug Add-ons
  • Then click on Load Temporary Add-on...
  • Select the manifest.json in the dev or build folder you generated earlier.

    Detailed plugin functionality can be found on the WeVerify - InVID GitHub

    Unfortunately, you still have to follow the Usage of the services section, as the back end does not fully work on its own.

    Usage of the services

    Arguments

    The arguments used by each service are declared by default in the fakedetection/src/main/docker/.env file.

    You will probably want to set your own paths when running the services, to better fit your working environment. You can override any of these variables by declaring them on the command line (as shown below).
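
    For example, to override the detection method and the frame step for a single run (the values here are only illustrations : any variable from the list below can be set the same way) :

    1. sudo method_detection=DNN_TRACKING step_frame=10 docker-compose up extraction_video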

    Default arguments:

    1. input_path=input/video.mp4 # Path to the video you want to analyze
    2. video_download=False # Boolean value. Must be True if you want to download a video
    3. video_url=https://www.youtube.com/watch?v=gLoI9hAX9dw # URL of the video that you want to download
    4. name_video_downloaded=video # Name to give to the downloaded video
    5. input_path_dir=./input/ # Path to the folder in which needed normalized images are stored
    6. output_path=output # Path to the folder in which needed normalized images will be stored
    7. method_detection=DNN # Can either be DNN or DNN_TRACKING
    8. start_frame=0 # Frame at which to start extraction
    9. step_frame=25 # Extract faces every ... frames
    10. end_frame=200 # Frame at which to end extraction
    11. max_frame=50 # Maximum number of frames to extract
    12. are_warped=True # Faces will be aligned on the basis of eyes and mouth.
    13. are_culled=False # Faces will be culled according to out-of-bounds landmarks.
    14. are_saved_landmarks=False # Facial landmarks will be saved along with the corresponding face.
    15. is_saved_rectangle=False # IF NOT WARPED: Rectangle from face detection will be drawn in output image.
    16. mesonet_classifier=MESO4_DF # Can be Meso4_DF or Meso4_F2F or MesoInception_DF or MesoInception_F2F
    17. number_epochs=3 # Number of epochs
    18. batch_size=8 # Number of images in each batch
    19. path_to_dataset=dataAnalyse/out # Path to the dataset to analyse
    20. train_dataset=data # Path to the training dataset
    21. capsule_forensics_classifier=BINARY_DF # Can be BINARY_DF or BINARY_F2F or BINARY_FACESWAP or MULTICLASS
    22. step_save_checkpoint=5 # Step at which to save temporary weights
    23. epoch_resume=1 # Which epoch to resume (starting over if 0)
    24. version_weights=2 # Version of the weights to load (has to be > 0)

    Extraction video

    This service is a useful tool allowing you to extract the normalized images that are needed to launch any other service.
    You can either extract faces from a video that you already have on your computer, or download one from YouTube or Facebook thanks to youtube_dl.

    Default arguments:

    1. input_path=input/video.mp4
    2. method_detection=DNN
    3. are_saved_landmarks=True
    4. video_download=False
    5. output_path=output
    6. name_video_downloaded=video
    7. are_warped=True
    8. are_culled=False
    9. is_saved_rectangle=False
    10. start_frame=0
    11. step_frame=25
    12. end_frame=200
    13. max_frame=50

    Extract a video that you own

  1. Make sure to put the video you want to analyze in a local folder (it’s better if it’s at the project root).

  2. Run the extraction_video service with the following command.
    The input video must be named “video.mp4” :

    1. sudo video_download=False input_path=your_path_to_the_video/video.mp4 output_path=your_path_to_your_output_folder docker-compose up extraction_video

    Extract a video from YouTube

  1. Make sure to copy the URL of the video that you want. The video name must be “video.mp4”

  2. Run the extraction_video service with the following command :

    1. sudo video_download=True video_url=your_video_url name_video_downloaded=video.mp4 output_path=your_path_to_your_output_folder docker-compose up extraction_video

    You can add other arguments following the same model as above.
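
    For example, combining a YouTube download with a custom frame range (the frame values here are only illustrations) :

    1. sudo video_download=True video_url=https://www.youtube.com/watch?v=gLoI9hAX9dw name_video_downloaded=video.mp4 start_frame=0 end_frame=100 step_frame=10 docker-compose up extraction_video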

    Extraction directory

    This service is a useful tool allowing you to extract normalized face images from all the videos stored in a directory.

    Default arguments:

    1. input_path_dir=input_dir
    2. method_detection=DNN
    3. are_saved_landmarks=True
    4. output_path=output
    5. are_warped=True
    6. are_culled=False
    7. is_saved_rectangle=False
    8. start_frame=0
    9. step_frame=25
    10. end_frame=200
    11. max_frame=50
  1. Make sure that you have a local folder containing the videos from which you want to extract faces. Each video must be in a separate folder. Example : videos/video1; videos/video2

  2. Run the extraction_dir service with the following command :

    1. sudo input_path_dir=your_path_to_the_directory output_path=your_path_to_your_output_folder docker-compose up extraction_dir

    You can add other arguments following the same model as above.

    MesoNet training

    This service is a useful tool allowing you to train MesoNet models

    Default arguments :

    1. mesonet_classifier=MESO4_DF
    2. train_dataset=data
    3. batch_size=8
    4. number_epochs=3
    5. step_save_checkpoint=5

    Once you have extracted the needed inputs, you can train the MesoNet method on them.

  1. Make sure your images in the output folder are saved into two subfolders : df (for the images extracted from deepfake videos) and real (for the images extracted from real videos).
    Example : train_dataset/df/your_deepfake_images.PNG and train_dataset/real/your_real_images.PNG

  2. Run the mesonet_training service with the following command :

    1. sudo train_dataset=your_path docker-compose up mesonet_training

    You can add other arguments following the same model as above.
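
    For example, training for more epochs with a larger batch (the values here are only illustrations) :

    1. sudo train_dataset=your_path number_epochs=10 batch_size=16 docker-compose up mesonet_training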

    MesoNet test

    This service is a useful tool allowing you to test MesoNet models

    Default arguments :

    1. train_dataset=data
    2. name_classifier=MESO4_DF
    3. batch_size=8
    4. number_epochs=3
  1. Make sure your images in the output folder are saved into two subfolders : train and validation.
    Example : train_dataset/train/your_deepfake_images.PNG and train_dataset/validation/your_real_images.PNG

  2. Run the mesonet_test service with the following command :

    1. sudo train_dataset=your_path docker-compose up mesonet_test

    You can add other arguments following the same model as above.

    MesoNet analyse

    This service is a useful tool allowing you to analyse whether a video is a deepfake or not with the MesoNet method

    Default arguments :

    1. path_to_dataset=dataAnalyse/out
    2. name_classifier=MESO4_DF
    3. batch_size=8
  1. Make sure your images in the output folder are saved into subfolders.
    Example : path_to_dataset/subfolder/images.PNG

  2. Run the mesonet_analyse service with the following command :

    1. sudo path_to_dataset=your_path docker-compose up mesonet_analyse

    You can add other arguments following the same model as above.

    CapsuleForensics training

    This service is a useful tool allowing you to train CapsuleForensics models

    Default arguments :

    1. capsule_forensics_classifier=BINARY_DF
    2. train_dataset=data
    3. batch_size=8
    4. number_epochs=3
    5. epoch_resume=1
    6. step_save_checkpoint=5
  1. Make sure your images in the output folder are saved into subfolders.
    Example : train_dataset/subfolder/your_images.PNG

  2. Run the capsule_forensics_training service with the following command :

    1. sudo train_dataset=your_path docker-compose up capsule_forensics_training

    You can add other arguments following the same model as above.
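
    For example, resuming training from a saved checkpoint (the epoch value here is only an illustration) :

    1. sudo train_dataset=your_path epoch_resume=2 docker-compose up capsule_forensics_training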

    CapsuleForensics test

    This service is a useful tool allowing you to test CapsuleForensics models

    Default arguments :

    1. capsule_forensics_classifier=BINARY_DF
    2. train_dataset=data
    3. batch_size=8
    4. number_epochs=3
    5. version_weights=2
  1. Make sure your images in the output folder are saved into subfolders.
    Example : train_dataset/subfolder/your_images.PNG

  2. Run the capsule_forensics_test service with the following command :

    1. sudo train_dataset=your_path docker-compose up capsule_forensics_test

    You can add other arguments following the same model as above.

    CapsuleForensics analyse

    This service is a useful tool allowing you to analyse whether a video is a deepfake or not with the CapsuleForensics method

    Default arguments :

    1. capsule_forensics_classifier=BINARY_DF
    2. path_to_dataset=dataAnalyse/out
    3. batch_size=8
    4. version_weights=2
  1. Make sure your images in the output folder are saved into subfolders.
    Example : path_to_dataset/subfolder/your_images.PNG

  2. Run the capsule_forensics_analyse service with the following command :

    1. sudo path_to_dataset=your_path docker-compose up capsule_forensics_analyse

    You can add other arguments following the same model as above.

    Authors