[IN PROGRESS] Multimodal feature extraction modules to make research easier and more reproducible.
This repo contains a collection of feature extractors for multimodal emotion recognition.
Clone this repository:
$ git clone --recurse-submodules https://github.com/gangeshwark/multimodal_feature_extractors.git
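If the repository was cloned without --recurse-submodules, the submodules can still be fetched afterwards with:
$ git submodule update --init --recursive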
Currently, these modalities are covered:
Video: this feature extractor uses OpenFace to extract and align faces and the VGG-Face model to extract facial features from every frame.
Module: use from src.video.models import OpenFace_VGG in your data processing code.
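A minimal usage sketch is shown below. The import path comes from this repo, but the constructor arguments and the extract_features method name are hypothetical placeholders, not the module's confirmed API; check src/video/models.py for the actual interface.

from src.video.models import OpenFace_VGG

# Hypothetical usage: instantiate the extractor and run it on a video clip.
# extract_features and its argument are placeholders for the real interface.
extractor = OpenFace_VGG()
features = extractor.extract_features("path/to/clip.mp4")
print(features.shape)  # expected: one VGG-Face feature vector per frame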