An introduction to deepfakes, presenting state-of-the-art implementations in the area and examples of what the technology is capable of.
Deep learning algorithms are used in many fields nowadays, such as self-driving cars, healthcare, and voice-activated assistants. Programs built on these algorithms are becoming easier for inexperienced users to operate, and their capabilities keep growing. While this benefits the sectors above, deep learning also powers applications that pose threats to politics, privacy, and security. One example is the so-called “deep fakes”. This research aims to introduce this type of algorithm, showing state-of-the-art implementations in the area and examples of what the technology is capable of.
Below is a summary of the research. The full paper can be found at Deepfakes: Introduction and Latest Implementation, or at Deepfakes: Introduction and Applications in Digital Health for a report focused on digital health. There is also a video summarizing the research, in case you prefer the visual to the textual 😄
In addition, the figure below shows some examples from the First Order Motion Model for Image Animation demo [13]. The demo was run in Google Colab, modifying the original notebook to generate deepfakes with a picture of myself as the source image. It was tested on different driving videos, two of which are shown in the figure.
The current state of deepfakes is complicated. Every time a new deepfake technology is released, researchers try to develop a method to detect the images it generates; attackers then try to bypass that detection and devise a new generation approach, and the cycle continues in a virus/anti-virus dynamic. Moreover, even a detector with high accuracy, say 99%, still leaves 1% of deepfakes undetected, and at the scale of platforms such as Instagram or Twitter that remainder can compromise many users.
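To put that residual error rate in perspective, here is a quick back-of-the-envelope calculation; the upload volume and deepfake rate are illustrative assumptions, not measured figures:

```python
# Illustrative: even a 99%-accurate detector leaves a meaningful absolute
# number of deepfakes undetected at social-media scale.
daily_uploads = 1_000_000   # hypothetical videos uploaded per day
deepfake_rate = 0.01        # hypothetical fraction that are deepfakes
detector_recall = 0.99      # detector catches 99% of deepfakes

deepfakes = daily_uploads * deepfake_rate          # 10,000 deepfakes/day
undetected = deepfakes * (1 - detector_recall)     # the 1% that slip through
print(int(undetected))  # 100 undetected deepfakes per day
```

Even under these modest assumptions, a hundred fakes per day reach viewers unchecked, which is why accuracy alone does not settle the problem.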
Another idea for tackling this issue is to use programs that automatically watermark and identify images taken on cameras, or to apply blockchain technology to verify content from trusted sources [15].
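As a rough illustration of the camera-side signing idea (a minimal sketch, not any specific product's scheme; the shared key, function names, and workflow are all assumptions), a trusted device could attach an authentication tag to each capture, and a platform could later check that the pixels were not altered:

```python
import hashlib
import hmac

# Hypothetical secret provisioned inside a trusted camera.
CAMERA_KEY = b"device-secret-key"

def sign_capture(image_bytes: bytes) -> str:
    """Camera side: compute an HMAC tag over the raw pixels at capture time."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, tag: str) -> bool:
    """Platform side: recompute the tag; any edit to the pixels breaks it."""
    expected = hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"raw pixel data from the sensor"
tag = sign_capture(original)
print(verify_capture(original, tag))            # True: untouched capture
print(verify_capture(original + b"edit", tag))  # False: tampered content
```

A real deployment would use public-key signatures rather than a shared secret (so platforms never hold the camera's key), which is closer to what provenance proposals in this space describe.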
However, none of these approaches is likely to “solve” the issue: it is an endless competition that can have very critical consequences for privacy, safety, and politics. People need to be aware of these technologies and verify every source they read or watch. Besides, governments should take immediate measures against those who generate or share deepfakes for unethical purposes.
[1] Deep Convolutional Generative Adversarial Network: TensorFlow Core.
[2] Agarwal, S., Farid, H., Gu, Y., He, M., Nagano, K., and Li, H. Protecting world
leaders against deep fakes. In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition Workshops (2019), pp. 38–45.
[3] Badr, W. Auto-encoder: What is it? And what is it used for? (Part 1), Jul 2019.
[4] Brown, N. I. Congress wants to solve deepfakes by 2020. That should worry us., Jul 2019.
[5] Damiani, J. A voice deepfake was used to scam a CEO out of $243,000, Sep 2019.
[6] Kelion, L. Reddit bans deepfake porn videos, Feb 2018.
[7] Korshunov, P., and Marcel, S. Vulnerability assessment and detection of deepfake videos.
In The 12th IAPR International Conference on Biometrics (ICB) (2019), pp. 1–6.
[8] Lathuilière, S., Tulyakov, S., Ricci, E., Sebe, N., et al. Motion-supervised co-part
segmentation. arXiv preprint arXiv:2004.03234 (2020).
[9] Li, Y., Chang, M.-C., and Lyu, S. In ictu oculi: Exposing AI created fake videos by
detecting eye blinking. In 2018 IEEE International Workshop on Information Forensics and
Security (WIFS) (2018), IEEE, pp. 1–7.
[10] Manke, K. Researchers use facial quirks to unmask ‘deepfakes’, Jun 2019.
[11] Marr, B. The best (and scariest) examples of AI-enabled deepfakes, Jul 2019.
[12] Nguyen, T. T., Nguyen, C. M., Nguyen, D. T., Nguyen, D. T., and Nahavandi, S.
Deep learning for deepfakes creation and detection. arXiv preprint arXiv:1909.11573 (2019).
[13] Siarohin, A., Lathuilière, S., Tulyakov, S., Ricci, E., and Sebe, N. First order
motion model for image animation. In Advances in Neural Information Processing Systems
32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. Fox, and R. Garnett, Eds.
Curran Associates, Inc., 2019, pp. 7137–7147.
[14] Suwajanakorn, S., Seitz, S. M., and Kemelmacher-Shlizerman, I. Synthesizing
Obama: Learning lip sync from audio. ACM Transactions on Graphics (TOG) 36, 4 (2017),
1–13.
[15] Vincent, J. Deepfake detection algorithms will never be enough, Jun 2019.
[16] Wood, C. A deepfake artist’s attempt to make Robert De Niro look younger in ‘The Irishman’
is being hailed as superior to Netflix’s CGI, Jan 2020.
[17] Xuan, X., Peng, B., Wang, W., and Dong, J. On the generalization of gan image
forensics. In Chinese Conference on Biometric Recognition (2019), Springer, pp. 134–141.
[18] Zakharov, E., Shysheya, A., Burkov, E., and Lempitsky, V. Few-shot adversarial
learning of realistic neural talking head models. In Proceedings of the IEEE International
Conference on Computer Vision (2019), pp. 9459–9468.