photo vs movie
Firstly, this tech seems to be limited to photos, not movies. Fully realistic AI-generated movies would presumably be far more complex, although given the rate of advancement they surely can't be far behind.
Secondly, as far as nefarious purposes are concerned, simply being able to generate a photo is limited in scope. Based on what I read of the process, the AI is generating pictures that can fool another AI (and only incidentally, humans) into classifying a fake photo as real. As with all of these "AIs", it's not really an AI; it's a pattern matcher/generator. It doesn't 'know' anything about what it's generating. It's not like someone can tell it to generate a face of a specific gender, age, race, or body type, or specify a particular background or facial expression (short of retraining on only that type of image, which rather defeats the purpose of having an AI do it for you). Any fraudster who needs a photo of a random face, or even a very specific type of face, can already pick from thousands on the web.
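To make the "fooling another AI" point concrete, here is a toy sketch of that adversarial setup, boiled down to one dimension. Everything here is illustrative (the numbers, the names, the update rules are mine, not from any real system): "real photos" are just numbers drawn near 5.0, the generator is a single parameter `mu`, and the discriminator is a crude running-mean threshold classifier. The generator nudges `mu` whenever the discriminator catches a fake, until fakes are no longer distinguishable:

```python
import random

# Toy 1-D "adversarial" loop, purely illustrative.
# Real data: N(5, 1).  Generator: N(mu, 1).  Discriminator: a
# running-mean threshold that calls a sample "fake" if it sits
# closer to the fake-stream mean than to the real-stream mean.
random.seed(0)

REAL_MEAN = 5.0   # stand-in for the distribution of real photos
mu = 0.0          # generator parameter, starts far from real
real_est = 0.0    # discriminator's estimate of the real mean
fake_est = 0.0    # discriminator's estimate of the fake mean

for _ in range(5000):
    r = random.gauss(REAL_MEAN, 1.0)   # a real sample
    f = random.gauss(mu, 1.0)          # a generated (fake) sample

    # Discriminator "training": track running means of both streams.
    real_est += 0.05 * (r - real_est)
    fake_est += 0.05 * (f - fake_est)

    # Generator "training": whenever the discriminator correctly
    # flags the fake, nudge mu toward the real side.
    called_fake = abs(f - fake_est) < abs(f - real_est)
    if called_fake:
        mu += 0.01 if real_est > mu else -0.01

print(f"generator mean after training: {mu:.2f}")  # settles near 5.0
```

Note what the generator learns: only "produce samples the classifier can't reject", nothing about *what* the samples depict. That is exactly why you can't simply ask it for a specific face; steering the output requires changing the training data or the architecture, not issuing a request.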
What is more concerning is how indistinguishable photoshopped and deep-faked photos and movies have become.