I'd imagine that to mitigate the inherent issues with AI, they may be interpolating. The result may look roughly the same, but the final output may be missing frames. As for motion, you'd probably not want to feed it post-edited videos, mostly because of the matrixing that appears in motion blur, or stuff like FX. You'd probably get the best results applying all that editing to the upscaled video of the RAW footage.
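To illustrate what frame interpolation means at its simplest, here's a toy sketch of a linear blend between two frames. This is just the naive averaging approach for illustration; Topaz's actual method is proprietary and almost certainly more sophisticated (likely motion-compensated), and the tiny arrays here are hypothetical stand-ins for real video frames.

```python
import numpy as np

# Hypothetical 2x2 grayscale "frames" standing in for real video frames.
frame_a = np.array([[0, 64], [128, 254]], dtype=np.float32)
frame_b = np.array([[64, 128], [192, 254]], dtype=np.float32)

def interpolate(a, b, t):
    """Naive linear blend between two frames: t=0 gives a, t=1 gives b.
    An in-between frame generated this way replaces a real captured frame,
    which is why interpolated output can look similar yet lack true frames."""
    return (1.0 - t) * a + t * b

mid = interpolate(frame_a, frame_b, 0.5)
print(mid)  # halfway blend: [[32, 96], [160, 254]]
```

Note that any fast motion between the two real frames would just smear under this kind of blend, which is one reason motion blur and FX in post-edited footage interact badly with interpolation.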
I'm using a 6x upscale on that Mount St. Helens footage to really show the AI's interpretation of the images, and I'm going to turn those into a video. We'll see what the results are like, going from 1280x720 to 7680x4320. That should really make it dance if there are huge differences in how the AI sees the image each frame. If the AI works the way they say it does, by the time the first frame is done the engine should be somewhat familiar with that landscape when it works on the next frame, and then build a better understanding of the whole scene as it progresses.
Meaning a second attempt may yield better results, after the first run has trained their AI.
UPDATE: Surprisingly, it works very well. The full 7680x4320 video actually looked great, though it played like a slideshow on my GPU lol. Here it is attached, scaled down to 2560x1440 and compressed a bit for the TG forums (original images were 1280x720). You can see the areas where Topaz did its magic, and they stay stable through motion.
PS I thought the low-level jitter of surface detail was from Topaz Labs, but it's not; it's in the original timelapse video too. It comes from the sequence images.
PPS I set the FPS to 10 instead of 21 to hopefully make any radical changes easier to see, so sorry for the stutter, it's on purpose.