I did a lot of work with Terragen and photogrammetry a few years back. It sort of works. What we were doing was importing objects, rendering images of them, and then trying to reconstruct the object with photogrammetry. That way you can use the original object as a ready-made baseline to compare the reconstruction against. We were testing photogrammetry software and image planning, so we were more interested in relative accuracy, but we were never able to create any models that looked as good as the originals.
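For the baseline comparison, a symmetric chamfer-style distance between sampled points on the original mesh and the reconstruction is one simple relative-accuracy measure. This is just a sketch of that idea, not the metric we actually used; the brute-force pairwise distances are only practical for small samples:

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric mean nearest-neighbour distance between two point sets.

    a: (N, 3) points sampled on the original surface.
    b: (M, 3) points sampled on the reconstruction.
    """
    # Full pairwise distance matrix, brute force (fine for small samples).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Average nearest-neighbour distance in both directions.
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

# An identical point cloud should score zero against itself.
pts = np.random.default_rng(0).random((100, 3))
print(chamfer_distance(pts, pts))  # -> 0.0
```

A lower score means the reconstruction hugs the original surface more closely; a real pipeline would use a k-d tree for the nearest-neighbour queries.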
The structure-from-motion algorithm that most photogrammetry software uses works by matching points across the images. If it can identify the same point on the surface in several images taken from different angles, it can calculate that point's 3D position. To make that easier, we'd often apply a high-frequency power fractal over the surface to give the matcher more easily identifiable points. That helped us get better geometry in the end result, but it won't help if texture is your end goal.
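The core of that position calculation is triangulation: once the same surface point has been matched in two calibrated views, its 3D location falls out of a small linear system. A minimal sketch of the standard linear (DLT) triangulation, with made-up camera parameters for the demo:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) pixel coordinates of the matched point in each image.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the null vector of A, i.e. the last right-singular vector.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenise

# Demo: two cameras a metre apart, both looking down +Z (arbitrary values).
f = 800.0
K = np.array([[f, 0, 320], [0, f, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, 0.1, 5.0])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_est, X_true, atol=1e-6))  # -> True
```

With noise-free matches the recovery is exact; with real images the matcher's localisation error is what makes the featureless-surface case so hard, which is why the power fractal trick helps.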
The hard part of photogrammetry is usually calculating the camera positions and the surface geometry; most software computes a texture map as a final step. Since Terragen can export a mesh, and the camera positions and parameters are known, it might be worth checking whether any of the software will let you import the mesh and camera positions and jump straight to that last step of computing the texture map.
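Conceptually, that last step just projects the known mesh through the known cameras and samples the rendered images. This is a hypothetical per-vertex sketch of the idea, not any particular package's API; real texturing works per-texel and adds visibility tests and multi-view blending:

```python
import numpy as np

def sample_vertex_colours(vertices, P, image):
    """Project mesh vertices through a known camera and sample the render.

    vertices: (N, 3) mesh vertex positions.
    P: 3x4 camera projection matrix (known, e.g. exported from the renderer).
    image: (H, W, 3) rendered image from that camera.

    Hypothetical helper for illustration only -- no occlusion handling,
    nearest-pixel sampling, single view.
    """
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    proj = homo @ P.T                 # (N, 3) homogeneous pixel coords
    uv = proj[:, :2] / proj[:, 2:3]   # perspective divide
    h, w = image.shape[:2]
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    return image[v, u]                # one RGB sample per vertex
```

Since the cameras came from the renderer rather than from structure-from-motion, there's no pose error at all, which is exactly why skipping straight to texturing could be attractive.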