Bringing AI Diffusion to Terragen 4

Started by WAS, November 02, 2023, 03:50:03 PM


WAS

Would you all just hate me if I made a Stable Diffusion bridge for Terragen 4, for turning low-poly quick renders into impressive high-resolution scenes? :P I'm even thinking of masking and compositing objects within scenes via layer elements, to retain those subjects in the diffusion. Just brainstorming, but if I did this, it means hundreds... to millions... of people would come storming over here for the Terragen 4 freeware and asking all those questions.
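For the masking idea, here's roughly what I'm picturing, using diffusers' inpainting pipeline. Just a sketch; the file names are placeholders, and the mask would come from a Terragen render element (white where SD is allowed to repaint, black where the subject is kept):

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Load a standard public SD inpainting checkpoint (one example model id).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init = Image.open("tg_render.png").convert("RGB")     # low-poly TG quick render
mask = Image.open("subject_mask.png").convert("RGB")  # white = repaint, black = keep subject

result = pipe(
    prompt="photoreal alpine valley, dramatic light",
    image=init,
    mask_image=mask,
).images[0]
result.save("tg_render_diffused.png")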

Matt

ML image generation is massive, of course. However, do you think Terragen 4, as it is now, is a good tool for constructing low-poly control images? I'm not sure. Future versions of Terragen should be better in that regard.
Just because milk is white doesn't mean that clouds are made of milk.

billhd

WAS - Respect, not hate. I noticed Blender has an SD add-on. I played a little with image-to-image; one can squeeze some better realism out of so-so renders, and with some compositing and multiple passes a lot is possible. It's also a way to add plants without the models. The attached is from a similar-layout TG render passed through SD with a text prompt for the plants, and the rocks were text-altered from very rough to more slab-like. I think TG + AI is a goer in multiple ways. I like image-to-image because of the potential for control. Bill

WAS

Quote from: Matt on November 03, 2023, 02:42:04 PM
ML image generation is massive, of course. However, do you think Terragen 4, as it is now, is a good tool for constructing low-poly control images? I'm not sure. Future versions of Terragen should be better in that regard.
The benefit I see is that Terragen offers a very straightforward way to set up a simple scene, navigate, etc., and just render out to the diffusion backend. With Blender, which already has bridges, you're either starting from scratch or you're an advanced user building a workspace with all that magic macro sauce. And while there's no GPU acceleration for the terrain generation, with higher denoise Stable Diffusion can work past low-quality jagginess.
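To illustrate the denoise point, a minimal img2img sketch with diffusers (model id and file names are just examples); the strength parameter is the denoise, and pushing it higher lets SD invent detail over the jaggies, at the cost of drifting further from the render:

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init = Image.open("tg_quick_render.png").convert("RGB").resize((768, 512))
out = pipe(
    prompt="photoreal desert terrain, detailed rock strata, golden hour",
    image=init,
    strength=0.55,      # the "denoise": higher values paint over low-poly jagginess
    guidance_scale=7.5,
).images[0]
out.save("diffused.png")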

Quote from: billhd on November 03, 2023, 09:02:04 PM
WAS - Respect, not hate. I noticed Blender has an SD add-on. I played a little with image-to-image; one can squeeze some better realism out of so-so renders, and with some compositing and multiple passes a lot is possible. It's also a way to add plants without the models. The attached is from a similar-layout TG render passed through SD with a text prompt for the plants, and the rocks were text-altered from very rough to more slab-like. I think TG + AI is a goer in multiple ways. I like image-to-image because of the potential for control. Bill
Plants are a good pro, for sure. That's always been a struggle for me; throwing 50 bucks here and there for relatively small packs of plants is not something I want to get in the habit of. Lol

I'm still brainstorming, but I can see how it could work. I was thinking this may be something to drive more from the Stable Diffusion side, though: we'd have presets of terrain types, pass seeds from the UI to the back-end, and use Python to set new terrain seeds in the .tgd XML (rough sketch below). This would allow immediate use by novices without any know-how of TG4, while of course still allowing more advanced setups, like selecting a TGD to start from rather than a preset.
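Since .tgd files are XML, the preset idea could be as simple as something like this. The assumption here, which would need checking against a real project file, is that the fractal nodes store their seed as a "seed" attribute:

import random
import xml.etree.ElementTree as ET

def reseed_tgd(tgd_in: str, tgd_out: str, ui_seed: int) -> None:
    """Copy a Terragen project, replacing every 'seed' attribute found."""
    rng = random.Random(ui_seed)       # derive per-node seeds from one UI seed
    tree = ET.parse(tgd_in)
    for node in tree.iter():
        if "seed" in node.attrib:      # assumes seeds live in XML attributes
            node.set("seed", str(rng.randint(0, 2**31 - 1)))
    tree.write(tgd_out, encoding="utf-8", xml_declaration=True)

reseed_tgd("preset_desert.tgd", "run_001.tgd", ui_seed=42)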

Someone in the ComfyUI community has already made a system for creating seamless textures inside ComfyUI: https://github.com/melMass/comfy_mtb

With the right high-resolution processes plus upscale models, you can get some pretty impressive textures in just a couple of minutes.
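For the upscale step, diffusers ships a 4x upscaler pipeline that would fit here (another sketch; the prompt and file names are placeholders):

import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("texture_512.png").convert("RGB")  # e.g. a ComfyUI-generated tile
hires = pipe(prompt="seamless rock texture, highly detailed", image=low_res).images[0]
hires.save("texture_2048.png")                          # 4x: 512 -> 2048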