Sorry I have been absent

Started by WAS, June 06, 2022, 02:26:40 PM


WAS

Sorry, I haven't been around much for more than a comment or two, and haven't shared much. I had gotten COVID, and just got done with a 14-day isolation. On top of that, I got done with COVID right in time for my allergies from some biennial plant (it blooms every other year or every three years). I am just miserable; I'm talking about not being able to see because of a constant mid-sneeze feeling looming, on top of the constant sneezing.

I've just been letting my computer run Colabs and doing AI art, since I don't need to think, or really see much, to do anything. I just put in some unique text prompts or init images.

I've also been having RAM issues. One of my DIMMs is dead, so anything past 16GB and the system becomes unstable. Since that's the size of a DIMM, it's safe to say the secondary has crapped out, leaving me with an effective 16GB of RAM instead of 32GB. 32GB was barely enough to do the vegetation I wanted to do to begin with, so... :P Hopefully I can have this resolved soon. I also got some new equipment that should finally give me at least mid-to-low-end performance with CUDA for Blender. I need CUDA so badly for volumes; they are just ungodly slow on CPU. Adaptive terrains are actually fast on CPU, at least twice as fast as TG currently. So I'd love to really get the "real" Blender experience with CPU + GPU.

Hannes

Oh boy, what a pity! I hope you'll get better soon.
And I hope your computer will get better soon as well!! ;) Take your time!!

Dune

That's quite awful. Sorry to hear that you've been so ill. COVID is underestimated in my opinion; at least here, people act as if it's over. But it can still have serious consequences, so I hope you'll get better soon.

cyphyr

Hey, COVID sux, hope you get better soon.

Speaking of AI, have you managed to have a look at Midjourney yet?

Some of the output I have seen is truly great (some less so, of course). I have applied to the beta but haven't heard anything back yet.

I was wondering, as someone who has swum in the AI seas of creativity, if you had any comments on it.
www.richardfraservfx.com
https://www.facebook.com/RichardFraserVFX/
/|\

Ryzen 9 5950X OC@4Ghz, 64Gb (TG4 benchmark 4:13)

WAS

I haven't been able to get into the Midjourney beta, or the DALL-E 2 beta. People have been trying like mad to sell me Midjourney invites, but I ain't biting. I'll wait until I get an invite or it's released.

I have been using Disco Diffusion on Colab Free.

Here are a few shares I did recently:
https://www.reddit.com/r/DiscoDiffusion/comments/v6nri2/medieval_portraits_disco_diffusion_52/
https://www.reddit.com/r/DiscoDiffusion/comments/v5jed7/space_queen_style_experiment_disco_diffusion_51/
https://www.reddit.com/r/DiscoDiffusion/comments/v6f9v8/ancient_rocky_landscapes_concepts_disco_diffusion/ (Uses Rene as an artist prompt)
https://www.reddit.com/r/DiscoDiffusion/comments/v6f5lc/oil_bubbles_disco_diffusion_v52/
https://www.reddit.com/r/DiscoDiffusion/comments/v5meee/transmutation_of_the_heart_of_the_jungle_disco/
https://www.reddit.com/r/DiscoDiffusion/comments/v5kwen/medieval_castles_in_traditional_watercolour_disco/
https://www.reddit.com/r/DiscoDiffusion/comments/v462hc/celestial_mindseye_beta_disco_diffusion_v52/

Before I got used to Colabs, I used this: https://multimodal.art/mindseye. It's a lot easier for setting up the Disco Diffusion model, or the Latent Diffusion model (better for realism). It runs all the code at once and launches a nice GUI.

The only issue I have with MindsEye is that it can't do prompt weights, which are pretty important. For example, if you didn't want bokeh and blur effects all over, you'd need a prompt like:

["A beautiful beach with rolling waves crashing on to it, trending on Artstation, Highly Detailed, 4k resolution, 8k resolution:5", "blur, bokeh, dof:-4"]

This would give your initial prompt a weight of 5, making it the highest priority in the positive range. Blur, bokeh, and dof are weighted negative 4 (the weights can't sum to a total of 0), so ignoring those keywords/styles is high priority.

Weights can be used to really drive a creation in a certain way. For example, let's say you wanted to make an organic-looking demon city. Something like this may give you the desired results, while keeping the undesired out:

["An evil landscape with a foreboding demon city, trending on ArtStation, by Thomas Kinkade, Organic Round Shapes:5", "angular, square, straight:-2", "blur, bokeh, dof:-4"]

Or:

["An evil landscape with a foreboding demon city, trending on ArtStation, by Thomas Kinkade:5", "Organic Round Shapes, by H.R. Giger:3", "angular, square, straight:-2", "blur, bokeh, dof:-4"]

This would prioritize CLIP's search for what you're after, and try to de-prioritize what you don't want: angular shapes, square shapes, straight lines, whatever you want.

If you want to try straight Disco Diffusion, here is the latest Colab: https://colab.research.google.com/github/alembics/disco-diffusion/blob/main/Disco_Diffusion.ipynb

cyphyr

The Midjourney stuff all seems to have a very high and specific quality; that might be because I'm inundated by it on LinkedIn at the moment.

Why the "trending on Artstation" part?
I wonder if you could get it to use your own work as a reference point?

Looking at some of the Disco Diffusion creations, they seem to have a very painterly quality, and they suffer from perspective problems a lot.

WAS

I'm pretty sure Midjourney IS Disco Diffusion with custom models, and I'm pretty sure all the images are similar because it's a style transfer (where they have pretrained styles to apply). You can get a lot of cohesion out of DD too; you just need to get your prompts right, plus luck of the draw. Midjourney is the same way. People are probably doing 20-plus iterations and picking the best few. Also, people on the Midjourney group and Reddit seem to rely heavily on init images rather than letting the AI go crazy. I am just now starting to play with init images.

Styles of the images come down to prompts too, like "Oil painting" or "digital art" or "CG", etc. I stick to painting styles because I like the sense of more traditional art. Also, a lot of people, including me, rarely change the tv_scale, which applies denoising to the generation, or noise reduction to the CLIP images, which can help smooth out rough painting artefacts in the resources it finds. I should play with those.
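For anyone curious, those knobs live in the settings cell of the Disco Diffusion Colab. A rough sketch of the relevant lines, with values as I recall the notebook's usual defaults (treat them as illustrative, not recommendations):

# Disco Diffusion Colab settings cell (values illustrative)
clip_guidance_scale = 5000  # how strongly CLIP steers the image toward the prompt
tv_scale = 0       # total-variation denoising; raise it to smooth out noisy artefacts
range_scale = 150  # penalizes out-of-range RGB values
sat_scale = 0      # penalizes oversaturation

Raising tv_scale is the main lever for the rough painting artefacts mentioned above.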

"Trending on Artstation" allows CLIP (the engine that steals images regardless of copyright, and mashes them up into shapes) to bring up Art images from ArtStation. Similarly to doing "CGSociety" or "DeviantArt" The Trending is just cause ArtStation has a trending filter that rapidly changes with new and popular images.

I think the only reason this AI stuff isn't in trouble for infringement is the whole "10% fair use" idea stipulated in lawsuits regarding IP images and art (or just not seeking profit, or being third-party promotion, like fan work composed of characters from a show/film in a non-obscene way). The CLIP engine and models reconfigure and build things so uniquely that sometimes it would be hard to say the result is even "10%" of the original work, if you could even discern the original to begin with. Using other people's art for init images, where you can clearly see the composition of the original work, is another topic.

WAS

PS: A style transfer is basically forcing CLIP to use the same image (instead of images from all over the net based on your prompt) to reproduce the supplied source. So it basically rebuilds your result with a predefined image, or a model; in the case of Wombo Dream, they have private images for the Styles you can pick from. Midjourney may be a model, but from the consistent results across artists (which I don't like; everyone's work feels like it's done by one person in their style), I feel they use images for their style too.

Also, more random information to help with styles or results, which I haven't even played with: prompts by steps. Say your total steps are 250; you could start a new prompt at, say, step 125, halfway through, that is in a totally different style, to apply to the original. You could build a world in the style I did for the rocky landscapes, and then set a prompt at 125 that is something like "H.R. Giger Organic Art", basically using your first 125 steps as an init image for the next prompt. Like prebuilding a world for a new style. A sketch of what that might look like is below.
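The Disco Diffusion Colab takes text_prompts as a dict keyed by a number, so assuming that key is the step where a prompt takes over, as described above (check your notebook version, since some builds key it by animation frame instead), the rocky-landscape-into-Giger idea might look something like this:

# Hypothetical step-scheduled prompts; keying by step is an assumption.
text_prompts = {
    # Steps 0-124: establish the world, like the rocky landscapes share.
    0: ["Ancient rocky landscape, trending on ArtStation:5",
        "blur, bokeh, dof:-4"],
    # Steps 125+: restyle what has already formed, as if it were an init image.
    125: ["H.R. Giger Organic Art:5",
          "blur, bokeh, dof:-4"],
}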