Severe need for documentation

Started by PabloMack, April 18, 2010, 01:25:43 PM


timj

Quote from: Oshyan on April 22, 2010, 03:14:06 AM
We're a small company by choice. We've made the conscious decision not to become answerable to investors who - in general - only have a profit motive. That has limited our ability to grow of course, but it's a worthwhile trade-off for us. We are growing organically (1 new employee every 3 years on average so far, I think ;D), and plan to continue doing so. We also charge a lot less for our product than many other companies would, and we offer a free version. These kinds of policies probably wouldn't be very popular with stockholders.
-snipped

- Oshyan
That's fair enough Oshyan. However, that decision also means that development on Terragen is glacially slow. I bought TG2 Deep Animation approx 2 yrs ago because I was led to believe (from text on the website) that there was an SDK in the offing. After all this time there is still no SDK available.
I'd rather see more effort going into speeding up development than into documentation.
- Tim

shadowphile

Frankly I'm surprised a software company can survive with such glacial progress.  One employee every three years?  The landscape (no pun intended :)) is always moving in the high-tech world with 3 and 6 month release cycles.
If I were Planetside, I might be worried that at some point competition will arise.  With CGI becoming more sophisticated and prevalent every day, it's only a matter of time before the lack of an easy/professional/documented landscape development package creates a demand, and then some entity like Blender will decide to tackle special landscape-generation tools.  They will accomplish it in months, probably with just one programmer doing the core routines, because the entire user interface API is done and works spiffily.

My other huge and blinding headache has to do with the lack of information about node connections.  I'm not a pre-rolled-solution kind of guy: I think of what I want to do, how it is built up, and how I can use the nodes to build it.  A frustrating session the other night reminded me of why I can't stay focused for very long with TG: nothing is intuitive.  Even with a bunch of tedious re-renders after every (unintuitive) tweak, I find it appalling that I still can't identify what is in a line between nodes!  The content seems to be a willy-nilly glob: the following input takes what it expects, or converts something else if not.  Standard data typing and identification should be MANDATORY for the inputs and outputs of the nodes; otherwise it's just an organic collection of functions that happen to make interesting pictures when combined in patterns that others tell us will work.  Tool-tips would be a great place to insert this info, so that those who don't like manuals can still learn on the fly.  In fact, that would be necessary, since the contents of the output can change depending on the input, and the input will interpret the incoming data differently depending on what is available, gasp!

I think part of the issue is that the program is based on a shader language, a specialized construction approach that seems to build in layers, which makes sense at first, but not for general purpose design of landscapes, especially when doing extreme displacements.  I am constantly trying to un-learn my programming and design experience to get out of my box.

And BTW, I'm not one of those for whom trying things out is part of the fun.  That's an experiment, or me playing detective.  I bought TG to build terrains/skies as an artist or designer.  If it was more intuitive or faster (the 3D preview isn't much different than the main render, speed-wise) then I might enjoy exploring more.  Bad settings -> bad (longish) renders -> lose interest.

The only reason I keep struggling with the program is the core routines are awesome!  (and I made an investment :)

Henry Blewer

Planetside concentrates on providing a very stable program. It's one of the strong features of the program. It's possible to crash it, but overall it handles practically any combination of shaders and functions. Very few other programs can claim this.
Terragen 2 is powerful enough to compete with all the other programs of its type. The learning curve is steep, but things do make sense in a short time. After a month of use, most users can grasp how to expand on the simpler methods of landscape creation.
http://flickr.com/photos/njeneb/
Forget Tuesday; It's just Monday spelled with a T

FrankB

Quote from: shadowphile on April 22, 2010, 05:21:01 PM
I think part of the issue is that the program is based on a shader language, a specialized construction approach that seems to build in layers, which makes sense at first, but not for general purpose design of landscapes, especially when doing extreme displacements.  I am constantly trying to un-learn my programming and design experience to get out of my box.

-snipped

Hi Shadowphile,

sorry to read you are having so many troubles with TG2, both with understanding how it works and with render (speed) satisfaction. From your previous posts in other threads I would reckon you understand it quite well by now, though. Still you're struggling, as you say, but I think you're just impatient with yourself. Terragen 2 wouldn't be what it is today, with all its capabilities and flexibility, if it tried to make things simple at the expense of that flexibility. With flexibility comes complexity; it's inevitable. And to get that complexity under control, you need to have some patience. That, combined with taking the freedom to come and ask questions instead of piling up frustration, will make the path easier and more enjoyable.

What could help you understand it a little better is to think of the network as a stack that begins at the top and eventually flows down into the planet node. Every node along the way adds something to what you started with. Just get these things in order and you will not lose control over your network. If you look at it this way, you don't have to worry much about type casting. Inputs will pass on everything from above. If a node has additional inputs, you just have to know what each input expects: colour, displacement, or a number. In most cases.
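Frank's "stack" mental model can be sketched in a few lines of code. This is a purely hypothetical illustration in Python; none of these class or function names come from Terragen itself:

```python
# Hypothetical sketch of the "stack" mental model: state flows top-down,
# each node modifies what it receives, and the planet node is the final
# consumer. None of these names come from Terragen itself.

class State:
    def __init__(self):
        self.displacement = 0.0          # accumulated height offset
        self.colour = (0.0, 0.0, 0.0)    # accumulated surface colour

def heightfield(state):
    state.displacement += 100.0          # base terrain
    return state

def fractal_detail(state):
    state.displacement += 5.0            # small-scale detail on top
    return state

def surface_layer(state):
    state.colour = (0.4, 0.35, 0.3)      # texture what is there so far
    return state

def planet(state):
    return state                         # end of the chain

# Evaluate the stack top to bottom; each node passes everything downstream.
stack = [heightfield, fractal_detail, surface_layer, planet]
state = State()
for node in stack:
    state = node(state)

print(state.displacement)  # 105.0
```

The point of the sketch is the loop at the bottom: every node receives everything from above, adds its own contribution, and passes the whole state down, which is why you rarely have to think about type casting along the main chain.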

As for render time, here's another brutal truth: you need at least a quad-core Intel, but really better an i7, if you don't want TG2 to slow down your creativity. 1.5 years ago I purchased one of the first i7 systems. BEFORE that time, I loved TG2, but frankly I was doing less and less with it on my AMD X2. Render times were so unacceptable for a creative mind that a lot of frustration was building up on my part. A new, fast render system changed that situation dramatically, and I found myself turning out new and more complex renders every other day. The truth is you can't have long-term fun without fast hardware. Except maybe Njeneb with his otherworldly patience and single-core P4, but there are exceptions to everything, aren't there? ;)

Cheers,
Frank

Henry Blewer

Quote from: FrankB on April 22, 2010, 06:10:48 PM
As for render time, here's another brutal truth: you have to have at least a quad core intel, but really better an i7, to not let TG2 slow down your creativity.
-snipped

Cheers,
Frank

I can speak to the render time frustration. I have an old system, a Pentium 4 HT. It still handles many things well, but it takes a couple of days or more to do a large render. Because more complex node setups take more calculations, I tend to keep things simple. It is not helping me learn more, but I can make some very nice pictures, IMHO.
Also, not every tgd file works out. I have many that I have abandoned in favor of a fresh start. With every set up, I try to improve or use something new. Hence the abandoned setups.

acolyte

Shadowphile makes some very good points.

QuoteFrankly I'm surprised a software company can survive with such glacial progress.  One employee every three years?  The landscape (no pun intended) is always moving in the high-tech world with 3 and 6 month release cycles.
If I were Planetside, I might be worried that at some point competition will arise.  With CGI becoming more sophisticated and prevalent every day, it's only a matter of time before the lack of an easy/professional/documented landscape development package creates a demand, and then some entity like Blender will decide to tackle special landscape-generation tools.  They will accomplish it in months, probably with just one programmer doing the core routines, because the entire user interface API is done and works spiffily.

As much as I love Blender (and I LOVE it), I would hate to think that a program of this magnitude could be in any way, shape, or form matched for terrain generation or procedural rendering capabilities; however, Planetside, you need to realize that this could happen. If an open source community gets driven to expand specifically in this way, you might have some new competition on the block that can boast that it's not only good, but free.

QuoteAnd BTW, I'm not one of those for whom trying things out is part of the fun.  That's an experiment, or me playing detective.  I bought TG to build terrains/skies as an artist or designer.  If it was more intuitive or faster (the 3D preview isn't much different than the main render, speed-wise) then I might enjoy exploring more.  Bad settings -> bad (longish) renders -> lose interest.

This has been my entire point all along. Documentation is not something that can be thrown around willy-nilly. If you have a user base as large as this one and they're expected to pay top dollar / euro / pound for said software, then I'm not saying it has to be easy, but it had better be documented well or you will quickly lose the trust and faith of your users. It's like buying an expensive camera and, upon opening the box, finding a hastily scribbled note explaining that better manuals are on the way, but that the following bullet-pointed list will have to do for now. Upon unpacking the rest of the box you are left with a complex camera with multiple parts and no way to know how to use this incredible piece of equipment you just invested so much money in.

Not to beat a dead horse, I know the devs have already responded here, but users who buy a piece of software should not be expected to play this kind of guessing game when it comes to using the basic functionality of the software. If I choose, I should be able to never engage in the forums and still have the opportunity to know as much as I want about how the program I bought works to produce what it promised to produce. I deserve at least that much as someone who actually paid for the software and didn't pirate it.

All that said, I know this changes nothing right now, and promises have been made to get working on the docs and get the ball rolling, but we've heard that before, and not just about the docs. We've heard the same thing about timely release dates for software updates and new releases. I hope this can be a wake-up call for PS to begin to rethink their seemingly firm policy of keeping a select few employees, and to start investing more of our dollars not only in taking the software to bigger and better places, but in improving what's already there and at least getting us up to speed to where we should be before we start thinking about competing with other software in terms of features.

jaf

I guess my concern would be "will the documentation ever catch up to the software?"  

I may be the only user here who is on a dial-up Internet connection, so my struggles with online documentation and searching the forums for answers probably don't mean much to anyone here, but that's my problem.  However, I also use Lightwave a lot, and they provide nice documentation in pdf format that is (as far as I can tell) up-to-date with the current software release.  And Lightwave is a fairly large package and has a very active forum.

Yes, Newtek is a bigger company than PS, but it's still relatively small in the 3D modeling world. Probably Silo is a better comparison with TG, since they are small and have a very slow release record (I'm not sure about their documentation since I haven't kept up with that software).

I'm not looking for a "how to do something in TG", but how each function works and the "whys and wheres" of the connections.  For example, I start with the TG2 default scene and look at the nodes.  I add a lake and I see "Lake 01" pop up in the Water node.  Fine, but I also see a little output triangle which intuition says "I've got to connect that to something or it won't work."

On the other hand, I see the Terrain and Atmosphere blocks feeding "Planet 01", which makes sense.  But the Water and Lighting blocks seem "self-contained".  I know they affect the planet, so why isn't there a physical connection?

I think the biggest frustration is that the developers know all these answers, but don't have time to document them.  But they also have to deal with "how things work" as they develop.  Knowing the answers are "there" is probably what causes the frustrations seen in this thread.  But TG2 is still my favorite software, so they are definitely doing a lot of things right, in my opinion.  :)
(04Dec20) Ryzen 1800x, 970 EVO 1TB M.2 SSD, Corsair Vengeance 64GB DDR4 3200 Mem,  EVGA GeForce GTX 1080 Ti FTW3 Graphics 457.51 (04Dec20), Win 10 Pro x64, Terragen Pro 4.5.43 Frontier, BenchMark 0:10:02

PabloMack

#37
Quote from: FrankB on April 22, 2010, 06:10:48 PM
As for render time, here's another brutal truth: you have to have at least a quad core intel, but really better an i7, to not let TG2 slow down your creativity. 1.5 years ago I purchased one of the first i7 systems. BEFORE that time, I loved TG2, but frankly I was doing less and less with it, with my AMD X2.
-snipped

Which AMD X2 were you using?  I have an AMD Turion X2 (1.79 GHz w/ 1.87 GB RAM) in my current laptop (on which I am writing this text).  Adobe After Effects is somewhat lethargic on this system, but my new cruncher (Red Dragon) has an AMD Phenom II 955 (3.2 GHz w/ 8GB RAM).  My older workstation has a 2.4 GHz Intel P4 single core (maxed out at 2GB DDR1?) which the Red Dragon is replacing.  I did my own video NLE render benchmark to compare speed between the two systems, and the P4 reported that it was only 5% complete when the Red Dragon finished the render.  I didn't want to wait the estimated 2 hours for the P4 to complete its job, so I hit cancel.  Of course, video rendering is more I/O intensive than something like TG2, which is more compute intensive.  I also did my own computation benchmark, and each Phenom II core was only about 5% faster than the P4 (after adjusting for the difference in clock speeds).  The lesson I learned was that AMD's HyperTransport really is dramatically faster than Intel's counterpart for I/O.  This is, in large part, due to the fact that the dedicated I/O channel Intel processors have to the graphics card (including the i7) leaves little bandwidth for other things.  But I do a lot of video editing, so this works for me.  I will have to consider the i7 if I decide to add render farm nodes some day.

FrankB

actually, I don't exactly remember which AMD X2 it was, but it was one of the first that came out. So very, very old, and f**** slow. Well, back then it was cool, and TG Classic was quick, but when I got my hands on the TG2 alpha (in 2005?) it was a nightmare. TG2 is an application for hardware of the year 2009 and beyond. Like modern 3D games made for today's computers, you have to have a new and powerful modern system for TG2. I mean, we all know it "runs" on older HW, but to stay with the analogy: would you want to play a game at 5 fps? Probably not :)
As for the right choice of CPU for TG2, the benchmark site tells you everything in plain truth. An i7 is what you need.

Cheers,
Frank

shadowphile

Just for the record: programming languages, either graphical or textual, exist to allow the user to roll their needs, and are probably the best examples of powerful and flexible = complexity.  I have no problem with complexity, I live it in fact.
But you won't find one of those that doesn't have very precise documentation on each and every function and what the inputs and outputs expect, and how the data itself flows.

For example, in another thread I started a while back, it was clearly explained to me that the blend-by-shader input only uses color, and yet when I turn on displacement in the shader feeding the blend-by-shader input, the whole image disappears.
Who is in charge here, the input or the data connected to the input?  Wouldn't an input that only uses color ignore everything else?  Without having read the 'valleys' thread (very short), I eventually figured out I had to turn off displacement, EVEN THOUGH I wanted to use the displacement data from the same blending shader.  Sometimes one wants both displacement AND color data synchronized from the same source.
Does that make sense?  I've attached a simple example clip.

Lots of people can tell me how to make something work, but I really need somebody who can tell me why something (that seems like it should) DOESN'T work.
(even in college I noticed a split amongst students: those who learned by copying and those who learned conceptually.   I was often frustrated by TAs or profs who couldn't or wouldn't explain WHY MY THINKING WAS BAD.  Without that, I'm just doomed to keep applying the same bad thought process unless I get lucky.)


PabloMack

#40
Quote from: shadowphile on April 23, 2010, 08:06:26 PM
(even in college I noticed a split amongst students: those who learned by copying and those who learned conceptually.  

Let me guess, the first group dropped out and became artists and the second group stayed in science or went into engineering.  There are many people, even professors in technical fields, who don't understand what they are teaching.  It is very frustrating to students who want to know how things work.  

If you ask me, I would say that a colour input or output should be broken into its three or four channels (HSL, HSLA, RGB or RGBA).  You must follow where those specific signals go before you can really understand what is happening.  And then you need to know what the node's "black box" is doing with it.  It must also be understood what the active ranges are for all channels ($00 thru $FF vs. 0.0 thru 1.0, etc.).  In a really flexible node system you could hook colour channel so-and-so to displacement channel thus-and-thus and it should work.  And if you get strange behaviour, you should be able to trace the signals and do the calculations by hand to find out where you went wrong.  Then you could insert a node that re-scales the signal to give it the range you want, if that is what it takes.

As a programmer, I would probably prefer a node system that uses a list of assignment statements rather than a messy graphical node system as is standard practice.  If you try to assign the same input with conflicting expressions, the software would check and flag the conflict as an error.  This is how it should be.  But if you hook a 3-channel output to a 1-channel input in a graphical node system, what is going on?  Does the input just accept the luminance, or what?  And if it is not permitted, why does the node editor let me do it just to give me an undefined result?  This is how it should not be.

So ideally, colour inputs and outputs should have four channels, not one, and even a choice of RGBA vs. HSLA.  You should be able to hook the Blue output to the Red input and get any effect you want if you are willing to live with the visual consequences.  There would be more lines running every which way in a graphical node system, but at least you could see what you are doing.  You could provide a bus output that hooks all four channels to the bus input of another node.
But then you are still left with the question "How does the software handle mismatched channels (or does it)?"
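The type-checked connection scheme described above could look something like this. It is a hypothetical sketch of the idea in Python, not how TG2's node editor actually behaves:

```python
# Hypothetical sketch of typed node ports with explicit channel counts,
# where a mismatched connection is rejected instead of silently accepted.
# None of this reflects Terragen's real internals.

class Port:
    def __init__(self, name, channels):
        self.name = name
        self.channels = channels  # e.g. 4 for RGBA, 1 for a scalar

def connect(output, input_):
    """Refuse a connection unless the channel counts match."""
    if output.channels != input_.channels:
        raise TypeError(
            f"cannot connect {output.channels}-channel output "
            f"'{output.name}' to {input_.channels}-channel input "
            f"'{input_.name}'"
        )
    return (output, input_)

rgba_out = Port("colour out", 4)
scalar_in = Port("displacement in", 1)
scalar_out = Port("luminance out", 1)

connect(scalar_out, scalar_in)        # fine: 1 channel -> 1 channel

try:
    connect(rgba_out, scalar_in)      # flagged, not silently truncated
except TypeError as e:
    print(e)
```

With a rule like this, hooking a 3- or 4-channel output to a 1-channel input would be flagged at connection time rather than producing an undefined result at render time.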

Tangled-Universe

Quote from: shadowphile on April 23, 2010, 08:06:26 PM
Does that make sense?  I've attached a simple example clip.

The reason why this file doesn't work is because your "mountains distro" powerfractal appears in the node-network but its settings are not described/stored in the .tgd-file.
The "mountains displ" is described/stored in the .tgd-file and therefore works as expected.

How did you create this file? Did you edit it manually in a text-editor?

PabloMack

#42
After thinking about it some more, I must rescind some of what I wrote two posts ago.  I think that many of the lines connecting nodes represent light (correct me if I'm wrong) and some of the nodes represent objects acting on the light.  In that case, there is no alpha channel for light, because alpha is a property of an object (or part of the object's surface) that acts on the light.  And for realism, you would only connect a Blue light output into a Blue light input, because light doesn't suddenly become another colour unless it interacts with something that makes it change (i.e. an object/node).  So I now think that node interconnects may never need to be broken up into channels.  After all, we all want realism in our renders, correct?  If someone really wanted to alter a light channel, he/she should have some sort of math node to pass the signal through and do the adjustment inside the node.  So if someone connects a light output to an inappropriate input on another node, the software should probably report an error and disallow the attempted change to the node network.

Tangled-Universe

Quote from: PabloMack on April 24, 2010, 08:45:18 AM
After thinking about it some more, I must rescind some of what I wrote two posts ago.  I think that many of the lines connecting nodes represent light (correct me if I'm wrong) and some of the nodes represent objects acting on the light.
-snipped

Well, actually, your previous thoughts were close to how it works, I think.
The reason I don't think light is being represented is that a light source has no output to connect to the input of a surface layer/powerfractal/whatever.

I'll basically try to lay out my understanding of how TG2 works. Hold your breath ;) :

Terrain -> Shaders -> Planet
+
Atmosphere -> Clouds -> Planet
= Planet + (water +) Lighting
(water is an "object" and not a shader; that's why it isn't connected to the planet)

So:

In the terrain tab you set up the fractal flavours with which you want to create your landscape.
The fractal is generated inside the node and can generate either displacement (scalars) or colours, depending on how you configure it. Here's roughly how:

Displacement: if you leave colours unchecked, the internal fractal will not create colour (which you could, for example, use for texturing in the shader tab).
The fractal is generated using the scale definitions and the displacement parameters in the displacement tab. The colour settings in the colour tab do not affect the profile of the fractal; only the scales and displacement settings do. Just try it for yourself: start a new file and add a powerfractal terrain. Go to the terrain fractal and adjust any of the colour parameters. No effect.

If you want the colour settings to affect the displacements, then feed the powerfractal output into the shader input of a displacement shader. This shader allows for colour-based adjustments inside the powerfractal, and you can control the displacement with the displacement shader.
The powerfractal probably generates scalars to build up the fractal's displacement, and the displacement shader converts the scalars to colours; that's why you can do colour-based adjustments of the powerfractal.

So basically the node which is being fed determines what happens with the data.
Therefore the breakup shader, blend shader, and colour input of a surface layer are programmed to use colour data.
The displacement input of a surface layer works similarly to a displacement shader.
The layer's child input accepts both kinds of data, since you wouldn't want to discriminate between the two, for the logical reason that you can do that with the shader you choose to use as a child.
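The principle that the receiving input decides how to interpret the data can be illustrated with a small sketch. This is hypothetical Python for illustration only, not TG2 internals:

```python
# Hypothetical sketch: the same upstream value is interpreted differently
# depending on which input consumes it. A power-fractal-like node emits a
# single value; a colour input reads it as a grey level, while a
# displacement input reads it as a height offset.

def fractal_value():
    return 0.75  # some noise value in [0, 1]

def colour_input(value):
    # interpret the scalar as a greyscale colour
    return (value, value, value)

def displacement_input(value, amplitude=200.0):
    # interpret the same scalar as metres of displacement
    return value * amplitude

v = fractal_value()
print(colour_input(v))        # (0.75, 0.75, 0.75)
print(displacement_input(v))  # 150.0
```

Note that the producing node emits the same data either way; only the consumer's interpretation differs, which matches the observation that what a connection "contains" depends on what is plugged into it.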

When I use these principles it is also fairly easy to understand how you can use powerfractals for colour.

To go a bit further:

Once you have generated the displacements in the terrain tab, you compute the terrain. Why?

- Compute terrain computes the normals and texture coordinates of the geometry generated by the shaders above it.
The resolution of the computation is determined by the patch size.
Compute terrain allows for restriction to slopes (which needs computed normals to determine vectors) and height (texture coordinates).
- Compute normal computes the normals of the displacements, so as mentioned this only allows for slope restriction.
This can be useful if you want to do vertical/lateral displacements, since displacing in those directions demands a "known" normal to function correctly.
- Tex coords from XYZ computes the texture coordinates/height information, so it allows for height restriction.
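The reason slope restriction needs computed normals first can be shown with the standard finite-difference way of estimating a normal from a heightfield. This is the generic technique sketched in Python, not Planetside's actual implementation:

```python
# Generic sketch: estimate a surface normal from a heightfield with central
# differences, then derive the slope angle that a slope restriction would
# compare against. This illustrates why slope limits need normals computed
# beforehand; it is not Terragen's actual code.

import math

def height(x, z):
    return 0.5 * x  # a simple ramp as stand-in terrain

def normal_at(x, z, eps=0.01):
    # central differences of the height function in x and z
    dhdx = (height(x + eps, z) - height(x - eps, z)) / (2 * eps)
    dhdz = (height(x, z + eps) - height(x, z - eps)) / (2 * eps)
    n = (-dhdx, 1.0, -dhdz)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)  # unit normal

def slope_degrees(normal):
    # angle between the normal and straight up (the y axis)
    return math.degrees(math.acos(normal[1]))

n = normal_at(0.0, 0.0)
print(round(slope_degrees(n), 2))  # 26.57
```

A "restrict to slopes below 30 degrees" rule would then simply test `slope_degrees(n) < 30.0` at each point, which is only possible after the normals exist.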

The terrain and shader groups are a guideline for building the scene; that's why you basically generate and compute the terrain in the terrain tab, and then use the computed normals plus texture coordinates to texture the terrain in the shaders tab.

Once the terrain and textures are produced, they are fed into the planet shader.
The planet shader also has an atmosphere shader for the atmosphere and clouds.
These are combined with the lighting settings to generate the final outcome of the scene.

Sorry for the somewhat messy "quick" (took me half an hour though, lol) write-up of how I see TG2 working; so far it works fine and logically for me.

Hope some of this helps.

Cheers,
Martin

Kadri

#44

See, it is so easy!












Sorry Martin, couldn't resist  ;D Thanks for the explanation.