Poll: Suggestion for improving Node Editor - Connector Shapes

Started by PabloMack, May 05, 2010, 11:08:01 AM

Should we have different shaped connectors on Nodes in Node Editor?

Keep All Triangles
Shape Reflects Signal Class

PabloMack

Hi jo,

Thank you for piping in. I was starting to feel lonely with no TG staff. I have a question for you (or several related ones). With displacement, we are dealing with a scalar, correct? But with colour, we are talking about a 3- or 4-dimensional vector. So if a colour vector output is fed into a scalar input, is the luminance extracted from the source vector to obtain the scalar that drives the input? And if the colour contains alpha, what then? If you are extracting luminance for this purpose, then I presume hue and saturation are thrown away (along with alpha?). And if you drive a colour (vector) input with a scalar output, then you either have floating (or defaulting) hue and saturation, or you drive all three RGB channels with the same scalar, so you are using a monochrome signal.

Of course, there are also dynamic-range compatibility issues (the legal or typical end points of each scale), which can be adjusted in math nodes in systems that have them. In my experience, luminance is typically limited to a range of 0 to 1, while displacement may go way beyond 1.

jaf

Of course that gets us back to the root of the problem.  If the documentation was up-to-date and thorough, you wouldn't have to ask many of the questions and wouldn't need the on-screen aids.

I have experience as an Electrical and Software Engineer and some limited time as a Human Factors Engineer, so I probably think more like you, PabloMack. But it was all in the past, on older, larger equipment like P-3 and F-15 flight simulators.

I would rather pick up a book to get answers. But if that's not available, I would rather read a PDF document than do Internet/forum searches. Of course no documentation will answer everything, and that's where the power of this forum comes into play. And new ideas.

I especially agree that we don't want TG2 to lose its flexibility just to make it easier for beginners to use. We don't want it to lose performance due to "clutter" or look and feel like an MS Office application. I think I can say "we" because I haven't seen much evidence to the contrary.

However, there are lots of things that could enhance the new user's experience without impacting performance or flexibility. I think it's good to throw an idea around here with discussion rather than simply discount it with "I like it the way it is" without thinking of the users as a whole.

The really expert users who post here (not me, by any means) are a minority of the entire TG2 community, but they get the most attention, as they should, because they are experts. But that means the consensus of most forum topics is usually in favor of the experts.

Anyway, fun stuff!
(04Dec20) Ryzen 1800x, 970 EVO 1TB M.2 SSD, Corsair Vengeance 64GB DDR4 3200 Mem,  EVGA GeForce GTX 1080 Ti FTW3 Graphics 457.51 (04Dec20), Win 10 Pro x64, Terragen Pro 4.5.43 Frontier, BenchMark 0:10:02

mogn

Quote from: PabloMack on May 05, 2010, 10:04:47 PM
Hi jo,

Thank you for piping in. I was starting to feel lonely with no TG staff. I have a question for you (or several related ones). With displacement, we are dealing with a scalar, correct? But with colour, we are talking about a 3- or 4-dimensional vector. So if a colour vector output is fed into a scalar input, is the luminance extracted from the source vector to obtain the scalar that drives the input? And if the colour contains alpha, what then? If you are extracting luminance for this purpose, then I presume hue and saturation are thrown away (along with alpha?). And if you drive a colour (vector) input with a scalar output, then you either have floating (or defaulting) hue and saturation, or you drive all three RGB channels with the same scalar, so you are using a monochrome signal.

Of course, there are also dynamic-range compatibility issues (the legal or typical end points of each scale), which can be adjusted in math nodes in systems that have them. In my experience, luminance is typically limited to a range of 0 to 1, while displacement may go way beyond 1.


There is no difference between a vector and a colour; they are both 3-component data types and the components can take any value.
If you connect the output of a colour node to a scalar input, you get the scalar sqrt(red^2 + green^2 + blue^2).
If you apply the "colour to luminance scalar" to a scalar, you get weight(red)*scalar + weight(green)*scalar + weight(blue)*scalar.
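
For illustration, here is a minimal Python sketch of those two conversions as described above; the luminance weights are assumed placeholder values, not necessarily the ones Terragen uses:

import math

# Assumed luminance weights, for illustration only; Terragen's actual values may differ.
W_RED, W_GREEN, W_BLUE = 0.30, 0.59, 0.11

def colour_to_scalar(red, green, blue):
    # A colour output connected to a scalar input: magnitude of the 3-component vector.
    return math.sqrt(red**2 + green**2 + blue**2)

def luminance_from_scalar(scalar):
    # "Colour to luminance scalar" fed a scalar: the scalar fills all three
    # channels, so the result is the weighted sum of the same value.
    return W_RED * scalar + W_GREEN * scalar + W_BLUE * scalar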

I vote for no change, except that it would be nice if nodes with no input could suppress the input triangle.

Matt

I think the main reason there isn't a clear flow of strictly typed data in Terragen is because of the early decision to do much of the work by chaining together shaders which can perform arbitrary modifications on a wide range of data. This is the general case with the shaders, which are the red coloured nodes.

Different shaders do different things and work in different ways. However, there are some basic rules.

Most shaders (the nodes coloured red) are designed to manipulate the current shading state. The shading state contains information about the current point being shaded, including (but not limited to) position (that includes displacement), undisplaced position, texture coordinates, and various surface normals. Most shaders execute the shader that's connected to their main "input" input, then perform their own modification of the state. For this reason, in general there is no particular type of data that an input requires. You can therefore think of the main "input" input as being an entire description of a point on a surface or in a volume.
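
As a rough illustration of that chaining idea (the class and field names below are invented for the sketch, not Terragen's actual API):

class ShadingState:
    # A tiny stand-in for the shading state: the current point being shaded.
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)             # displaced position
        self.undisplaced_position = (0.0, 0.0, 0.0)
        self.texture_coords = (0.0, 0.0)
        self.normal = (0.0, 1.0, 0.0)
        self.diffuse_colour = (1.0, 1.0, 1.0)

class Shader:
    def __init__(self, main_input=None):
        self.main_input = main_input                # the main "input" input, if connected

    def shade(self, state):
        # Run the upstream chain first, then apply this shader's own modification.
        if self.main_input is not None:
            self.main_input.shade(state)
        self.modify(state)

    def modify(self, state):
        pass                                        # each concrete shader overrides this

So the main input connection passes the whole description of the point along the chain, and each node edits it in place.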

I designed the main flow of shaders to work this way so that each shader can do more than one thing to a surface, not just generate some value.

Other inputs may expect particular types of object, such as a camera, an object (in some cases a specific type of object e.g. a planet), a shader or function. I don't think we can represent all the types we would want to with just a few simple shapes, but I think a clear indication of the expected type would be very useful and is an unfortunate omission in the current system. There are some basic categorisations (e.g. shader/function, object) which we could represent, but they would be coarse categorisations.

The blue function nodes are designed a little differently. With these I wanted there to be a more conventional flow of data consisting of a single value (whether it has 1 or more components) and for each of the nodes to perform only one low level function that we could easily document, and for parameters to be implemented as connections wherever possible, rather than constant values in a parameter window. For these nodes, there are only 3 possible types of data. Scalar, Colour (3 scalars) and Vector (3 scalars). They are not strictly typed, and conversions between these types will happen automatically (according to specific rules we have documented) if a function expects a different type of data from what it is given. By convention, the type of data produced by a function should be the last word in its name, although there are a few exceptions where it was awkward to adhere strictly to this convention.
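
Purely as an illustration of what such an automatic conversion might look like (the real rules are the documented ones, not this sketch), in Python:

def coerce(value, expected):
    # value is a float (scalar) or a 3-tuple (colour or vector);
    # expected is "scalar", "colour" or "vector".
    if expected == "scalar":
        if isinstance(value, (int, float)):
            return float(value)
        x, y, z = value
        return (x * x + y * y + z * z) ** 0.5            # assumed: magnitude of the 3 components
    if isinstance(value, (int, float)):
        return (float(value), float(value), float(value)) # assumed: replicate into 3 components
    return tuple(value)                                   # colour <-> vector: same 3 scalars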

Oshyan, do we still have that page where Jo and I documented the automatic type conversions done by functions?

We would like to keep the blue function nodes as a haven of transparent behaviour and avoid most of the "black box" issues and confusion surrounding the rest of the shaders, even though for many users they can seem overwhelming. We need to make sure the documentation is finished and online.

Many of the red shader nodes expect functions or "colour shaders". Any shader that modifies a surface's diffuse colour can be used as a colour shader, and will also convert automatically to a colour function when connected to a blue function node. Any function node will be converted to a colour shader when connected to a shader that requires a colour shader.

It tends to be the older shaders that are inconsistent about whether they label an input a "shader" or a "function". We should clean some of this up. Generally speaking, though, any shader could generate colour that might be useful as a function.

There are only a few shaders that generate displacement from a function (or colour shader), and if I recall correctly in every case these shaders have an input labeled "displacement function" which expects a scalar function. Colour data gets converted by taking the luminance, and vector data gets converted by taking the magnitude.

I know this only goes part way to answering your questions.

Matt
Just because milk is white doesn't mean that clouds are made of milk.

Oshyan

Yes, we still have the Type Conversion info. It was taken offline by mistake a while ago when I was transferring docs to other systems. It will be restored along with some other stuff soon. Apologies for the mistake.

- Oshyan

Dune

I vote for no change. Please, leave it as it is, no more visual distractions.

---Dune

PabloMack

I have read everyone's comments and I appreciate their input. They certainly reflect their authors' ways of thinking and workflows. Special thanks go to the PS staff for their insights and assistance.