What's the hold-up with PTEX? In general, I mean.

Started by TheBadger, September 05, 2013, 01:34:24 AM


TheBadger

Hi,

Just curious if anyone has any ideas why PTEX has not become the industry norm over UV mapping. It's open source and everyone hates doing UVs, so what's the hold-up?

Are there some problems I have not heard about? Some reason UVs are better? Why is the industry responding so slowly (it feels) to improvement?

Thoughts, ideas?

:)
It has been eaten.

reck

Badger, for the ignorant (me) could you please explain what PTEX is? From what you've written it sounds like it's a replacement for UV mapping? I'm sure I've heard that Blender is going to incorporate it, but at the moment I don't really know what it is.

TheBadger

#2
QuotePtex is a texture mapping system developed by Walt Disney Animation Studios for production-quality rendering:
No UV assignment is required! Ptex applies a separate texture to each face of a subdivision or polygon mesh.
The Ptex file format can efficiently store hundreds of thousands of texture images in a single file.
The Ptex API provides cached file I/O and high-quality filtering - everything that is needed to easily add Ptex support to a production-quality renderer or texture authoring application.
http://ptex.us

http://ptex.us/overview.html

http://www.youtube.com/watch?v=etpX2BNnrxo
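
For the technically inclined: this is roughly what per-face sampling looks like with the open-source C++ API. A minimal sketch based on the docs at ptex.us; exact signatures may vary between versions.

// Minimal sketch: sample a Ptex texture per face, no UV layout needed.
// Based on the documented API at ptex.us; details may vary by version.
#include <Ptexture.h>
#include <cstdio>

int main()
{
    Ptex::String error;
    // One .ptx file holds a separate texture for every face of the mesh.
    PtexTexture* tx = PtexTexture::open("model.ptx", error);
    if (!tx) {
        std::fprintf(stderr, "Ptex open failed: %s\n", error.c_str());
        return 1;
    }

    // A filter provides the cached I/O and high-quality sampling
    // mentioned in the overview.
    PtexFilter::Options opts(PtexFilter::f_bilinear);
    PtexFilter* filter = PtexFilter::getFilter(tx, opts);

    float rgb[3];
    int faceid = 0;           // which face of the mesh to sample
    float u = 0.5f, v = 0.5f; // local coordinates within that one face
    filter->eval(rgb, /*firstchan=*/0, /*nchannels=*/3,
                 faceid, u, v, /*uw1=*/0, /*vw1=*/0, /*uw2=*/0, /*vw2=*/0);
    std::printf("face %d center: %f %f %f\n", faceid, rgb[0], rgb[1], rgb[2]);

    filter->release();
    tx->release();
    return 0;
}

Notice there is no UV unwrap anywhere: the lookup key is just (faceid, u, v).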


I just spent a day and a half UV mapping stuff. So I'm a little pissed >:( What a waste of time!
It has been eaten.

PabloMack

#3
I can understand why it is not more popular than it is. Remember, Badger, when I said I wondered what data structure(s) were used to record non-planar images? Well, this is a similar problem. UV-mapped images are easy to use because they can be viewed in image viewers, processed in Photoshop, and manipulated in every way that flat images are used. A PTEX collection of images won't make sense viewed any way but on the surface of a 3D model, and few applications know what to do with them.

While this method is very efficient, it is more compute-intensive, and I would venture to guess that it would be more difficult to implement on today's GPUs for shading objects in real time. I am sure the data structures and access methods are more complex than those used in the UV system. GPU shading is essential for real-time previsualization in our 3D applications, and they may not be able to handle PTEX, which maps a separate image to each polygon rather than mapping vertices into one rectangular image as most real-time shading does. It might require some improvements to OpenGL and DirectX to make them able to use these data structures.
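
To make that data-structure point concrete, here is a rough sketch of the two kinds of lookup. These are purely illustrative structures of my own, not any real library's API:

// Illustrative only: the shape of each lookup, not a real library's API.

// UV mapping: one flat image; each vertex carries coordinates into it.
struct UvTexture {
    int width, height;
    const unsigned char* pixels;  // a single rectangular image (RGB)
    // GPU-friendly: one texture bind, one interpolated (u,v) per fragment.
    const unsigned char* sample(float u, float v) const {
        int x = int(u * (width - 1));
        int y = int(v * (height - 1));
        return &pixels[(y * width + x) * 3];
    }
};

// PTEX-style: a separate small image per face, reached through a face index.
struct PerFaceTexture {
    struct Face { int width, height; const unsigned char* pixels; };
    const Face* faces;  // one entry per mesh face
    // Needs the face id as well as local (u,v) -- an extra indirection
    // that classic real-time texture units were not built around.
    const unsigned char* sample(int faceid, float u, float v) const {
        const Face& f = faces[faceid];
        int x = int(u * (f.width - 1));
        int y = int(v * (f.height - 1));
        return &f.pixels[(y * f.width + x) * 3];
    }
};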

On the other hand, even though there is most likely more overhead (indexing and such) in PTEX it probably doesn't have the wasted image space that is inherent in UV mapping. This wasted space is equivalent to the leftover material after the pattern pieces are cut out when sewing your own garment from a pattern kit. PTEX seems like it could support a uniform resolution over the whole surface of an object. With UV mapping, waste and uniformity of coverage are tradeoffs with arbitrary 3D geometry.
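
A back-of-the-envelope illustration of that waste argument (all numbers invented for the example):

// Invented numbers, just to illustrate the packing-waste comparison.
#include <cstdio>

int main()
{
    // UV atlas: a 4096x4096 image where, say, only 70% of the area is
    // actually covered by unwrapped polygons (a guessed figure, not measured).
    long long atlasTexels  = 4096LL * 4096LL;
    long long usedTexels   = (long long)(atlasTexels * 0.70);
    long long wastedTexels = atlasTexels - usedTexels;

    // PTEX: every face stores exactly the texels it needs, e.g. 1,000
    // quads at 128x128 each, with no empty gutters between islands.
    long long ptexTexels = 1000LL * 128 * 128;

    std::printf("atlas: %lld texels, %lld wasted (%.0f%%)\n",
                atlasTexels, wastedTexels,
                100.0 * wastedTexels / atlasTexels);
    std::printf("ptex:  %lld texels, none wasted\n", ptexTexels);
    return 0;
}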

TheBadger

#4
Hi Pablo

There are a lot of things in your post that are speculative. For the sake of conversation I would just point out a few things.

QuoteUV-mapped images are easy to use because they can be viewed in image viewers, processed in Photoshop, and manipulated in every way that flat images are used. A PTEX collection of images won't make sense viewed any way but on the surface of a 3D model, and few applications know what to do with them.

This is actually one of my complaints.
Why are you trying to view 3D in 2D software? It makes no sense to me. In every single way, painting and texturing a 3D object in a program like MARI or Mudbox (among others) is superior to Photoshop. I rarely use Photoshop anymore for 3D, except for doing work on a finished 2D render image (and for creating texture maps to paint with in a 3D app).

Quoteand few applications know what to do with them
Well, not as few as last year, but yes. Thus my OP.
If I stayed in an Autodesk workflow I would not have to worry, or if I used some of the other major 3D apps. The problem is that I have to use a number of programs, including TG, which, as you say, are not there yet.

QuoteI can understand why it is not more popular than it is.
It is extremely popular among those who are using it, and among those who use software that makes use of it. Just look online.

QuoteWhile this method is very efficient, it is more compute-intensive, and I would venture to guess that it would be more difficult to implement on today's GPUs for shading objects in real time. I am sure the data structures and access methods are more complex than those used in the UV system. GPU shading is essential for real-time previsualization in our 3D applications, and they may not be able to handle PTEX, which maps a separate image to each polygon rather than mapping vertices into one rectangular image as most real-time shading does. It might require some improvements to OpenGL and DirectX to make them able to use these data structures.

I don't know about all the stuff you brought up here. But I can tell you my desktop is from late 2009, and I haven't had any problem using PTEX in the few experiments I have done. I know that when using Mudbox the GPU is very important, and I recently upgraded. But that just goes back to my earlier point... if 3D work requires hardware/software created for 3D, why are people trying to do 3D work with hardware/software created for 2D work?

If at any point you were referring to game making, I can't comment on that. I don't know anything about making video games other than that everything must be optimized to have a smaller footprint, so for that PTEX may or may not be good. But for film, TV, and print, I think there is nothing better.

Whatever issues arise over system resources related to PTEX are made up for by the hours, days, and weeks saved by not having to UV map, in terms of labor. And that's not counting the work that must be done with UVs after they are created...

Recently you and I had a conversation about animation. That was about mocap. But if you know about animation then you know about rigging and weighting. Now I am speculating a little, but I propose that the process of rigging and weighting with PTEX is also superior to the same process with UVs.
It is something to look into.

Quotein PTEX it probably doesn't have the wasted image space that is inherent in UV mapping.

As far as I have seen there is no waste whatsoever. The UV tile is filled corner to corner with the geo quads. And the resolution (per object) is whatever you want it to be.
If you set a resolution of 4000x4000, it will be a heavier lift than 1000x1000 (sixteen times the texels), but so it is with UVs too. Except by now you have saved countless hours by not having had to make UVs.

QuotePTEX seems like it could support a uniform resolution over the whole surface of an object.

There is nothing "seeming" about it. There is no waste, no stretching, no constraining, no seams, no time wasted making UVs, nothing that comes with UVs.

Just think of the vector displacement maps for Terragen thread. A lot of the conversation there was about UV issues (mostly stretching). I am very curious about how a vector displacement would look in TG if it was PTEX-based and not UV... I think it would be quite extraordinary. I think it would look perfect.
But now Im just speculating again too... It is fun though :D
It has been eaten.

PabloMack

#5
Badger, I didn't know anything about PTEX before this thread so I thank you for helping me to educate myself on this. I have a computer science background (and career for that matter) so some of it may mean more to me than to you. On the other hand, you probably have more hands-on experience with PTEX-enabled tools (I don't have any).

I am sure that some of what I wrote is speculative. But I was once looking into using the GPU myself for programming so I researched it. I saw a lot of people complaining that the hardware and library architecture were so rigidly constructed for 3D graphics (and for a finite number of techniques) that a lot of people saw limited potential for its general-purpose use. That was the motivation behind developing GPGPU and that is what OpenCL is about (I have been developing a parallel programming language that is much easier to use than OpenCL or CUDA). The GPU architecture's rigidity doesn't just act as an impediment to using the system for non-graphical use but also for novel graphical use. The Disney site does state that the PTEX API is written in C++ but that doesn't say much about graphical hardware, the support for which is hidden inside the libraries.

You mentioned "making games". Most of the 3D graphical software you use almost certainly uses the GPU's real-time texturing when you rotate your models to do modeling, painting, rigging, etc., in the same way that images are rendered in games in real time. In effect, your 3D modeling and animation software is a 3D game in this sense; the process would be far too slow without the GPU if you want your previsualization renderer to give you a reasonable approximation of what you expect to see in the final render.

Also, the waste I am talking about is in internal algorithms and associated data structures, and wouldn't be directly visible to the end user of the software. Mapping a series of rectangular images to a series of arbitrary polygons certainly has some waste involved. According to what I have read, the images in PTEX are indeed rectangular, so they only fit neatly onto quads (with some stretching to fit them). There is a special case for mapping rectangular images onto triangles. It appears that polys with more than four vertices may not be supported, or are handled with some waste involved (I don't know).
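
For what it's worth, the published writer interface makes the rectangular, power-of-two per-face storage explicit. A hedged sketch of writing a single quad face, with names taken from the ptex.us docs (details may vary by version):

// Sketch: write one per-face rectangular image with the Ptex API.
// Follows the documented interface at ptex.us; details may vary by version.
#include <Ptexture.h>
#include <vector>

bool writeOneQuad(const char* path)
{
    Ptex::String error;
    PtexWriter* w = PtexWriter::open(path, Ptex::mt_quad, Ptex::dt_uint8,
                                     /*nchannels=*/3, /*alphachan=*/-1,
                                     /*nfaces=*/1, error);
    if (!w) return false;

    // Each face gets its own power-of-two resolution: Res(ulog2, vlog2),
    // so Res(7, 7) means a 128x128 image for this face alone.
    Ptex::Res res(7, 7);
    int adjfaces[4] = {-1, -1, -1, -1};  // no neighbors in this toy example
    int adjedges[4] = {0, 0, 0, 0};
    Ptex::FaceInfo info(res, adjfaces, adjedges);

    std::vector<unsigned char> texels(res.u() * res.v() * 3, 128); // flat gray
    w->writeFace(0, info, texels.data());

    bool ok = w->close(error);
    w->release();
    return ok;
}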

As for the stretching in UV maps, the amount of stretching depends on how you do the UV map. If you were to break all of the polys apart and lay them on the image like a mosaic without altering their proportions (assuming they are all co-planar), then you would have no stretching at all with UV mapping. But then, looking directly at the flat image, you wouldn't be able to make any sense of it, as it would seem to be all torn apart (because it is!). It would only make sense if you looked at your 3D model. But then that's what you have with PTEX, is it not?

And you do have some stretching with PTEX, but it is not as bad as with many UV maps because the raster starts over and is re-oriented for every polygon. With non-rectangular quads, stretching has to happen with PTEX, but since you don't look at the flat image or work on it, you don't notice that stretching is going on. If I paint my UV-mapped model in Modo, the image is stretched as I paint it and stretched in the same way when I render; the same thing happens with PTEX. But when I paint the unstretched flat UV-mapped image in Photoshop and then see it surfaced onto my 3D model, I notice the stretching because I am looking at both the before and the after. With PTEX you never see the flat unstretched images, so you don't notice it. Said another way, the texels (as they are called in the PTEX literature) are not uniform when the polys they map to are not rectangular (triangles are a special case).
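
The non-uniform texel point can be seen with a few lines of math: map the unit (u,v) square onto a skewed quad with a bilinear blend of the four corners, and a one-texel step covers different distances at different corners. An illustrative sketch, not library code:

// Illustrative: bilinear map of the unit (u,v) square onto a skewed quad.
// Texel footprints near different corners differ in size -> stretching.
#include <cstdio>
#include <cmath>

struct Vec2 { float x, y; };

// Bilinear blend of the four quad corners p00, p10, p11, p01.
Vec2 bilerp(Vec2 p00, Vec2 p10, Vec2 p11, Vec2 p01, float u, float v)
{
    float x = (1-u)*(1-v)*p00.x + u*(1-v)*p10.x + u*v*p11.x + (1-u)*v*p01.x;
    float y = (1-u)*(1-v)*p00.y + u*(1-v)*p10.y + u*v*p11.y + (1-u)*v*p01.y;
    return {x, y};
}

int main()
{
    // A non-rectangular quad: one corner pulled far away from the others.
    Vec2 p00{0,0}, p10{1,0}, p11{3,2}, p01{0,1};
    float du = 0.01f;  // one texel step in u on a 100-texel-wide face
    for (float v : {0.0f, 1.0f}) {
        Vec2 a = bilerp(p00, p10, p11, p01, 0.0f, v);
        Vec2 b = bilerp(p00, p10, p11, p01, du,   v);
        std::printf("texel step at v=%.0f spans %f surface units\n",
                    v, std::hypot(b.x - a.x, b.y - a.y));
    }
    return 0;
}

On this quad a texel step near the stretched corner spans roughly three times the surface distance of one near the square corner, which is exactly the non-uniformity described above.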

Years ago I actually thought of a mapping technique where the images are not rectangular but have the shapes of the polygons themselves. The waste involved here would be at most a fraction of a pixel on each scan line. Three of the most spread-out vertices would define the plane onto which the projection would be done. I have no idea whether it could be done using OpenGL or not (probably not); the libraries are almost certainly hard-coded to store and manipulate rectangular images. But the images could be unpacked after being read from the file, and temporary rectangular images could be created from the polygon-shaped images in the file. The packing and unpacking in C/C++ code might be a bottleneck; here GPGPU might be able to speed it up.
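
A sketch of the data layout I have in mind, purely hypothetical and not an existing format: each scan line stores only the span that falls inside the polygon, so the waste is under one pixel per row.

// Hypothetical polygon-shaped image: one packed span per scan line.
#include <vector>

struct PolyImage {
    struct Row {
        int xStart;                      // first covered pixel in this row
        std::vector<unsigned char> rgb;  // packed texels, 3 bytes each
    };
    int yStart;                          // first covered scan line
    std::vector<Row> rows;

    // Look up a texel by absolute (x, y); null outside the polygon's shape.
    const unsigned char* at(int x, int y) const {
        int r = y - yStart;
        if (r < 0 || r >= (int)rows.size()) return nullptr;
        int c = x - rows[r].xStart;
        if (c < 0 || c * 3 >= (int)rows[r].rgb.size()) return nullptr;
        return &rows[r].rgb[c * 3];
    }
};

Unpacking such rows into a temporary rectangular image for the GPU would be a simple row-by-row copy, which is the packing/unpacking cost I mentioned.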

I know someone at NASA who does many of their 3D animations. He doesn't use any of the 3D packages you and I use; instead he writes all his code in C/C++ and makes calls to OpenGL directly. Learning all of the math and other things he has to know is a life-long endeavor for him. It is far easier to think up ideas than to implement them, as Thomas Edison is known to have said: "Genius is one percent inspiration and ninety-nine percent perspiration." I have written a lot of code in my life, but getting into the kinds of things David does is just too much. Writing the compiler I am writing may even be too much for me. But the compiler truly has much more potential because it has much wider application.