Non-CUDA Photogrammetry Software

Started by WAS, April 01, 2021, 01:44:06 PM


WAS

So I tried Zephyr and VisualSFM, and both are CUDA-based, so they won't use my AMD cards. Is there any photogrammetry software that isn't ridiculous and doesn't require Nvidia?

WAS

Also, apparently not many are actually free. For example, Zephyr Free has a 50-image limit. I don't know if that's really low, but it sounds like it is. I'd think you'd want at least 180 images -- half of a full 360° -- as a low end.

Kadri

#2

Have you seen these links, Jordan?
https://www.reddit.com/r/photogrammetry/comments/eox9nr/what_software_for_photogrammetry_can_be_used/

I bought an Nvidia card specifically because of these kinds of problems. I don't like that I had to do that.
I'm not a fan of Nvidia or AMD.
Nvidia cards are usually faster (the expensive ones especially).
But in the past I preferred AMD cards for their price-to-performance ratio
and their generally friendlier attitude toward open-source/free software.

I had a look at the preferences in 3DF Zephyr Free. You can disable CUDA, but it will be slower, of course.
You could try Agisoft in demo mode.

3DF Zephyr Free probably has a 500 MB file limit too (the setting is greyed out).
But judging by how small that file was (around 1 MB), that wouldn't be a bad limitation in itself.

WAS

Weird. Blender uses my AMD GPU fine.

And I wouldn't say Nvidia is faster. You pay for what you get, and at that rate AMD is far more effective in performance per dollar. Corporate hold is really the only issue.

Contracts around CUDA technology and specific certifications are the real issue. For example, if a big company just adds OpenCL and opens up to any GPU, why would Nvidia support them? Their workstation and workstation-GPU sales would be severely hit. Suddenly a 4 GB budget AMD card is running enterprise-level workloads that would otherwise be handled by CUDA and, of course, showcase their workstation GPUs. This is also probably one of the reasons Blender gets less exposure in the commercial world: they don't care about the politics of corporate rivalry.

Guess I'll try that first one mentioned, though it's weird that it was on no "free lists" when it's open source. Should be at the top lol

PabloMack

#4
I bought a license one time for the Agisoft photogrammetry package, and it was CPU-based. I wouldn't think that photogrammetry would benefit as much from a GPU as rendering does, since terrain is pretty stationary. You do it once, you have the point cloud and/or mesh result, and you're done with that step. However, if you need some AI to actively follow and/or ignore moving objects in the video, then I could see how a GPU could help a lot. Anyway, I did use the software somewhat successfully, and it wasn't very expensive. You can find them at https://www.agisoft.com/

I used this software to create a height map of a plateau in Waterberg Plateau Park in Namibia (I've actually been there) using screenshots I took from Google Earth. I then used the height map to create the mesa in this flyover scene done in Terragen. https://www.youtube.com/watch?v=VEY6dPfmwrk

WAS

It benefits a lot, actually. The speed differences are insane, because GPUs are built to handle point data, like geometry, and also handle rasterization (the image processing a program uses to extract geometry data) way faster than a CPU. That's why Photoshop is GPU-accelerated.

CPU architecture is way too broad and inefficient for any one of these tasks compared to hardware built for them. That's also why CPU-only rendering is pretty antiquated these days in general.

It's also why CPUs are becoming specialized, like Apple's new ARM chips in Macs: designed to handle specific things effectively, but not as general-purpose as an Intel or AMD chip, so tasks that need fast linear cores will suffer on those machines. Like with TG, where it only has a couple of fast cores.
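
To give a rough sense of the difference, here's a minimal sketch (made up for illustration, not code from any actual photogrammetry package): transforming a point cloud is one GPU thread per point, where a CPU would have to churn through the same array in a serial loop.

// Minimal CUDA sketch, illustration only: transform a point cloud with
// one GPU thread per point. A CPU would walk the same array serially.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void transformPoints(const float3 *in, float3 *out,
                                const float *m, int n)  // m: 4x4 row-major
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;  // guard threads past the end of the array
    float3 p = in[i];
    out[i].x = m[0] * p.x + m[1] * p.y + m[2]  * p.z + m[3];
    out[i].y = m[4] * p.x + m[5] * p.y + m[6]  * p.z + m[7];
    out[i].z = m[8] * p.x + m[9] * p.y + m[10] * p.z + m[11];
}

int main()
{
    const int n = 1 << 20;  // ~1 million points (buffers left uninitialized; sketch only)
    float3 *d_in, *d_out;
    float *d_m;
    const float m[16] = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};  // identity transform
    cudaMalloc(&d_in, n * sizeof(float3));
    cudaMalloc(&d_out, n * sizeof(float3));
    cudaMalloc(&d_m, 16 * sizeof(float));
    cudaMemcpy(d_m, m, 16 * sizeof(float), cudaMemcpyHostToDevice);
    transformPoints<<<(n + 255) / 256, 256>>>(d_in, d_out, d_m, n);  // thousands of threads at once
    cudaDeviceSynchronize();
    printf("transformed %d points\n", n);
    cudaFree(d_in); cudaFree(d_out); cudaFree(d_m);
    return 0;
}

Photogrammetry's heavy stages (feature matching, dense reconstruction) are the same shape of problem: millions of independent pixels or points, which is exactly what a GPU is built for.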

PabloMack

Come to think of it, crunching the pictures to point clouds and then to meshes could take quite a while. Since the software is analyzing a bunch of pictures, I could see how a GPU could speed it up dramatically.

mhall

Wow ... didn't realize this was several weeks old. Anyway, I'll leave my comment ...

Quote from: WAS on April 02, 2021, 02:01:01 PM
Weird. Blender uses my AMD GPU fine.
But it's been a big time pain in the a** for them to maintain.


They just announced a reworking of Cycles. It's 10 years old, and they're updating a lot of it from the ground up to position it for growth over the next 10 years.

Two months into prototyping, they're already seeing 7x performance increases in some cases (the most complex scenes are seeing great improvements; simple scenes, not so much).

However, one thing they are doing is pulling OpenCL support and support for Intel graphics hardware. OpenCL is considered a stagnant, slow-moving standard that is too difficult to work with.

https://code.blender.org/2021/04/cycles-x/

From the Deprecation section of the blog post:


Quote
OpenCL rendering kernels. The combination of the limited Cycles split kernel implementation, driver bugs, and stalled OpenCL standard has made maintenance too difficult. We can only make the kinds of bigger changes we are working on now by starting from a clean slate. We are working with AMD and Intel to get the new kernels working on their GPUs, possibly using different APIs (such as SYCL, HIP, Metal, ...). This will not necessarily be ready for the first release; the implementation needs to reach a higher quality bar than what is there now. Long term, supporting all major GPU hardware vendors remains an important goal, and we think that with this new architecture we'll be able to deliver better performance and stability. It is just a matter of time until more GPUs are supported in Cycles X again.


Interview/presentation with the two lead developers Brecht and Sergey:
https://www.youtube.com/watch?v=lv1MoTdQjL4

With that in mind, I can see why more companies never really got on board with it.

I'm not arguing about the superiority of any particular card manufacturer, simply pointing out that perhaps the reason there isn't broader support for OpenCL in applications is that the progress of the standard has been rough.

WAS

#8
That sucks. They're essentially committing suicide. I won't be using Blender anymore then. Half of the GPUs their users run are AMD; while AMD is slower on its own, it literally made up half their user base, and for controlled CPU rendering and hybrid rendering, AMD CPU/GPU combos were on top.

It's really OK though, because they lied about their 2.90 release: they didn't actually "overhaul" the GUI or make it less macro-dependent, so it's still a sh*t-show for anyone with dyslexia. For example, a two-key macro confirmation is needed to apply any modifier. They still can't put an Apply button on the modifier tab next to the selected modifier lol

Honestly, their mess with APIs is silly. They'll use OptiX, but then wonder how to support AMD? Lmfao 🤣 Use DirectX then, which wraps both OptiX and AMD's API calls for cross-compatible software and gaming. Heck, most "Nvidia" advertising for RTX is over because AMD now has ray-tracing support, and building for one GPU vendor is shady and biased. They're all using DirectX wrappers over AMD's API or OptiX...

mhall

Quote from: WAS on April 29, 2021, 12:15:37 PM
That sucks. They're essentially committing suicide. I won't be using Blender anymore then. Half of the GPUs their users run are AMD; while AMD is slower on its own, it literally made up half their user base, and for controlled CPU rendering and hybrid rendering, AMD CPU/GPU combos were on top.
Your other reasons notwithstanding, I think that might be going a bit far. :)


They're not planning on dropping support for AMD or Intel permanently - just from the initial release of the Cycles rework. They're being funded by AMD and Intel and work directly with more than one AMD engineer. They want to find a way to make it work, just not with the mega-kernel OpenCL model they've been forced to use so far.
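
For a rough idea of what the megakernel vs. split kernel distinction means, here's a purely illustrative sketch. I'm using CUDA-style syntax for readability and stub types/functions I made up; this is nothing from Cycles' actual source.

// Illustrative sketch only: hypothetical stub types/functions, not Cycles code.
#include <cuda_runtime.h>

struct Ray { float3 origin, dir; };
struct Hit { float t; float3 normal; };  // stub hit record

__device__ Hit    intersectScene(Ray r) { return Hit{1.0f, r.dir}; }  // stub traversal
__device__ float3 shade(Hit h)          { return h.normal; }          // stub shading

// Megakernel style: the entire path-tracing loop lives in one giant GPU
// function. Simple to launch, but huge kernels strain compilers and
// drivers, which is part of what made the OpenCL path painful.
__global__ void megakernel(Ray *rays, float3 *radiance, int n, int bounces)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    for (int b = 0; b < bounces; ++b) {
        Hit h = intersectScene(rays[i]);
        float3 c = shade(h);
        radiance[i].x += c.x; radiance[i].y += c.y; radiance[i].z += c.z;
    }
}

// Split-kernel style: each stage is a small kernel and the host drives the
// bounce loop. Easier on compilers, but path state has to be stored in
// memory between launches (a real renderer would also need a sampling
// kernel that generates the next bounce's rays).
__global__ void intersectKernel(const Ray *rays, Hit *hits, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) hits[i] = intersectScene(rays[i]);
}

__global__ void shadeKernel(const Hit *hits, float3 *radiance, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float3 c = shade(hits[i]);
        radiance[i].x += c.x; radiance[i].y += c.y; radiance[i].z += c.z;
    }
}

int main()
{
    const int n = 1024, bounces = 4;  // buffers left uninitialized; sketch only
    Ray *rays; Hit *hits; float3 *radiance;
    cudaMalloc(&rays, n * sizeof(Ray));
    cudaMalloc(&hits, n * sizeof(Hit));
    cudaMalloc(&radiance, n * sizeof(float3));
    cudaMemset(radiance, 0, n * sizeof(float3));

    megakernel<<<(n + 255) / 256, 256>>>(rays, radiance, n, bounces);  // one big launch

    for (int b = 0; b < bounces; ++b) {  // or: the host drives the loop instead
        intersectKernel<<<(n + 255) / 256, 256>>>(rays, hits, n);
        shadeKernel<<<(n + 255) / 256, 256>>>(hits, radiance, n);
    }
    cudaDeviceSynchronize();
    cudaFree(rays); cudaFree(hits); cudaFree(radiance);
    return 0;
}

Either way, keeping that machinery working across OpenCL's driver landscape is the maintenance burden the blog post says they're trying to escape.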

My point was simply that it really sounds like OpenCL is a PITA to work with, and that is likely why so many pieces of software (like this photogrammetry one you were discussing) have gone the CUDA-only route.

I have both AMD and Nvidia cards, so I'd love to see all software easily accelerated by either.

WAS

Quote from: mhall on April 30, 2021, 10:10:23 PM
My point was simply that it really sounds like OpenCL is a PITA to work with, and that is likely why so many pieces of software (like this photogrammetry one you were discussing) have gone the CUDA-only route.


I dunno, I think it's really just performance and them going "why bother". The market is heavily competitive. Like I mentioned, when ray tracing first released, Nvidia capitalized on it with "Nvidia only" RTX games/ads. Because of that technology and ad push, they took back the GPU market, just as tariffs hit, which have literally crippled the industry's supply and demand. And again, if you're trying to get good sales on your product, you may not want bad performance on your "recommended" hardware, especially when competing with other renderers.

Actually, something similar could be said of Terragen. A good number of support topics regarding crashes to desktop (CTD) are just hardware -- usually RAM (I've even proven Terragen can scale down to a cell phone, with Windows emulated without virtualization, so the CPU isn't really an issue at all).

If Terragen goes GPU, I could see it also going CUDA and not even bothering with OpenCL because of, again, significantly lower performance (not so much for gaming, but definitely seen in rendering), and, like you mentioned, it's probably harder to maintain -- really just a whole other aspect to maintain.

But being left out again, like it's the '90s and the GPU compatibility wars all over again, is pretty deplorable in 2021. We should have API standards, honestly.

Also, as for hardware usage, it is true from what I read last year that about half their users were on AMD GPUs, and a good portion of those on AMD CPU/GPU combos.

PabloMack

Quote from: WAS on May 01, 2021, 12:13:17 AM
We should have API standards, honestly.
You don't get standards on bleeding edge technology. The standards usually come later when a preponderance of developers understand what needs to be done through hindsight from looking at what has already been done. Otherwise, you end up with bad standards or standards that don't work.

WAS

Yeah, that's definitely true I guess, but I mean, we've kinda been all over the place ever since SGI.

mhall

Quote from: WAS on May 01, 2021, 12:13:17 AM
I dunno, I think it's really just performance and them going "why bother".
I know this is an old post, but since Blender 3.0 is about to be released (in December) and they managed (with major help in funding and programming from AMD) to get support for AMD cards into the initial release of 3.0, I figured you may want to know.


You seemed a bit heated about the idea that AMD support would take longer than Nvidia's, so I thought this might be welcome news.

https://code.blender.org/2021/11/next-level-support-for-amd-gpus/

This should bring AMD support back to Macs as well (Apple is likewise helping them support its new chip architecture with funding and programming).

WAS

I have the 3.0 beta and the 3.1 alpha, and GPU rendering is still disabled in both; only RDNA2 is supported, not RDNA1 or HD cards. I can't currently afford an RDNA2 card at 2.5x the price of my current card for the same level of performance.