I still don't get how any app can derive normals from a single photograph. It has to interpret color or brightness as height, and as soon as a feature is bright but physically low, it gets read as high. So I don't really trust normal-map or displacement-map generators (unless the maps come from a 3D model, of course) without checking the result carefully and reworking it where necessary.
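To illustrate the problem, here is a minimal sketch of how a typical photo-to-normal converter works under the hood (this is my own illustrative code, not any particular app's algorithm): it treats luminance as a height field and takes its gradients. A bright-but-flat feature therefore produces tilted normals at its edges, even though the real surface has no relief there.

```python
import numpy as np

def normals_from_luminance(image_rgb, strength=1.0):
    """Naive photo-to-normal conversion: treat brightness as height.

    image_rgb: float array (H, W, 3) in [0, 1].
    Returns a tangent-space normal map (H, W, 3) remapped to [0, 1].
    """
    # The core (flawed) assumption: luminance == height.
    # A bright but physically low feature becomes a "high" bump here.
    height = image_rgb @ np.array([0.2126, 0.7152, 0.0722])

    # Surface slope from finite differences of the fake height field.
    dy, dx = np.gradient(height)

    # Normal = normalize(-dx, -dy, 1), then remap from [-1, 1] to [0, 1].
    n = np.dstack([-dx * strength, -dy * strength, np.ones_like(height)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n * 0.5 + 0.5

# A white square painted on a dark floor: perfectly flat in reality,
# but the converter "sees" bumps wherever the brightness changes.
img = np.zeros((8, 8, 3))
img[2:6, 2:6] = 1.0
nm = normals_from_luminance(img)
```

In this example the interior of the white square comes out flat (0.5, 0.5, 1.0), but every edge of the square gets tilted normals, which is exactly the artifact you have to hunt down and repaint by hand.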