Matt should correct me if I'm wrong, as it's a tough subject, but I visualize the difference as follows:
Standard renderer = textures are evaluated and rendered on a per-micropolygon basis = 1 shading point per micropolygon.
The higher the micropolydetail, the more faces, hence the more shading points, and thus the smoother the result.
This is why, without deferred rendering, you need to render your scenes with micropolydetail > 1, even up to 2.
The AA setting then only tries to reduce noise between those shading points.
Deferred renderer = textures are calculated after the micropolygons are generated, and then the textures are sampled using the AA setting.
With AA8 (full) you can now have up to 64 shading points per micropolygon (8 × 8 = 64 samples) instead of 1. So you create a lot more texture info per micropolygon, and it is anti-aliased as well.
This is why, with deferred rendering, low micropolydetail values suffice: they generate enough geometry for surface detail, while the deferred pass, driven by the AA setting, achieves the same result as cranking micropolydetail to insane levels in the standard renderer.
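To make the arithmetic above concrete, here is a tiny sketch of the mental model (plain Python with made-up names, not Terragen's actual internals): the standard path shades once per micropolygon, while the deferred path shades once per AA sample, so an AA setting of n yields up to n × n shading points per micropolygon.

```python
def shading_points_per_micropolygon(deferred: bool, aa: int) -> int:
    """Rough mental model only, not the renderer's real implementation.

    Standard renderer: textures are evaluated once per micropolygon,
    regardless of the AA setting (AA only blends between those points).
    Deferred renderer: textures are sampled per AA sample, so an AA
    setting of n gives up to n * n shading points per micropolygon.
    """
    if deferred:
        return aa * aa  # e.g. AA8 (full) -> 8 * 8 = 64 shading points
    return 1            # always one shading point per micropolygon

# Standard renderer: always 1, no matter the AA setting.
print(shading_points_per_micropolygon(deferred=False, aa=8))  # -> 1

# Deferred renderer with AA8 (full): up to 64 shading points.
print(shading_points_per_micropolygon(deferred=True, aa=8))   # -> 64
```

Under this model you can see why raising AA in deferred mode substitutes for raising micropolydetail in standard mode: both multiply the number of shading points, just at different stages.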
The consequence is that any shader you designed for the standard renderer looks flat under deferred rendering, simply because the standard renderer's "texture filtering" is inferior.
Your sand looks flat now because that's actually how your texture looks when rendered properly. It looked grainy before partly because of the renderer, not so much because of your shader.
The way I have explained this may be wrong, but I do know for sure it fits my observations and that this logic helps me make better choices for render settings.
I remember a topic by Mick Hazelgrove (mhaze) about high micropolydetail, where Matt and others shed some light on this. I'll see if I can find it, but you may beat me to it since you practically live here.