It depends on how many instructions need to be processed per pixel.
Think about it like this: let's say you have a scene with a reflective ball in it, surrounded by trees and mountains. On the trees are some image-based textures, a simple reflective shader, and some translucency for added realism. On the ground and mountains is a 10-layer Surface Shader network for some decent texturing. You have a single light source, the sun, and you're also doing real Global Illumination.
Now, say we want to calculate a single pixel on that reflective ball:

1. First we figure out whether it's visible to the camera at all, which takes some calculations on its own.
2. Then we calculate the lighting and occlusion of *every surrounding object*, as well as the terrain, so we know whether the ball is in shadow.
3. Then we calculate the reflection, which means querying *every surrounding object and surface, including the terrain*, to see whether it is part of the reflection (so for the surfaces, 10 Surface Layers per test, plus the objects, at least 3 shader functions to test for each, multiplied by all the geometry of that object, and so on).
4. Then we calculate the GI for however many bounces of light there are. Each bounce interacts with the surface colors and lighting of all the other surfaces, so that's a huge number of calculations, and each additional bounce multiplies the work. When that's done, we add it to the lighting and surface result.

Only then can we maybe call this *one pixel* "finished". Thousands or even millions of calculations must be done per pixel, due to the complexity of the scene and of the phenomena we are trying to portray realistically.
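To give a feel for how this multiplies out, here's a rough back-of-the-envelope sketch in Python. All the counts (object count, triangles per object, GI sample count, the `per_pixel_cost` function itself) are made-up illustrative numbers, not anything Terragen actually uses internally; the point is only how the factors combine.

```python
# Toy cost model for shading ONE pixel in the scene described above.
# Every number here is an illustrative assumption, not a real renderer's count.

def per_pixel_cost(surfaces=2,                # terrain surfaces (ground, mountains)
                   surface_layers=10,         # layers in the Surface Shader network
                   objects=50,                # trees around the ball
                   shader_funcs_per_object=3, # shader functions tested per object
                   tris_per_object=200,       # geometry tested per object
                   gi_bounces=2,              # GI bounce depth
                   gi_samples=16):            # GI rays spawned per bounce
    visibility = 1  # camera-ray hit test for this pixel
    # Shadow test: occlusion query against every object and the terrain layers.
    shadowing = objects * tris_per_object + surfaces * surface_layers
    # Reflection: query every surface (all its layers) and every object
    # (each shader function, against each triangle).
    reflection = (surfaces * surface_layers
                  + objects * shader_funcs_per_object * tris_per_object)
    # GI: each bounce spawns gi_samples rays, and each of those repeats
    # the lighting/occlusion work, so deeper bounces multiply the cost.
    gi = sum(gi_samples ** bounce * (shadowing + surfaces * surface_layers)
             for bounce in range(1, gi_bounces + 1))
    return visibility + shadowing + reflection + gi

print(per_pixel_cost())               # well into the millions for this toy scene
print(per_pixel_cost(gi_bounces=1))   # one fewer bounce cuts the cost enormously
```

Even with these modest made-up numbers, the default settings land in the millions of operations for a single pixel, and dropping from two GI bounces to one cuts the total by more than a factor of ten, which is exactly the "it adds up a lot" effect described above.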
Does it make a bit more sense now?
- Oshyan