How does the GPU handle pixels outside of the camera view?

Hi, please imagine the scenario in the image below. The red rectangle represents the camera view. The blue rectangle is a simple mesh with four vertices. The plane is bigger than the view. My question is: does the GPU process pixels that are outside the view? Does the GPU call a fragment shader for these pixels or not?

Would it be better to split the big mesh into several smaller parts instead of having one big mesh with just four vertices? Would that help the GPU recognize the pixels outside of the view?


Performance would be a tiny bit better in the first case (one big quad), since it has fewer vertices to process.
The GPU clips primitives to the view frustum before rasterization, so only fragments that land on screen ever invoke the fragment shader; the pixels outside the view are culled essentially for free.
So don't worry about optimizing pixels that are off-screen.
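To make the idea concrete, here is a toy software sketch (not how real GPU hardware works, just an illustration of the principle): the quad's footprint is clamped to the viewport before any per-pixel work happens, so off-screen pixels never generate fragment-shader invocations at all. The screen size and quad coordinates are made-up values for the example.

```python
# Toy illustration of viewport clipping: pixels outside the view
# never produce fragment-shader invocations. Real GPUs clip each
# primitive in clip space before rasterization; this just mimics
# the effect with an axis-aligned quad.

SCREEN_W, SCREEN_H = 100, 100  # the "camera view" (red rectangle)

def count_fragments(x0, y0, x1, y1):
    """Count fragment-shader invocations for an axis-aligned quad.

    The quad's bounds are clamped to the viewport first, so pixels
    outside the view are rejected wholesale, not per-pixel.
    """
    # Clamp the quad's bounding box to the viewport.
    cx0, cy0 = max(x0, 0), max(y0, 0)
    cx1, cy1 = min(x1, SCREEN_W), min(y1, SCREEN_H)
    if cx0 >= cx1 or cy0 >= cy1:
        return 0  # fully off-screen: no fragments at all

    invocations = 0
    for y in range(cy0, cy1):
        for x in range(cx0, cx1):
            invocations += 1  # fragment shader would run here
    return invocations

# A 200x200 quad (blue rectangle) overlapping a 100x100 view:
# only the visible 100x100 region is shaded.
print(count_fragments(-50, -50, 150, 150))  # -> 10000, not 40000
```

The point is that the rejection happens once per primitive (clipping the bounds), not once per off-screen pixel, which is why the culling is effectively free.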


Thanks for explaining. I wasn't clear on this for a long time.