One might assume a CRT effect would naturally be applied to the whole screen (if you treat one "view" as one TV or monitor "screen"), so my guess is the shader just uses the application surface for convenience. It captures the whole image, and fills the whole image, by default, unless you really take control of the graphics side. CRT effects are most likely "post-processing", since they want to happen after everything has been drawn to the screen (bar UI elements, which wouldn't want the effect, so are drawn last to sit "on top" of it).
Using the application surface makes sense: it is the same size as the view, and everything gets drawn to it at various stages throughout the graphics pipeline. Given that the draw process has defined steps, this is admittedly an assumption, but one that appears to fit quite neatly.
However: it sounds like you might want a view within a view, a "Five Nights at Freddy's" kind of set-up where you see a screen in each corner, and some have CRT effects whilst others are clearer pictures, or have different timing for distortions, etc.
I think it would be something like this:
1) create a surface that's 600 x 400 in size
2) draw the area you want to show filtered onto that surface
3) send that surface to the shader
4) draw the results of the shader to the application surface, or the view, at the position you want
5) repeat for however many "screens" you want to show: if you intend for each to have a different effect, then each has to be sent to the shader with altered parameters (level of noise for distortions, graininess, lighting, saturation, whatever)
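The steps above might look something like this in GML - a rough sketch, where `spr_camera_feed`, `sh_crt` and the `u_noise` uniform are all hypothetical names standing in for your own assets:

```gml
/// Create event: no surface yet (surfaces can be lost, so check each draw)
screen_surf = -1;

/// Draw event:
// 1) create a 600 x 400 surface if needed
if (!surface_exists(screen_surf))
{
    screen_surf = surface_create(600, 400);
}

// 2) draw the content you want filtered onto that surface
surface_set_target(screen_surf);
draw_clear(c_black);
draw_sprite(spr_camera_feed, 0, 0, 0); // hypothetical sprite
surface_reset_target();

// 3 & 4) run the surface through the shader and draw the result where you want it
shader_set(sh_crt); // hypothetical shader
shader_set_uniform_f(shader_get_uniform(sh_crt, "u_noise"), 0.3); // hypothetical uniform
draw_surface(screen_surf, 32, 32);
shader_reset();

// 5) repeat with other surfaces / uniform values for the other "screens"
```

Each extra "screen" is just another surface run through `shader_set()`/`shader_reset()` with its own uniform values.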
It's not that much different from what you have - you just have to figure out how to change the source, and content, of what is being passed to the shader.
If it does indeed use the application surface, then the drawing aspects (points 1 & 2) will be missing, since the effect most likely happens at a draw stage where GMS has already automatically composited the various visuals into a combined image (the application surface), ready for a post-processing step.
No code will exist for creating this surface (that's always handled by the engine), and it's also possible that there's no code involved for drawing to it (by default the engine handles that too, though the user can take it over), as you'd simply send it to the shader during the correct draw step.
Point 4 would also be redundant, since
(a) the application surface covers the whole screen, as does the intended effect in the original example
(b) the surface being sent to the shader is... the application surface. Once it comes back with the shader's transformations applied, it's automatically drawn afterwards.
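For that original full-screen case, the usual GML pattern is to take over drawing of the application surface and filter it in the Post Draw event, once everything has been composited - again just a sketch, with `sh_crt` as a hypothetical shader name:

```gml
/// Create event: stop GMS drawing the application surface automatically
application_surface_draw_enable(false);

/// Post Draw event: the application surface is complete at this point,
/// so this is the "correct draw step" to send it through the shader
shader_set(sh_crt); // hypothetical shader
draw_surface(application_surface, 0, 0);
shader_reset();
```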
Hopefully that helps you figure it out