Tuesday, March 23, 2021

What is a Projection Matrix?

The projection matrix describes the mapping from 3D points of a scene to 2D points of the viewport. It transforms from eye space to clip space, and the clip space coordinates are transformed to normalized device coordinates (NDC) by dividing by the w component of the clip coordinates. The NDC are in the range (-1, -1, -1) to (1, 1, 1).


(Figure: camera frustum)


In the perspective projection the relation between the depth value and the z distance to the camera is not linear.

A perspective projection matrix looks like this:


r = right, l = left, b = bottom, t = top, n = near, f = far


2*n/(r-l)      0              0               0

0              2*n/(t-b)      0               0

(r+l)/(r-l)    (t+b)/(t-b)    -(f+n)/(f-n)    -1    

0              0              -2*f*n/(f-n)    0
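As a quick numeric check, the matrix above can be built and applied. A minimal sketch in Python/NumPy (the numbers n = 1, f = 10 and the unit frustum are assumed values for illustration); the rows are taken exactly as printed, i.e. the OpenGL column-major memory layout, so a view-space point projects as a row vector:

```python
import numpy as np

def perspective(l, r, b, t, n, f):
    # Rows exactly as printed above (OpenGL column-major memory order);
    # with this layout a view-space point projects as clip = v @ m.
    return np.array([
        [2*n/(r-l),   0.0,         0.0,          0.0],
        [0.0,         2*n/(t-b),   0.0,          0.0],
        [(r+l)/(r-l), (t+b)/(t-b), -(f+n)/(f-n), -1.0],
        [0.0,         0.0,         -2*f*n/(f-n),  0.0],
    ])

m = perspective(-1.0, 1.0, -1.0, 1.0, 1.0, 10.0)
near = np.array([0.0, 0.0,  -1.0, 1.0]) @ m   # point on the near plane
far  = np.array([0.0, 0.0, -10.0, 1.0]) @ m   # point on the far plane
ndc_near = near[:3] / near[3]                 # perspective divide
ndc_far  = far[:3]  / far[3]
print(ndc_near[2], ndc_far[2])                # -1.0 on near, 1.0 on far plane
```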



From this follows the relation between the z coordinate in view space, the z component of the normalized device coordinates, and the depth:


z_ndc = ( -z_eye * (f+n)/(f-n) - 2*f*n/(f-n) ) / -z_eye

depth = (z_ndc + 1.0) / 2.0
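A numeric illustration of the non-linearity mentioned above (Python; n = 1 and f = 10 are assumed values): a point halfway through the frustum already ends up at depth 10/11 ≈ 0.909, not 0.5:

```python
n, f = 1.0, 10.0   # assumed near and far plane distances

def depth_from_z_eye(z_eye):
    # z_eye is negative: view space looks along the negative z axis
    z_ndc = (-z_eye * (f + n) / (f - n) - 2.0 * f * n / (f - n)) / -z_eye
    return (z_ndc + 1.0) / 2.0

d_near = depth_from_z_eye(-1.0)    # 0.0 at the near plane
d_mid  = depth_from_z_eye(-5.5)    # 10/11, halfway through the frustum
d_far  = depth_from_z_eye(-10.0)   # 1.0 at the far plane
```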



The reverse operation looks like this:


n = near, f = far


z_ndc = 2.0 * depth - 1.0;

z_eye = 2.0 * n * f / (f + n - z_ndc * (f - n));
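Note that this formula yields the positive distance to the camera (i.e. -z_eye, since view space looks along -z). A small round-trip check in Python, again with assumed n = 1, f = 10:

```python
n, f = 1.0, 10.0   # assumed near and far plane distances

def z_eye_from_depth(depth):
    z_ndc = 2.0 * depth - 1.0
    # yields the positive distance to the camera, i.e. -z_eye
    return 2.0 * n * f / (f + n - z_ndc * (f - n))

def depth_from_dist(dist):
    # forward mapping from the section above, written for a positive distance
    z_ndc = (dist * (f + n) / (f - n) - 2.0 * f * n / (f - n)) / dist
    return (z_ndc + 1.0) / 2.0

dist = 7.3
roundtrip = z_eye_from_depth(depth_from_dist(dist))   # recovers 7.3
```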


If the perspective projection matrix is known, this can be done as follows:


A = prj_mat[2][2]

B = prj_mat[3][2]

z_eye = B / (A + z_ndc)
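The indices here follow the GLSL column-major convention, so prj_mat[2][2] = -(f+n)/(f-n) and prj_mat[3][2] = -2*f*n/(f-n). A sketch in Python (assumed n = 1, f = 10) confirming that this matches the explicit formula, again as a positive distance:

```python
n, f = 1.0, 10.0                 # assumed near and far plane distances
A = -(f + n) / (f - n)           # prj_mat[2][2] (GLSL column-major indexing)
B = -2.0 * f * n / (f - n)       # prj_mat[3][2]

def z_eye_from_ndc(z_ndc):
    return B / (A + z_ndc)       # positive distance, as in the formula above

d_near = z_eye_from_ndc(-1.0)    # distance of the near plane
d_far  = z_eye_from_ndc( 1.0)    # distance of the far plane
```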



The relation between the projected size in view space and the z coordinate of the view space is linear. It depends on the field of view angle and the aspect ratio.


The normalized device size can be transformed to a size in view space like this:

aspect = w / h

tanFov = tan( fov_y * 0.5 );


size_x = ndc_size_x * (tanFov * aspect) * z_eye;

size_y = ndc_size_y * tanFov * z_eye;
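For example (Python; the 90° field of view and 16:9 aspect are assumed values): the full NDC range has size 2, so unprojecting it gives the total width and height of the frustum slice at distance z_eye:

```python
import math

fov_y  = math.radians(90.0)      # assumed vertical field of view
aspect = 16.0 / 9.0              # assumed viewport aspect ratio w / h
tanFov = math.tan(fov_y * 0.5)   # tan(45 deg) = 1.0

z_eye  = 10.0                    # distance in front of the camera
size_x = 2.0 * (tanFov * aspect) * z_eye   # full NDC width (2) unprojected
size_y = 2.0 * tanFov * z_eye              # 20 units high at 90 deg, 10 away
```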



If the perspective projection matrix is known and the projection is symmetric (the line of sight is in the center of the viewport and the field of view is not displaced), this can be done as follows:


size_x = ndc_size_x * z_eye / prj_mat[0][0];

size_y = ndc_size_y * z_eye / prj_mat[1][1];
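For a symmetric frustum, prj_mat[0][0] = 1/(tanFov * aspect) and prj_mat[1][1] = 1/tanFov, so both variants agree. A quick Python check with the same assumed fov and aspect values as above:

```python
import math

fov_y, aspect = math.radians(90.0), 16.0 / 9.0   # assumed fov and aspect
m00 = 1.0 / (math.tan(fov_y * 0.5) * aspect)     # prj_mat[0][0], symmetric case
m11 = 1.0 / math.tan(fov_y * 0.5)                # prj_mat[1][1]

z_eye  = 10.0
size_x = 2.0 * z_eye / m00       # same result as the tanFov based formula
size_y = 2.0 * z_eye / m11
```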



Note that each position in normalized device coordinates can be transformed to view space coordinates by the inverse projection matrix:


mat4 inversePrjMat = inverse( prjMat );

vec4 viewPosH      = inversePrjMat * vec4( ndc_x, ndc_y, 2.0 * depth - 1.0, 1.0 );

vec3 viewPos       = viewPosH.xyz / viewPosH.w;
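This unprojection can be checked numerically. A Python/NumPy sketch (n = 1, f = 10 and a unit frustum are assumed values; matrix rows as printed earlier, so points multiply as row vectors). The homogeneous scale cancels in the final divide, so multiplying the NDC by the inverse matrix and dividing by w recovers the view-space position:

```python
import numpy as np

n, f = 1.0, 10.0                          # assumed near and far plane
l, r, b, t = -1.0, 1.0, -1.0, 1.0         # assumed unit frustum
m = np.array([                            # rows as printed in the matrix above
    [2*n/(r-l),   0.0,         0.0,          0.0],
    [0.0,         2*n/(t-b),   0.0,          0.0],
    [(r+l)/(r-l), (t+b)/(t-b), -(f+n)/(f-n), -1.0],
    [0.0,         0.0,         -2*f*n/(f-n),  0.0],
])

view = np.array([0.3, -0.2, -5.0, 1.0])   # some view-space point
clip = view @ m
ndc  = clip / clip[3]                     # perspective divide

back_h = ndc @ np.linalg.inv(m)           # unproject with the inverse matrix
back   = back_h[:3] / back_h[3]           # divide by w: recovers view.xyz
```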



This means the unprojected rectangle at a specific depth can be calculated like this:


vec4 viewLowerLeftH  = inversePrjMat * vec4( -1.0, -1.0, 2.0 * depth - 1.0, 1.0 );

vec4 viewUpperRightH = inversePrjMat * vec4(  1.0,  1.0, 2.0 * depth - 1.0, 1.0 );

vec3 viewLowerLeft   = viewLowerLeftH.xyz / viewLowerLeftH.w;

vec3 viewUpperRight  = viewUpperRightH.xyz / viewUpperRightH.w;
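Numerically (Python/NumPy; a symmetric frustum with n = 1, f = 10 and a 90° field of view are assumed values), the rectangle width obtained this way agrees with the tanFov based size formula from above:

```python
import math
import numpy as np

n, f = 1.0, 10.0                          # assumed near and far plane
fov_y, aspect = math.radians(90.0), 1.0   # t = r = n * tan(45 deg) = 1
m = np.array([                            # rows as printed above
    [1.0, 0.0,  0.0,          0.0],
    [0.0, 1.0,  0.0,          0.0],
    [0.0, 0.0, -(f+n)/(f-n), -1.0],
    [0.0, 0.0, -2*f*n/(f-n),  0.0],
])
inv = np.linalg.inv(m)

depth = 0.9
z_ndc = 2.0 * depth - 1.0
ll_h = np.array([-1.0, -1.0, z_ndc, 1.0]) @ inv   # lower left corner
ur_h = np.array([ 1.0,  1.0, z_ndc, 1.0]) @ inv   # upper right corner
ll = ll_h[:3] / ll_h[3]
ur = ur_h[:3] / ur_h[3]

width = ur[0] - ll[0]
# equals 2 * tan(fov_y/2) * aspect * distance, with distance = -ll[2]
```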



References:

https://stackoverflow.com/questions/46578529/how-to-compute-the-size-of-the-rectangle-that-is-visible-to-the-camera-at-a-give
