Tuesday, March 23, 2021

What is a Transformation Matrix


Homogeneous coordinates

Until now, we only considered 3D vertices as an (x,y,z) triplet. Let's introduce w. We will now have (x,y,z,w) vectors.

This will become clearer soon, but for now, just remember this:

  • If w == 1, then the vector (x,y,z,1) is a position in space.
  • If w == 0, then the vector (x,y,z,0) is a direction.
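
To see why w matters, here is a minimal Three.js sketch (an illustration, assuming THREE is imported): translating a position (w == 1) moves it, while translating a direction (w == 0) leaves it unchanged.

const t = new THREE.Matrix4().makeTranslation(10, 0, 0);
const position  = new THREE.Vector4(1, 2, 3, 1).applyMatrix4(t); // (11, 2, 3, 1): moved
const direction = new THREE.Vector4(1, 2, 3, 0).applyMatrix4(t); // (1, 2, 3, 0): unchanged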



An introduction to matrices


Simply put, a matrix is an array of numbers with a predefined number of rows and columns.


In 3D graphics we will mostly use 4x4 matrices. They allow us to transform our (x,y,z,w) vertices. This is done by multiplying the vertex by the matrix:


Matrix x Vertex (in this order!) = TransformedVertex
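
As a minimal illustration (not from the original tutorial), this is what the multiplication does in plain JavaScript, assuming a row-major 4x4 matrix:

// Multiply a row-major 4x4 matrix by an (x, y, z, w) vertex.
function transform(mat, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += mat[row][col] * v[col];
    }
  }
  return out;
}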


Translation matrices


These are the simplest transformation matrices to understand. A translation matrix looks like this:


1, 0, 0, X
0, 1, 0, Y
0, 0, 1, Z
0, 0, 0, 1


where X,Y,Z are the values that you want to add to your position.


So if we want to translate the vector (10,10,10,1) by 10 units in the X direction, here is the computation:


| 1 0 0 10 |   | 10 |   | 1*10 + 0*10 + 0*10 + 10*1 |   | 20 |
| 0 1 0  0 | x | 10 | = | 0*10 + 1*10 + 0*10 +  0*1 | = | 10 |
| 0 0 1  0 |   | 10 |   | 0*10 + 0*10 + 1*10 +  0*1 |   | 10 |
| 0 0 0  1 |   |  1 |   | 0*10 + 0*10 + 0*10 +  1*1 |   |  1 |
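
The same translation in Three.js (a minimal sketch, assuming THREE is imported; makeTranslation and applyMatrix4 are part of the Three.js API):

// Translate the position (10, 10, 10) by 10 units along X:
const m = new THREE.Matrix4().makeTranslation(10, 0, 0);
const v = new THREE.Vector4(10, 10, 10, 1); // w == 1, so this is a position
v.applyMatrix4(m);                          // v is now (20, 10, 10, 1)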



The Identity matrix

This one is special. It doesn't do anything: multiplying a vertex by it leaves the vertex unchanged. It has 1s on the diagonal and 0s everywhere else:

1, 0, 0, 0
0, 1, 0, 0
0, 0, 1, 0
0, 0, 0, 1


Scaling matrices

So if you want to scale a vector (position or direction, it doesn’t matter) by 2.0 in all directions :


| 2 0 0 0 |   | X |   | 2*X + 0*Y + 0*Z + 0*W |   | 2*X |
| 0 2 0 0 | x | Y | = | 0*X + 2*Y + 0*Z + 0*W | = | 2*Y |
| 0 0 2 0 |   | Z |   | 0*X + 0*Y + 2*Z + 0*W |   | 2*Z |
| 0 0 0 1 |   | W |   | 0*X + 0*Y + 0*Z + 1*W |   |  W  |
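
The same scaling in Three.js (a minimal sketch, assuming THREE is imported):

// Scale by 2.0 in all directions:
const s = new THREE.Matrix4().makeScale(2, 2, 2);
const p = new THREE.Vector4(1, 2, 3, 1); // a position
p.applyMatrix4(s);                       // p is now (2, 4, 6, 1)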




References

http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/




What is a Projection Matrix

The projection matrix describes the mapping from the 3D points of a scene to the 2D points of the viewport. It transforms from eye space to clip space, and the coordinates in clip space are transformed to normalized device coordinates (NDC) by dividing by the w component of the clip coordinates. The NDC are in the range (-1,-1,-1) to (1,1,1).


[Figure: camera frustum]


In a perspective projection, the relation between the depth value and the z distance to the camera is not linear.

A perspective projection matrix looks like this:


r = right, l = left, b = bottom, t = top, n = near, f = far


2*n/(r-l)      0              0               0

0              2*n/(t-b)      0               0

(r+l)/(r-l)    (t+b)/(t-b)    -(f+n)/(f-n)    -1    

0              0              -2*f*n/(f-n)    0



From this follows the relation between the z coordinate in view space, the normalized device z component, and the depth:


z_ndc = ( -z_eye * (f+n)/(f-n) - 2*f*n/(f-n) ) / -z_eye

depth = (z_ndc + 1.0) / 2.0



The reverse operation looks like this:


n = near, f = far


z_ndc = 2.0 * depth - 1.0;

z_eye = 2.0 * n * f / (f + n - z_ndc * (f - n));


If the perspective projection matrix is known this can be done as follows:


A = prj_mat[2][2]

B = prj_mat[3][2]

z_eye = B / (A + z_ndc)
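
A small JavaScript sketch of this reconstruction (an illustration; prjMat is assumed to be a column-major 4x4 array indexed as prjMat[column][row], as in OpenGL):

// Eye-space distance from a depth-buffer value, given the near/far planes:
function depthToZEye(depth, n, f) {
  const zNdc = 2.0 * depth - 1.0;
  return 2.0 * n * f / (f + n - zNdc * (f - n));
}

// The same reconstruction reading A and B from the projection matrix:
function depthToZEyeFromMatrix(depth, prjMat) {
  const zNdc = 2.0 * depth - 1.0;
  const A = prjMat[2][2];
  const B = prjMat[3][2];
  return B / (A + zNdc);
}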



The relation between the projected area in view space and the Z coordinate of the view space is linear. It depends on the field of view angle and the aspect ratio.


The normalized device size can be transformed to a size in view space like this:

aspect = w / h

tanFov = tan( fov_y * 0.5 );


size_x = ndc_size_x * (tanFov * aspect) * z_eye;

size_y = ndc_size_y * tanFov * z_eye;
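
As a JavaScript sketch (an illustration; fovY is in radians and the names are assumptions):

// View-space size of an NDC-sized extent at eye depth zEye,
// assuming a symmetric frustum:
function ndcSizeToView(ndcSizeX, ndcSizeY, zEye, fovY, aspect) {
  const tanFov = Math.tan(fovY * 0.5);
  return {
    x: ndcSizeX * tanFov * aspect * zEye,
    y: ndcSizeY * tanFov * zEye,
  };
}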



If the perspective projection matrix is known and the projection is symmetric (the line of sight is in the center of the viewport and the field of view is not displaced), this can be done as follows:


size_x = ndc_size_x * z_eye / prj_mat[0][0];

size_y = ndc_size_y * z_eye / prj_mat[1][1];



Note that each position in normalized device coordinates can be transformed to view-space coordinates by the inverse projection matrix:


mat4 inversePrjMat = inverse( prjMat );

vec4 viewPosH      = inversePrjMat * vec4( ndc_x, ndc_y, 2.0 * depth - 1.0, 1.0 );

vec3 viewPos       = viewPosH.xyz / viewPosH.w;



This means the unprojected rectangle at a specific depth can be calculated like this:


vec4 viewLowerLeftH  = inversePrjMat * vec4( -1.0, -1.0, 2.0 * depth - 1.0, 1.0 );

vec4 viewUpperRightH = inversePrjMat * vec4(  1.0,  1.0, 2.0 * depth - 1.0, 1.0 );

vec3 viewLowerLeft   = viewLowerLeftH.xyz / viewLowerLeftH.w;

vec3 viewUpperRight  = viewUpperRightH.xyz / viewUpperRightH.w;



References:

https://stackoverflow.com/questions/46578529/how-to-compute-the-size-of-the-rectangle-that-is-visible-to-the-camera-at-a-give

Monday, March 22, 2021

Converting Lat/Lon to Mercator Coordinates

The Mercator map projection is a special limiting case of the Lambert Conic Conformal map projection with the equator as the single standard parallel. All other parallels of latitude are straight lines and the meridians are also straight lines at right angles to the equator, equally spaced. It is the basis for the transverse and oblique forms of the projection. It is little used for land mapping purposes but is in almost universal use for navigation charts. As well as being conformal, it has the particular property that straight lines drawn on it are lines of constant bearing. Thus navigators may derive their course from the angle the straight course line makes with the meridians. [1.]



The formulas to derive projected Easting and Northing coordinates from spherical latitude φ and longitude λ are:


E = FE + R (λ – λₒ)

N = FN + R ln[tan(π/4 + φ/2)]   


where λₒ is the longitude of the natural origin and FE and FN are the false easting and false northing. In spherical Mercator those values are not used, so the formulas simplify to E = R λ and N = R ln[tan(π/4 + φ/2)].


Below is a JavaScript version of the computation (originally pseudocode, so it adapts easily to any other programming language):


const latitude  = 41.145556; // (φ)
const longitude = -73.995;   // (λ)

const mapWidth  = 200;
const mapHeight = 100;

// get x value
const x = (longitude + 180) * (mapWidth / 360);

// convert latitude from degrees to radians
const latRad = latitude * Math.PI / 180;

// get y value
const mercN = Math.log(Math.tan((Math.PI / 4) + (latRad / 2)));
const y = (mapHeight / 2) - (mapWidth * mercN / (2 * Math.PI));




References:

https://www.reddit.com/r/educationalgifs/comments/5lhk8y/how_the_mercator_projection_distorts_the_poles/ => Nice video on Mercator projection

https://stackoverflow.com/questions/14329691/convert-latitude-longitude-point-to-a-pixels-x-y-on-mercator-projection

Friday, March 19, 2021

Mapbox GL ThreeJS Overlay - Display two buildings and draw lines between them

Two models can be loaded as shown below:


var loader = new THREE.GLTFLoader();

loader.load(
  // "https://docs.mapbox.com/mapbox-gl-js/assets/34M_17/34M_17.gltf",
  "./models/building.glb",
  function (gltf) {
    gltf.scene.position.set(70, 70, 0);
    this.scene.add(gltf.scene);
    mesh1 = gltf.scene;
    updateLine();
  }.bind(this)
);

loader.load(
  "./models/building.glb",
  function (gltf) {
    this.scene.add(gltf.scene);
    mesh2 = gltf.scene;
    updateLine();
  }.bind(this)
);


Once both models are loaded, the line can be drawn. This is a very basic example, so it uses the mesh1/mesh2 variables to know whether the objects have been loaded.

Below is the code segment that draws the line:


function updateLine() {
  console.log(" -- updateLine ENTRY --", mesh1 + ", " + mesh2);
  if (mesh1 && mesh2) {
    const material = new THREE.LineBasicMaterial({
      color: 0x0000ff,
    });

    const points = [];
    // note: pushing mesh.position stores a live reference; clone() it
    // if the line should not follow later mesh movements
    let mesh1LinePos = mesh1.position;
    //   mesh1LinePos.z = mesh1LinePos.y + 20;
    points.push(mesh1LinePos);
    let mesh2LinePos = mesh2.position;
    //   mesh2LinePos.z = mesh2LinePos.y + 20;
    points.push(mesh2LinePos);

    const geometry = new THREE.BufferGeometry().setFromPoints(points);

    const line = new THREE.Line(geometry, material);
    sceneRef.add(line);
  }
}



References:

https://docs.mapbox.com/mapbox-gl-js/example/add-3d-model/

What is the Universal Transverse Mercator (UTM) Coordinate System

The Universal Transverse Mercator (UTM) is a system for assigning coordinates to locations on the surface of the Earth. Like the traditional method of latitude and longitude, it is a horizontal position representation, which means it ignores altitude and treats the earth as a perfect ellipsoid. However, it differs from global latitude/longitude in that it divides earth into 60 zones and projects each to the plane as a basis for its coordinates. Specifying a location means specifying the zone and the xy coordinate in that plane. The projection from spheroid to a UTM zone is some parameterization of the transverse Mercator projection. The parameters vary by nation or region or mapping system.

Most zones in UTM span 6 degrees of longitude, and each has a designated central meridian. The scale factor at the central meridian is specified to be 0.9996 of true scale for most UTM systems in use.[1][2]


The National Oceanic and Atmospheric Administration (NOAA) website states that the system was developed by the United States Army Corps of Engineers, starting in the early 1940s. However, a series of aerial photos found in the Bundesarchiv-Militärarchiv (the military section of the German Federal Archives) apparently dating from 1943–1944 bear the inscription UTMREF followed by grid letters and digits, and projected according to the transverse Mercator,[4] a finding that would indicate that something called the UTM Reference system was developed in the 1942–43 time frame by the Wehrmacht. It was probably carried out by the Abteilung für Luftbildwesen (Department for Aerial Photography). From 1947 onward the US Army employed a very similar system, but with the now-standard 0.9996 scale factor at the central meridian as opposed to the German 1.0.[4] For areas within the contiguous United States the Clarke Ellipsoid of 1866[5] was used. For the remaining areas of Earth, including Hawaii, the International Ellipsoid[6] was used. The World Geodetic System WGS84 ellipsoid is now generally used to model the Earth in the UTM coordinate system, which means current UTM northing at a given point can differ up to 200 meters from the old. For different geographic regions, other datum systems can be used.

Prior to the development of the Universal Transverse Mercator coordinate system, several European nations demonstrated the utility of grid-based conformal maps by mapping their territory during the interwar period. Calculating the distance between two points on these maps could be performed more easily in the field (using the Pythagorean theorem) than was possible using the trigonometric formulas required under the graticule-based system of latitude and longitude. In the post-war years, these concepts were extended into the Universal Transverse Mercator/Universal Polar Stereographic (UTM/UPS) coordinate system, which is a global (or universal) system of grid-based maps.

The transverse Mercator projection is a variant of the Mercator projection, which was originally developed by the Flemish geographer and cartographer Gerardus Mercator, in 1570. This projection is conformal, which means it preserves angles and therefore shapes across small regions. However, it distorts distance and area.



References:

https://en.wikipedia.org/wiki/Universal_Transverse_Mercator_coordinate_system

Thursday, March 18, 2021

What are the different Anti-Aliasing techniques

Pretty much all AA these days is MSAA or some tweaked, optimized version of it.

Super-Sampled Anti-Aliasing (SSAA). The oldest trick in the book - I list it as universal because you can use it pretty much anywhere: forward or deferred rendering, it also anti-aliases alpha cutouts, and it gives you better texture sampling at high anisotropy too. Basically, you render the image at a higher resolution and down-sample with a filter when done. Sharp edges become anti-aliased as they are down-sized. Of course, there's a reason why people don't use SSAA: it costs a fortune. Whatever your fill rate bill, it's 4x for even minimal SSAA.


Multi-Sampled Anti-Aliasing (MSAA). This is what you typically have in hardware on a modern graphics card. The graphics card renders to a surface that is larger than the final image, but in shading each "cluster" of samples (that will end up in a single pixel on the final screen) the pixel shader is run only once. We save a ton of fill rate, but we still burn memory bandwidth. This technique does not anti-alias any effects coming out of the shader, because the shader runs at 1x, so alpha cutouts are jagged. This is the most common way to run a forward-rendering game. MSAA does not work for a deferred renderer because lighting decisions are made after the MSAA is "resolved" (down-sized) to its final image size.


Coverage Sample Anti-Aliasing (CSAA). A further optimization on MSAA from NVidia [ed: ATI has an equivalent]. Besides running the shader at 1x and the framebuffer at 4x, the GPU's rasterizer is run at 16x. So while the depth buffer produces better anti-aliasing, the intermediate shades of blending produced are even better.
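
For context (not part of the quoted answer): in WebGL/Three.js, hardware MSAA is requested with the antialias flag when the renderer is created; the browser picks the sample count.

// Request hardware MSAA for the underlying WebGL context:
const renderer = new THREE.WebGLRenderer({ antialias: true });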



References:

https://gaming.stackexchange.com/questions/31801/what-are-the-differences-between-the-different-anti-aliasing-multisampling-set



Understanding perspective camera in ThreeJS

The link below gives a great illustration of the Three.js camera. Hats off to the author.


Essentially these are the notable points (a minimal camera sketch follows the list):


  1. When the FOV is adjusted, the apparent size of the object changes: the larger the FOV, the smaller the object appears. Increasing the FOV moves neither the camera nor the object. [Two figures in the original show an FOV of X and the same scene with a higher FOV.]





  2. We can adjust the Z position of the camera. When doing this, the near and far planes come into play: when the near or far plane ends up in front of or behind the actual object location, the object is clipped and becomes invisible. [Two pictures in the original show this.]








  3. When the Y position of the camera is adjusted, the object appears above or below the Y=0 axis. [Two pictures in the original show positive and negative increments.]
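
A minimal sketch of those three adjustments (an illustration, not from the linked article; assumes THREE is imported):

const camera = new THREE.PerspectiveCamera(
  45,                                      // fov in degrees: a larger fov makes the object appear smaller
  window.innerWidth / window.innerHeight,  // aspect ratio
  0.1,                                     // near plane
  1000                                     // far plane: objects outside [near, far] are clipped
);
camera.position.z = 5;           // pull the camera back along Z
camera.position.y = 2;           // raising the camera makes the object appear lower in the view
camera.updateProjectionMatrix(); // required after changing fov/near/far on an existing camera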








References:

https://observablehq.com/@grantcuster/understanding-scale-and-the-three-js-perspective-camera

Wednesday, March 17, 2021

Troubles with renewing Lets Encrypt certificates

[ec2-user@ip-172-31-31-171 letsencrypt]$ sudo certbot renew

Saving debug log to /var/log/letsencrypt/letsencrypt.log


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Processing /etc/letsencrypt/renewal/kgf-api.kgf.com.conf

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Cert is due for renewal, auto-renewing...

Could not choose appropriate plugin: The manual plugin is not working; there may be problems with your existing configuration.

The error was: PluginError('An authentication script must be provided with --manual-auth-hook when using the manual plugin non-interactively.',)

Attempting to renew cert (kgf-api.kgf.com) from /etc/letsencrypt/renewal/kgf-api.kgf.com.conf produced an unexpected error: The manual plugin is not working; there may be problems with your existing configuration.

The error was: PluginError('An authentication script must be provided with --manual-auth-hook when using the manual plugin non-interactively.',). Skipping.

All renewal attempts failed. The following certs could not be renewed:

  /etc/letsencrypt/live/kgf-api.kgf.com/fullchain.pem (failure)


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -


All renewal attempts failed. The following certs could not be renewed:

  /etc/letsencrypt/live/kgf-api.kgf.com/fullchain.pem (failure)

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

1 renew failure(s), 0 parse failure(s)

[ec2-user@ip-172-31-31-171 letsencrypt]$ 



Executing the command below renewed the cert, but for authentication the server had to be configured to serve the ACME challenge file at the well-known URL (see the nginx location block further down). As the error above states, unattended renewal with the manual plugin would instead require an --manual-auth-hook script.



[ec2-user@ip-172-31-31-171 renewal]$ sudo certbot certonly --manual -d kgf-api.kgf.com

Saving debug log to /var/log/letsencrypt/letsencrypt.log

Plugins selected: Authenticator manual, Installer None

Cert is due for renewal, auto-renewing...

Renewing an existing certificate

Performing the following challenges:

http-01 challenge for kgf-api.kgf.com


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

NOTE: The IP of this machine will be publicly logged as having requested this

certificate. If you're running certbot in manual mode on a machine that is not

your server, please ensure you're okay with that.


Are you OK with your IP being logged?

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

(Y)es/(N)o: Y


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Create a file containing just this data:


XQwR7HJ00JL7mFwOnmfkvLBGdDUMqlL5wsCdAId2SPg.Yie0huA1hMl-2udhDAN4lC6B-Gb-x_x5qGnMAeIVIaY


And make it available on your web server at this URL:


http://kgf-api.kgf.com/.well-known/acme-challenge/XQwR7HJ00JL7mFwOnmfkvLBGdDUMqlL5wsCdAId2SPg





location /.well-known/acme-challenge/XQwR7HJ00JL7mFwOnmfkvLBGdDUMqlL5wsCdAId2SPg {

    root /usr/local/var/www;

}



- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Press Enter to Continue






Tuesday, March 16, 2021

iOS How to get the current Ad SDK version

The following can be used to get the version information:

NSLog("Ad Mob request version: \(GADRequest.sdkVersion())")

This prints something like the following:

Ad Mob request version: afma-sdk-i-v7.57.0

References:

https://stackoverflow.com/questions/48203987/how-to-check-current-sdk-version-of-google-admob-ios


Tuesday, March 9, 2021

ThreeJS Gamma Correction since r112


What is the correct way to get gamma-corrected output from a Three.js scene with no textures? For example, a scene with geometries of various solid-coloured Basic, Lambert and Standard materials, lights and shadows, where I want to correct for gamma 2.2.


Since r112, this should be used instead of the old gamma settings:

renderer.outputEncoding = THREE.sRGBEncoding;


Normally, JPEG textures are sRGB encoded. So if you want to retain the original colors as well as possible, you should define the texture's encoding when loading it manually, like so:

texture.encoding = THREE.sRGBEncoding;


The renderer can then decode the texels into linear space for the lighting equations in the shader. Note that this only happens for built-in materials. When implementing custom materials, you have to perform the decode yourself.


Custom shader materials are not automatically part of the renderer's output encoding process. This only happens if you include the encodings_fragment shader chunk in your fragment shader code.
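
Putting the two settings together, a minimal sketch (the texture path is a placeholder):

const renderer = new THREE.WebGLRenderer();
renderer.outputEncoding = THREE.sRGBEncoding; // encode the final render output to sRGB

const texture = new THREE.TextureLoader().load("textures/myTexture.jpg"); // placeholder path
texture.encoding = THREE.sRGBEncoding; // mark the texels as sRGB so built-in materials decode them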


References:

https://discourse.threejs.org/t/canonical-gamma-correction-since-r112/11756