Sunday, September 27, 2020

ThreeJS how to create a Plane with a Sphere and a Box sitting on top of it

Basically we need to create a BoxGeometry, a SphereGeometry and a PlaneGeometry.

Below is the basic code for it. 




<script type="module">


import * as THREE from './threejs/build/three.module.js';


import { GUI } from './threejs/examples/jsm/libs/dat.gui.module.js';


import { OrbitControls } from './threejs/examples/jsm/controls/OrbitControls.js';


var renderer, scene, camera;


var spotLight, lightHelper, shadowCameraHelper;


var gui;


renderer = new THREE.WebGLRenderer();

renderer.setPixelRatio(window.devicePixelRatio);

renderer.setSize(window.innerWidth, window.innerHeight);

document.body.appendChild(renderer.domElement);


camera = new THREE.PerspectiveCamera(35, window.innerWidth / window.innerHeight, 300, 10000);


scene = new THREE.Scene();


//LIGHTS 

var light = new THREE.AmbientLight(0xffffff, 0.5);

scene.add(light);


var light2 = new THREE.PointLight(0xffffff, 0.5);

scene.add(light2);


var material = new THREE.MeshBasicMaterial({

color: 0xff0000,

transparent: true,

opacity: 1,

wireframe: true

});


var geometry = new THREE.BoxGeometry(100, 100, 100);

var mesh = new THREE.Mesh(geometry, material);

mesh.position.z = -1000;

mesh.position.x = -100;

scene.add(mesh);


var geometry2 = new THREE.SphereGeometry(50, 20, 20);

var mesh2 = new THREE.Mesh(geometry2, material);

mesh2.position.z = -1000;

mesh2.position.x = -100;

scene.add(mesh2);


var geometry3 = new THREE.PlaneGeometry(10000, 1000, 100, 100);

var mesh3 = new THREE.Mesh(geometry3, material);

mesh3.rotation.x = -90 * Math.PI / 180;

mesh3.position.y = -100;

scene.add(mesh3);


render();


function render() {

mesh.rotation.x += 0.01;

mesh.rotation.y += 0.01;


renderer.render(scene, camera);

requestAnimationFrame(render);

}



</script>


References:

https://www.youtube.com/watch?v=HsE_7C1tRTo&list=PL08jItIqOb2qyMOhtEUoLh100KpccQiRf&index=4


ThreeJS MeshLambertMaterial



A material for non-shiny surfaces, without specular highlights.


The material uses a non-physically based Lambertian model for calculating reflectance. This can simulate some surfaces (such as untreated wood or stone) well, but cannot simulate shiny surfaces with specular highlights (such as varnished wood).


Shading is calculated using a Gouraud shading model. This calculates shading per vertex (i.e. in the vertex shader) and interpolates the results over the polygon's faces.


Due to the simplicity of the reflectance and illumination models, performance will be greater when using this material over the MeshPhongMaterial, MeshStandardMaterial or MeshPhysicalMaterial, at the cost of some graphical accuracy.
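
As a quick illustration, here is a minimal sketch that reuses the sphere from the previous post but lights it with a MeshLambertMaterial (the color and positions are arbitrary):

var lambertMaterial = new THREE.MeshLambertMaterial({ color: 0x00ff00 });

var sphere = new THREE.Mesh(new THREE.SphereGeometry(50, 20, 20), lambertMaterial);
sphere.position.set(100, 0, -1000);
scene.add(sphere);

// unlike MeshBasicMaterial, a Lambert material only shows up if the scene has lights
scene.add(new THREE.PointLight(0xffffff, 1));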




References:

https://threejs.org/docs/#api/en/materials/MeshLambertMaterial


ThreeJS what is point light and ambient light

 

A light that gets emitted from a single point in all directions. A common use case for this is to replicate the light emitted from a bare lightbulb.


var light = new THREE.PointLight( 0xff0000, 1, 100 );

light.position.set( 50, 50, 50 );

scene.add( light );


AmbientLight

This light globally illuminates all objects in the scene equally.


This light cannot be used to cast shadows as it does not have a direction.


var light = new THREE.AmbientLight( 0x404040 ); // soft white light

scene.add( light );



References:

https://threejs.org/docs/#api/en/lights/PointLight


ThreeJS what are different light sources



AmbientLight

This light globally illuminates all objects in the scene equally.

This light cannot be used to cast shadows as it does not have a direction.


var light = new THREE.AmbientLight( 0x404040 ); // soft white light

scene.add( light );


AmbientLightProbe

Light probes are an alternative way of adding light to a 3D scene. AmbientLightProbe is the light estimation data of a single ambient light in the scene.


Light probes are an alternative way of adding light to a 3D scene. Unlike classical light sources (e.g. directional, point or spot lights), light probes do not emit light. Instead they store information about light passing through 3D space. During rendering, the light that hits a 3D object is approximated by using the data from the light probe.


Light probes are usually created from (radiance) environment maps. The class LightProbeGenerator can be used to create light probes from instances of CubeTexture or WebGLCubeRenderTarget. However, light estimation data could also be provided in other forms e.g. by WebXR. This enables the rendering of augmented reality content that reacts to real world lighting.


The current probe implementation in three.js supports so-called diffuse light probes. This type of light probe is functionally equivalent to an irradiance environment map.
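
A minimal sketch of how a diffuse light probe is typically built from an environment map, assuming cubeTexture is a CubeTexture that has already been loaded:

import { LightProbeGenerator } from './threejs/examples/jsm/lights/LightProbeGenerator.js';

var lightProbe = new THREE.LightProbe();
scene.add(lightProbe);

// estimate the diffuse lighting stored in the environment map and copy it into the probe
lightProbe.copy(LightProbeGenerator.fromCubeTexture(cubeTexture));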


DirectionalLight

A light that gets emitted in a specific direction. This light will behave as though it is infinitely far away and the rays produced from it are all parallel. The common use case for this is to simulate daylight; the sun is far enough away that its position can be considered to be infinite, and all light rays coming from it are parallel.



A Note about Position, Target and rotation

A common point of confusion for directional lights is that setting the rotation has no effect. This is because three.js's DirectionalLight is the equivalent to what is often called a 'Target Direct Light' in other applications.


This means that its direction is calculated as pointing from the light's position to the target's position (as opposed to a 'Free Direct Light' that just has a rotation component).



// White directional light at half intensity shining from the top.

var directionalLight = new THREE.DirectionalLight( 0xffffff, 0.5 );

scene.add( directionalLight );


HemisphereLight

A light source positioned directly above the scene, with color fading from the sky color to the ground color.


This light cannot be used to cast shadows.



var light = new THREE.HemisphereLight( 0xffffbb, 0x080820, 1 );

scene.add( light );



HemisphereLightProbe

Light probes are an alternative way of adding light to a 3D scene. HemisphereLightProbe is the light estimation data of a single hemisphere light in the scene. For more information about light probes, go to LightProbe.





Light

Abstract base class for lights - all other light types inherit the properties and methods described here.



PointLight

A light that gets emitted from a single point in all directions. A common use case for this is to replicate the light emitted from a bare lightbulb.


This light can cast shadows - see PointLightShadow


RectAreaLight

RectAreaLight emits light uniformly across the face of a rectangular plane. This light type can be used to simulate light sources such as bright windows or strip lighting.


Important Notes:


There is no shadow support.

Only MeshStandardMaterial and MeshPhysicalMaterial are supported.

You have to include RectAreaLightUniformsLib into your scene and call init().



var width = 10;

var height = 10;

var intensity = 1;

var rectLight = new THREE.RectAreaLight( 0xffffff, intensity,  width, height );

rectLight.position.set( 5, 5, 0 );

rectLight.lookAt( 0, 0, 0 );

scene.add( rectLight )


rectLightHelper = new THREE.RectAreaLightHelper( rectLight );

rectLight.add( rectLightHelper );


SpotLight

This light gets emitted from a single point in one direction, along a cone that increases in size the further from the light it gets.


This light can cast shadows - see the SpotLightShadow


// white spotlight shining from the side, casting a shadow


var spotLight = new THREE.SpotLight( 0xffffff );

spotLight.position.set( 100, 1000, 100 );


spotLight.castShadow = true;


spotLight.shadow.mapSize.width = 1024;

spotLight.shadow.mapSize.height = 1024;


spotLight.shadow.camera.near = 500;

spotLight.shadow.camera.far = 4000;

spotLight.shadow.camera.fov = 30;


scene.add( spotLight );





References:

https://threejs.org/docs/#api/en/lights/

 

Saturday, September 26, 2020

What is glTF file format

glTF (derivative short form of Graphics Library Transmission Format or GL Transmission Format) is a file format for 3D scenes and models using the JSON standard. It is an API-neutral runtime asset delivery format developed by the Khronos Group 3D Formats Working Group. It was announced at HTML5DevConf 2016. This format is intended to be an efficient, interoperable format with minimum file size and runtime processing by apps. As such, its creators have described it as the "JPEG of 3D." glTF also defines a common publishing format for 3D content tools and services.



References:

https://en.wikipedia.org/wiki/GlTF


ThreeJS glTF loader


Below was the code executed. 

import { GLTFLoader } from './threejs/examples/jsm/loaders/GLTFLoader.js';


var loader = new GLTFLoader();

loader.load(

// resource URL

'./models/BoomBox.gltf',

// called when the resource is loaded

function (gltf) {

scene.add(gltf.scene);

},

// called while loading is progressing

function (xhr) {

console.log((xhr.loaded / xhr.total * 100) + '% loaded');

},

// called when loading has errors

function (error) {

console.log('An error happened', error);

}

);



But this was getting into error like this below 


An error happened Error: THREE.GLTFLoader: Attempting to load .dds texture without importing DDSLoader

    at new GLTFTextureDDSExtension (GLTFLoader.js:415)

    at GLTFLoader.parse (GLTFLoader.js:318)

    at Object.onLoad (GLTFLoader.js:159)

    at XMLHttpRequest.<anonymous> (three.module.js:36508)
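
A sketch of the usual fix for the DDS error, assuming the GLTFLoader of that release still exposes the setDDSLoader hook and that the DDSLoader example module sits alongside it:

import { DDSLoader } from './threejs/examples/jsm/loaders/DDSLoader.js';

var loader = new GLTFLoader();
// register the DDS texture loader before calling load()
loader.setDDSLoader(new DDSLoader());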



This error was not present for some of the glTF files.

However, even with the simple Duck glTF, the error below was happening:


three.js:35 THREE.Object3D.add: object not an instance of THREE.Object3D. Group {uuid: "197AE6DC-104B-4563-9FAA-E49078A3484F", name: "", type: "Group", parent: null, children: Array(1), …}



References:

https://threejs.org/docs/#examples/en/loaders/GLTFLoader


Threejs loading models

3D models are available in hundreds of file formats, each with different purposes, assorted features, and varying complexity. Although three.js provides many loaders, choosing the right format and workflow will save time and frustration later on. Some formats are difficult to work with, inefficient for realtime experiences, or simply not fully supported at this time.



Where possible, we recommend using glTF (GL Transmission Format). Both .GLB and .GLTF versions of the format are well supported. Because glTF is focused on runtime asset delivery, it is compact to transmit and fast to load. Features include meshes, materials, textures, skins, skeletons, morph targets, animations, lights, and cameras.


Below are some of the tool vendors whose tools support glTF export:


Blender by the Blender Foundation

Substance Painter by Allegorithmic

Modo by Foundry

Toolbag by Marmoset

Houdini by SideFX

Cinema 4D by MAXON

COLLADA2GLTF by the Khronos Group

FBX2GLTF by Facebook

OBJ2GLTF by Analytical Graphics Inc


When glTF is not an option, popular formats such as FBX, OBJ, or COLLADA are also available and regularly maintained.


Loading

Only a few loaders (e.g. ObjectLoader) are included by default with three.js — others should be added to your app individually.


import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';


Once you've imported a loader, you're ready to add a model to your scene. Syntax varies among different loaders; when using another format, check the examples and documentation for that loader. For glTF, usage would look like this:



import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

var loader = new GLTFLoader();

loader.load( 'path/to/model.glb', function ( gltf ) {


scene.add( gltf.scene );


}, undefined, function ( error ) {


console.error( error );


} );


Troubleshooting

You've spent hours modeling an artisanal masterpiece, you load it into the webpage, and — oh no! 😭 It's distorted, miscolored, or missing entirely. Start with these troubleshooting steps:


Check the JavaScript console for errors, and make sure you've used an onError callback when calling .load() to log the result.

View the model in another application. For glTF, drag-and-drop viewers are available for three.js and babylon.js. If the model appears correctly in one or more applications, file a bug against three.js. If the model cannot be shown in any application, we strongly encourage filing a bug with the application used to create the model.

Try scaling the model up or down by a factor of 1000. Many models are scaled differently, and large models may not appear if the camera is inside the model.

Try to add and position a light source. The model may be hidden in the dark.

Look for failed texture requests in the network tab, like C:\\Path\To\Model\texture.jpg. Use paths relative to your model instead, such as images/texture.jpg — this may require editing the model file in a text editor.



References:

https://threejs.org/docs/#manual/en/introduction/Loading-3D-models

Wednesday, September 23, 2020

Data visualisation in Tableau, D3, R, Python

A well thought out visualization peels back the layers surrounding a raw dataset.

Data visualisation in R

The creation of the ggplot2 library has made R the go-to tool for data visualization (for programmers at least!). I started my own data science journey using R and was instantly enthralled by the beauty and power of ggplot.

The BBC data team has actually developed and released an R package and an R cookbook for generating visualizations in its house style. The R package is called bbplot. It provides functions for creating and exporting visualizations made in ggplot in the style used by the BBC data team.

references:

https://www.analyticsvidhya.com/blog/2019/08/11-data-visualizations-python-r-tableau-d3js/

The mu editor for python programming

 In the early days of my Python adventure, I used IDLE, Python's integrated development environment. It was much easier than entering commands into the Python shell, plus I could write and save programs for later use. I took some online courses and read many excellent books about Python programming. I taught teachers and students how to create turtle graphics using IDLE.


IDLE was a big improvement, but at PyConUS 2019 in Cleveland, I saw a presentation by Nicholas Tollervey that changed the way I learned and taught Python. Nick is an educator who created Mu, a Python editor specifically for young programmers (and even older ones like me). Mu can be installed on Linux, macOS, and Windows. It's easy to use and comes with excellent documentation and tutorials.


references:

https://opensource.com/article/20/9/teach-python-mu

Sunday, September 20, 2020

A quick threejs 3D model

Three.js is one of the most popular JavaScript libraries for creating and animating 3D computer graphics in a web browser using WebGL. It’s also a great tool for creating 3D games for web browsers.


Since Three.js is based on JavaScript, it’s relatively easy to add any interactivity between 3D objects and user interfaces, such as keyboard and mouse. This makes the library perfectly suitable for making 3D games on the web.


One of the main advantages is that Three.js has built-in PBR rendering, which makes rendered graphics more accurate.


Below are some of the Cons for this library


No rendering pipeline: This makes a lot of modern rendering techniques impossible/infeasible to implement with Three.js

Not a game engine: If you’re looking for features beyond rendering – you won’t find many here

Geared toward novices: Because the API caters to novices, many advanced features are hidden

Lack of support: There is no built-in support for spatial indexing, making exact raycasting, frustum culling, and collision detection hopelessly inefficient in complex scenarios


import * as THREE from 'js/three.module.js';


var camera, scene, renderer;

var geometry, material, mesh;


init();

animate();


function init() {

  // assign to the module-level vars declared above (no const/let here)
  camera = new THREE.PerspectiveCamera( 60, window.innerWidth / window.innerHeight, .01, 20 );
  camera.position.z = 1;

  scene = new THREE.Scene();

  geometry = new THREE.BoxGeometry( 0.5, 0.5, 0.5 );
  material = new THREE.MeshNormalMaterial();

  mesh = new THREE.Mesh( geometry, material );
  scene.add( mesh );

  renderer = new THREE.WebGLRenderer( { antialias: true } );
  renderer.setSize( window.innerWidth, window.innerHeight );
  document.body.appendChild( renderer.domElement );

}


function animate() {

  requestAnimationFrame( animate );

  mesh.rotation.x += .01;
  mesh.rotation.y += .02;

  renderer.render( scene, camera );

}



References:

https://blog.logrocket.com/top-6-javascript-and-html5-game-engines/

Why you should stick to React

The purpose of Web Components

Web Components is a set of different technologies that are used together to help developers write UI elements that are semantic, reusable, and properly isolated.

A custom element is a way of defining an HTML element by using the browser's JavaScript API. A custom element has its own semantic tag and lifecycle methods, similar to a React component (see the sketch after these definitions).

Shadow DOM is a way of isolating DOM elements and scoping CSS locally to prevent breaking changes from affecting other elements.

HTML template is a way of writing invisible HTML elements that acts as a template that you can operate on using JavaScript’s query selector.
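
For the custom element piece, a minimal sketch with a hypothetical user-card tag looks like this:

class UserCard extends HTMLElement {
  connectedCallback() {
    // runs when the element is attached to the DOM, roughly like componentDidMount in React
    this.textContent = 'User: ' + this.getAttribute('name');
  }
}

customElements.define('user-card', UserCard);
// usable in HTML as <user-card name="Ada"></user-card>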


The shadow DOM example

Shadow DOM allows you to write HTML elements that were scoped from the actual DOM tree. It’s attached to the parent element, but won’t be considered as its child element by the browser. Here is an example:
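
(A minimal sketch; the "example" host element and the "shadow-button" id match the snippet further below, and the host markup is assumed to be something like <div id="example"></div>.)

const host = document.getElementById('example');
const shadow = host.attachShadow({ mode: 'open' });

const style = document.createElement('style');
style.textContent = 'button { color: white; background: tomato; }'; // scoped to this shadow root only

const button = document.createElement('button');
button.id = 'shadow-button';
button.textContent = 'Shadow button';

shadow.appendChild(style);
shadow.appendChild(button);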



Any code you write inside the shadow DOM will be encapsulated from the code outside of it. One of the benefits of using shadow DOM is that any CSS code you write will be local and won’t affect any element outside of it.

When you inspect the shadow element, you'll see it marked with #shadow-root.


The browser will return null when you try to select the shadow button with document.getElementById('shadow-button'). You need to select the parent element and grab its shadowRoot object first:


const el = document.getElementById('example').shadowRoot

el.getElementById('shadow-button') 


HTML template example

HTML template allows you to write invisible HTML elements that you can iterate through with JavaScript in order to display dynamic data. To write one, you need to wrap the elements inside a <template> tag:


<template id="people-template">

  <li>

    <span class="name"></span> &mdash; 

    <span class="age"></span>

  </li>

</template>

<ul id="people"></ul>
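
To actually render something from it, a sketch like the following clones the template once per item (the people array here is just sample data):

const template = document.getElementById('people-template');
const list = document.getElementById('people');

const people = [{ name: 'Ada', age: 36 }, { name: 'Alan', age: 41 }];

people.forEach(person => {
  // clone the template content and fill in the placeholders
  const row = template.content.cloneNode(true);
  row.querySelector('.name').textContent = person.name;
  row.querySelector('.age').textContent = person.age;
  list.appendChild(row);
});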




References

https://blog.bitsrc.io/web-component-why-you-should-stick-to-react-c56d879a30e1



Linux 2 : What does a file name in red with a black background indicate

What you have there is a dangling symlink, or a symlink pointing to a file or directory which no longer exists.

A symlink itself really has no filesize, because it isn't a file. Symlinks are stored within the inodes themselves, meaning they have no real contents or size, but are instead pointers to other files on the disk.

The output of file libCLHEP-Exceptions-2.1.3.1.a should reveal where it's pointing to.


references

https://superuser.com/questions/543397/what-does-it-mean-a-red-filename-shown-with-black-background

Sails - Specify SSL certificate

 SSL/TLS (transport-layer security) is critical for preventing potential man-in-the-middle attacks. Without a protocol like SSL/TLS, web basics like securely transmitting login credentials and credit card numbers would be much more complicated and troublesome. SSL/TLS is not only important for HTTP requests (https://), it's also necessary for WebSockets (over wss://). Fortunately, you only need to worry about configuring SSL settings in one place: sails.config.ssl.


SSL and load balancers

#

The sails.config.ssl setting is only relevant if you want your Sails process to manage SSL. This isn't always the case. For example, if you expect your Sails app to get more traffic over time, it will need to scale to multiple servers, necessitating a load balancer. Most of the time, for performance and simplicity, it is a good idea to terminate SSL at your load balancer. If you do that, then since SSL/TLS will have already been dealt with before packets reach your Sails app, you won't need to use the sails.config.ssl setting at all. (This is also true if you're using a PaaS like Heroku, or almost any other host with a built-in load balancer.)



Use sails.config.ssl to set up basic SSL server options, or to indicate that you will be specifying more advanced options in sails.config.http.serverOptions.


If you specify a dictionary, it should contain both key and cert keys, or a pfx key. The presence of those options indicates to Sails that your app should be lifted with an HTTPS server. If your app requires a more complex SSL setup (for example by using SNICallback), set sails.config.ssl to true and specify your advanced options in sails.config.http.serverOptions.


SSL configuration example


we'll assume you created a folder in your project, config/ssl/ and dumped your certificate/key files inside. Then, in one of your config files, include the following:


ssl: {

  ca: require('fs').readFileSync(require('path').resolve(__dirname,'../ssl/my-gd-bundle.crt')),

  key: require('fs').readFileSync(require('path').resolve(__dirname,'../ssl/my-ssl.key')),

  cert: require('fs').readFileSync(require('path').resolve(__dirname,'../ssl/my-ssl.crt'))

}



references:

https://sailsjs.com/documentation/reference/configuration/sails-config


Saturday, September 19, 2020

Lets Encrypt, How to assign certificate to Certbot

It then takes you to the below page for selecting the software that is used in the app:

https://certbot.eff.org/lets-encrypt/arch-other

1. SSH into the server

SSH into the server running your HTTP website as a user with sudo privileges.

#change to our home directory

cd

# Download and install the "Extra Packages for Enterprise Linux (EPEL)"

wget -O epel.rpm -nv https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

sudo yum install -y ./epel.rpm

# Install certbot for Apache (part of EPEL)

sudo yum install python2-certbot-apache.noarch

After this we should have the certbot in the path 

sudo certbot -a manual --preferred-challenges dns -d appname-api.mydomain.com

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Please deploy a DNS TXT record under the name

_acme-challenge.appname-api.mydomain.com with the following value:

3PrB7_2ddfl2q6oe7sh5PCVPNQudY3ZANezB4h5mfIU

The TXT record needs to be added with the challenge name (_acme-challenge.appname-api.mydomain.com) as its key.

references:

https://certbot.eff.org/

https://blog.lawrencemcdaniel.com/letsencrypt-amazon-linux2-apache/


Lets encrypt : DNS problem: NXDOMAIN looking up TXT for _acme-challenge

 

I ran this command:

sudo certbot certonly --server https://acme-v02.api.letsencrypt.org/directory --manual --preferred-challenges dns -d 'ccvitaal.nl,*.ccvitaal.nl'

Waiting for verification…

Cleaning up challenges

Failed authorization procedure. ccvitaal.nl (dns-01): urn:ietf:params:acme:error:dns :: DNS problem: NXDOMAIN looking up TXT for _acme-challenge.ccvitaal.nl

IMPORTANT NOTES:

The following errors were reported by the server:

Domain: ccvitaal.nl

Type: None

Detail: DNS problem: NXDOMAIN looking up TXT for

_acme-challenge.ccvitaal.nl

This is because the TXT records were not added with the right key.

You are adding those txt records to subdomain _acme-challenge.ccvitaal.nl but you should add them to subdomain _acme-challenge. Using below command should give you the last added txt records: dig _acme-challenge.ccvitaal.nl txt Right now…

references:

https://community.letsencrypt.org/t/solved-dns-problem-nxdomain-looking-up-txt-for-acme-challenge/70102


Vue 3 - what's in it?

I like that it's still a priority that Vue can be used with just a <script> tag with no build process at all. But it's ready for build processes too.

Vue 3.0 core can still be used via a simple <script> tag, but its internals have been re-written from the ground up into a collection of decoupled modules. The new architecture provides better maintainability, and allows end users to shave off up to half of the runtime size via tree-shaking.


If you specifically want to have a play with Vue Single File Components (SFCs, as they say, .vue files), we support them on CodePen in our purpose-built code editor for them. Go into Pen Settings > JavaScript and flip Vue 2 to Vue 3.


The train keeps moving too. This proposal to expose all component state to CSS is an awfully cool idea. I really like the idea of CSS having access to everything that is going on on a site. Stuff like global scroll and mouse position would be super cool. All the state happening on any given component? Heck yeah I’ll take it.


references:

https://css-tricks.com/vue-3/

What is Vue.js

Vue (pronounced /vjuː/, like view) is a progressive framework for building user interfaces. Unlike other monolithic frameworks, Vue is designed from the ground up to be incrementally adoptable. The core library is focused on the view layer only, and is easy to pick up and integrate with other libraries or existing projects. On the other hand, Vue is also perfectly capable of powering sophisticated Single-Page Applications when used in combination with modern tooling and supporting libraries.

Approachable

Already know HTML, CSS and JavaScript? Read the guide and start building things in no time!

Versatile

An incrementally adoptable ecosystem that scales between a library and a full-featured framework.

Performant

20KB min+gzip Runtime

Blazing Fast Virtual DOM

Minimal Optimization Efforts


Declarative Rendering

At the core of Vue.js is a system that enables us to declaratively render data to the DOM using straightforward template syntax:


<div id="counter">

  Counter: {{ counter }}

</div>



const Counter = {

  data() {

    return {

      counter: 0

    }

  }

}

Vue.createApp(Counter).mount('#counter')

We have already created our very first Vue app! This looks pretty similar to rendering a string template, but Vue has done a lot of work under the hood. The data and the DOM are now linked, and everything is now reactive. How do we know? Take a look at the example below, where the counter property increments every second, and you will see how the rendered DOM changes:
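
A sketch of that example, along the lines of the official docs (a variant of the Counter above where the mounted hook mutates the reactive counter every second):

const CounterWithInterval = {
  data() {
    return {
      counter: 0
    }
  },
  mounted() {
    // increment the counter every second; the bound DOM updates automatically
    setInterval(() => {
      this.counter++
    }, 1000)
  }
}

Vue.createApp(CounterWithInterval).mount('#counter')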


references:

https://v3.vuejs.org/

Automatic recovery of EC2 instance using CloudWatch

This is to recover an EC2 instance that becomes unhealthy. Navigate to the EC2 dashboard and choose the instance that we want to set up automatic recovery for,

then Actions -> Cloudwatch monitoring -> Add/Edit Alarms 



You will be prompted to create an alarm. Once the alarm is set, it can be viewed in the Alarms section.

references:

https://youtu.be/GCTroP5bduA

Firebase Analytics, How to add custom parameter reporting

Google Analytics for Firebase lets you specify up to 25 custom parameters per event

You can also identify up to 100 custom event parameters per project (50 numeric and 50 text) to include in reporting by registering those parameters with their corresponding events. Once you register your custom parameters, Google Analytics for Firebase displays a corresponding data card in each related event-detail report.


To register custom parameters for an event:


In Analytics for Firebase, navigate to your app.

Click Events.

In the row for the event you want to modify, click More > Edit parameter reporting.

In the Enter parameter name field, enter the name of the parameter you'd like to register.

If a match is found, select it in the list and click ADD.

If no match is found, click ADD.

Set the Type field to Text or Number. For numeric parameters, set the Unit of Measurement field.

Click SAVE, then click CONFIRM.

On the Events page, any event with registered parameters has a gray box next to the event name with the number of registered parameters for that event.


To edit registered parameters:


In the row for the event, click More > Edit parameter reporting.

Add new parameters per the instructions above, or click Delete to remove a parameter.

Click SAVE, then click CONFIRM.

The per-project count for registered parameters appears at the bottom of the list of parameters. As you enter parameters, the count updates. When the quota has been reached (100), that number appears in red, indicating that you cannot register any more.


When you register custom parameters, a data card for each parameter is added to the related event-detail report. However, it may take up to 24 hours for the data card to appear. During this 24-hour period, you may see (not set) appear as a parameter value. Once that initial 24-hour period has passed, you will see the expected parameter values from that point forward.


References:

https://support.google.com/firebase/answer/7397304?hl=en


Firebase Auth - Displays Domain Not verified

This error may occur when you host your app on a domain that is not SSL certified. In that case you have to whitelist your domain in the Firebase console.

Go to Firebase Console -> Authentication -> sign-in-method -> Authorized Domains and add your domain.

By default, localhost and any https:// domain are whitelisted.

references:

https://stackoverflow.com/questions/46578267/hostname-match-not-found-error-in-firebase-phone-authenticationwith-ionic#:~:text=5%20Answers&text=This%20error%20may%20occur%20when,Domains%20and%20add%20your%20domain.

Thursday, September 17, 2020

Are we able to attach SSL to IP address?

An SSL certificate is typically issued to a Fully Qualified Domain Name (FQDN) such as "https://www.domain.com". However, some organizations need an SSL certificate issued to a public IP address. This option allows you to specify a public IP address as the Common Name in your Certificate Signing Request (CSR). The issued certificate can then be used to secure connections directly with the public IP address (e.g., https://123.456.78.99.).

The CA/Browser Forum sets what is and is not valid in a certificate, and what CAs should reject.

According to their Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates document, CAs must, since 2015, not issue certificates where the common name or common alternate names fields contain a reserved IP or internal name, where reserved IP addresses are IPs that IANA has listed as reserved - which includes all NAT IPs - and internal names are any names that don't resolve on the public DNS.


Public IP addresses CAN be used (and the baseline requirements doc specifies what kinds of checks a CA must perform to ensure the applicant owns the IP).



references:

https://stackoverflow.com/questions/2043617/is-it-possible-to-have-ssl-certificate-for-ip-address-not-domain-name#:~:text=An%20SSL%20certificate%20is%20typically,Certificate%20Signing%20Request%20(CSR).

Firebase Analytics

Analytics automatically logs some events for you; you don't need to add any code to receive them. If your app needs to collect additional data, you can log up to 500 different Analytics Event types in your app. There is no limit on the total volume of events your app logs. Note that event names are case-sensitive and that logging two events whose names differ only in case will result in two distinct events.


After you have configured the firebase.analytics() instance, you can begin to log events with the logEvent() method. If you're already familiar with Google Analytics, this method is equivalent to using the event command in gtag.js.


To get started, the Analytics SDK defines a number of suggested events that are common among different types of apps, including retail and ecommerce, travel, and gaming apps
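
For example, with the web SDK, logging a suggested event and a custom event looks roughly like this (the event names and parameter values here are just illustrative):

const analytics = firebase.analytics();

// one of the suggested events, with its recommended parameters
analytics.logEvent('select_content', {
  content_type: 'image',
  item_id: 'P12453'
});

// a custom event with arbitrary parameters
analytics.logEvent('goal_completion', { name: 'lever_puzzle' });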


references:

https://firebase.google.com/docs/analytics/events


Material UI How to set different color for typography

The recommend approach is withStyles as shown below.


import React from "react";

import { withStyles } from "@material-ui/core/styles";

import Typography from "@material-ui/core/Typography";


const WhiteTextTypography = withStyles({

  root: {

    color: "#FFFFFF"

  }

})(Typography);


export default function App() {

  return (

    <div className="App" style={{ backgroundColor: "black" }}>

      <WhiteTextTypography variant="h3">

        This text should be white

      </WhiteTextTypography>

    </div>

  );

}



references:


Some of the cool new UI designs

The link below gives a few cool design ideas 

References:

https://uxplanet.org/top-ui-ux-design-inspiration-121-9ab59fd3e1ff

Using joblib to speed up your Python pipelines

Why joblib?

There are several reasons to integrate joblib tools as a part of the ML pipeline. There are major two reasons mentioned on their website to use it. However, I thought to rephrase it again:

Capability to use cache which avoids recomputation of some of the steps

Execute Parallelization to fully utilize all the cores of CPU/GPU.


Beyond this, there are several other reasons why I would recommend joblib:

Can be easily integrated

No specific dependencies

Saves cost and time

Easy to learn


1. Using Cached results

Basically, joblib.Memory stores computed results in a cache directory on disk so they do not have to be recomputed:

import time

import numpy as np

from joblib import Memory


# Simple helper used below (shown here so the snippet is self-contained)

def square_number(no):

    return no ** 2


# Define a location to store cache

location = '~/Desktop/temp/cache_dir'

memory = Memory(location, verbose=0)


result = []


# Function to compute square of a range of numbers:

def get_square_range_cached(start_no, end_no):

    for i in np.arange(start_no, end_no):

        time.sleep(1)

        result.append(square_number(i))

    return result


get_square_range_cached = memory.cache(get_square_range_cached)


start = time.time()

# Getting squares of 1 to 20:

final_result = get_square_range_cached(1, 21)

end = time.time()


# Total time to compute

print('\nThe function took {:.2f} s to compute.'.format(end - start))

print(final_result)



To clear the cache results just do the below 


memory.clear(warn=False)



2. Parallelization


As the name suggests, we can compute in parallel any specified function with even multiple arguments using “joblib.Parallel”. Behind the scenes, when using multiple jobs (if specified), each calculation does not wait for the previous one to complete and can use different processors to get the task done. For better understanding, I have shown how Parallel jobs can be run inside caching.


#Import package

from joblib import Parallel, delayed

from joblib import Memory


location = 'C:/Users/pg021/Desktop/temp/cache_dir'

memory = Memory(location, verbose=0)

# costly_compute(data, column) and the data array are defined earlier in the referenced article

costly_compute_cached = memory.cache(costly_compute)


def data_processing_mean_using_cache(data, column):

    """Compute the mean of a column."""

    return costly_compute_cached(data, column).mean()


start = time.time()

results = Parallel(n_jobs=2)(

    delayed(data_processing_mean_using_cache)(data, col)

    for col in range(data.shape[1]))

stop = time.time()


print('Elapsed time for the entire processing: {:.2f} s'

      .format(stop - start))



Here we can see that time for processing using the Parallel method was reduced by 2x.



3. Dump and Load


We often need to store and load datasets, models, computed results, etc. to and from a location on the computer. Joblib provides dump and load functions that make this easy.


4. Compression methods


Supported ones are:

a. Simple compression
b. Zlib compression
c. LZ4 compression




I find joblib to be a really useful library. I have started integrating them into a lot of my Machine Learning Pipelines and definitely seeing a lot of improvements.


References:

https://towardsdatascience.com/using-joblib-to-speed-up-your-python-pipelines-dd97440c653d




Slide Model power point template

The 6-Item Flower Diagram PowerPoint Template is a circular process cycle depicting flower petals. It contains infographic clipart icons over each segment of the circular diagram. These icons will assist in visualizing the items being represented through a creative 6 steps diagram design. It is a multi-purpose process flow template giving a refreshing look to your business concepts and models







References:

https://slidemodel.com/templates/6-item-flower-diagram-powerpoint-template/


Android 11 - Supported phones

Several smartphone brands, including Xiaomi, Oppo, and Realme, are now fast-tracking to bring Android 11 to their new models.


The new version of the OS brings a variety of new features to the mobile operating system, with the biggest change being the management of conversations by grouping notifications from messaging applications. Android 11 introduced Bubbles, a new feature to help you respond and engage with important conversations without switching back and forth between your current task and the messaging app.

Android 11 also brings features like screen recorder, updated Voice Access, improved performance, and an improved share menu that makes it easier to share content from your phone.

References:

https://www.livemint.com/technology/apps/android-11-is-here-check-if-you-can-install-google-os-on-your-phone-right-now-11600073361107.html


Tuesday, September 15, 2020

ThreeJS what is OrbitControl

Orbit controls allow the camera to orbit around a target.

To use this, as with all files in the /examples directory, you will have to include the file separately in your HTML.


import { OrbitControls } from './threejs/examples/jsm/controls/OrbitControls.js';


var renderer = new THREE.WebGLRenderer();

renderer.setSize(window.innerWidth, window.innerHeight);

document.body.appendChild(renderer.domElement);


var scene = new THREE.Scene();


var camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 1, 10000);

camera.position.set(0, 20, 100);


var controls = new OrbitControls(camera, renderer.domElement);

controls.update();


animate();


function animate() {

  requestAnimationFrame(animate);

  controls.update();

  renderer.render(scene, camera);

}


The constructor accepts multiple parameters.


camera is the camera to be controlled. The camera must not be a child of another object, unless that object is the scene itself.


domElement is the HTML element used for event listeners.


There are multiple properties such as 


.autoRotate : Set to true to automatically rotate around the target. Note that if this is enabled, you must call .update () in your animation loop.

.autoRotateSpeed : How fast to rotate around the target if .autoRotate is true. Default is 2.0, which equates to 30 seconds per rotation at 60fps.

.dampingFactor : The damping inertia used if .enableDamping is set to true. Note that for this to work, you must call .update () in your animation loop.

.domElement : The HTMLDOMElement used to listen for mouse / touch events. This must be passed in the constructor; changing it here will not set up new event listeners.

.enabled  : When set to false, the controls will not respond to user input. Default is true.

.enableDamping : Set to true to enable damping (inertia), which can be used to give a sense of weight to the controls. Default is false. Note that if this is enabled, you must call .update () in your animation loop.

.enableKeys : Enable or disable the use of keyboard controls.

.enablePan: Enable or disable camera panning. Default is true.

.enableRotate: Enable or disable horizontal and vertical rotation of the camera. Default is true. Note that it is possible to disable a single axis by setting the min and max of the polar angle or azimuth angle to the same value, which will cause the vertical or horizontal rotation to be fixed at that value.

.enableZoom : Enable or disable zooming (dollying) of the camera.

.keyPanSpeed : How fast to pan the camera when the keyboard is used. Default is 7.0 pixels per keypress.

.keys : This object contains references to the keycodes for controlling camera panning. Default is the 4 arrow keys.

controls.keys = {

LEFT: 37, //left arrow

UP: 38, // up arrow

RIGHT: 39, // right arrow

BOTTOM: 40 // down arrow

}

.maxAzimuthAngle: How far you can orbit horizontally, upper limit. If set, the interval [ min, max ] must be a sub-interval of [ - 2 PI, 2 PI ], with ( max - min < 2 PI ). Default is Infinity.

.maxDistance : How far you can dolly out ( PerspectiveCamera only ). Default is Infinity.

.maxPolarAngle : How far you can orbit vertically, upper limit. Range is 0 to Math.PI radians, and default is Math.PI.

.maxZoom : How far you can zoom out ( OrthographicCamera only ). Default is Infinity.

.minAzimuthAngle : How far you can orbit horizontally, lower limit. If set, the interval [ min, max ] must be a sub-interval of [ - 2 PI, 2 PI ], with ( max - min < 2 PI ). Default is Infinity.

.minDistance: How far you can dolly in ( PerspectiveCamera only ). Default is 0

.minPolarAngle: How far you can orbit vertically, lower limit. Range is 0 to Math.PI radians, and default is 0.

.minZoom: How far you can zoom in ( OrthographicCamera only ). Default is 0.

.mouseButtons: This object contains references to the mouse actions used by the controls

controls.mouseButtons = {

LEFT: THREE.MOUSE.ROTATE,

MIDDLE: THREE.MOUSE.DOLLY,

RIGHT: THREE.MOUSE.PAN

}


.panSpeed : Float

Speed of panning. Default is 1.

.position0 : Vector3

Used internally by the .saveState and .reset methods.

.rotateSpeed : Float

Speed of rotation. Default is 1.

.screenSpacePanning : Boolean

Defines how the camera's position is translated when panning. If true, the camera pans in screen space. Otherwise, the camera pans in the plane orthogonal to the camera's up direction. Default is true for OrbitControls; false for MapControls.

.target0 : Vector3

Used internally by the .saveState and .reset methods.

.target : Vector3

The focus point of the controls, the .object orbits around this. It can be updated manually at any point to change the focus of the controls.

.touches : Object

This object contains references to the touch actions used by the controls.

controls.touches = {

ONE: THREE.TOUCH.ROTATE,

TWO: THREE.TOUCH.DOLLY_PAN

}

.zoom0 : Float

Used internally by the .saveState and .reset methods.

.zoomSpeed : Float

Speed of zooming / dollying. Default is 1.



Below are the functions: 

.dispose () Remove all the event listeners. 

.getAzimuthalAngle () : Get the current horizontal rotation, in radians.

.getPolarAngle (): Get the current vertical rotation, in radians.

.reset () : Reset the controls to their state from either the last time the .saveState was called, or the initial state.

.saveState : Save the current state of the controls. This can later be recovered with .reset.

.update ()  : Update the controls. Must be called after any manual changes to the camera's transform, or in the update loop if .autoRotate or .enableDamping are set.




References:

https://threejs.org/docs/#examples/en/controls/OrbitControls

Monday, September 14, 2020

Javascript How to implement Sleep

const sleep = (milliseconds) => {

  return new Promise(resolve => setTimeout(resolve, milliseconds))

}

/*Use like so*/

async function timeSensitiveAction(){ //must be an async function

  //do something here

  await sleep(5000) //wait 5 seconds

  //continue on...

}

References:

https://www.codegrepper.com/code-examples/javascript/sleep+in+react+js




iOS Swift get device model number

func modelIdentifier() -> String {

    if let simulatorModelIdentifier = ProcessInfo().environment["SIMULATOR_MODEL_IDENTIFIER"] { return simulatorModelIdentifier }

    var sysinfo = utsname()

    uname(&sysinfo) // ignore return value

    return String(bytes: Data(bytes: &sysinfo.machine, count: Int(_SYS_NAMELEN)), encoding: .ascii)!.trimmingCharacters(in: .controlCharacters)

}

references:

https://stackoverflow.com/questions/26028918/how-to-determine-the-current-iphone-device-model

Material UI - How to do localisation for MuiPicker

import {ru} from 'date-fns/esm/locale'


<MuiPickersUtilsProvider locale={ru} utils={DateFnsUtils}>

          <KeyboardDatePicker

            id="mui-pickers-date"

            label="Appointment Date"

            value={selectedDate}

            onChange={handleDateChange}

            className={classes.textFieldDropdown}

            KeyboardButtonProps={{

              'aria-label': 'change date',

            }}

          />

         </MuiPickersUtilsProvider>

         <TextField

          error={errorStatus.slotID}

          select

          label="Time Slot"

          className={classes.textFieldDropdown}

          value={appointmentFields.slotID}

          onChange={handleChange('slotID')}

        >


references:

https://stackoverflow.com/questions/58677350/how-to-change-the-language-for-keyboarddatepicker-material-ui


Javascript how to save the file to local PC

const handleSaveToPC = jsonData => {

    const fileData = JSON.stringify(jsonData);

    const blob = new Blob([fileData], { type: "text/plain" });

    const url = URL.createObjectURL(blob);

    const link = document.createElement('a');

    link.download = 'filename.json';

    link.href = url;

    link.click();

  }


References: 

https://stackoverflow.com/questions/53449406/write-to-a-text-or-json-file-in-react-with-node

Javascript - Upload files using React JS

This is very good resource for looking into the file operations. 


function onFileInputChange(event) {

    var file = event.target.files[0];

    console.log('File Input change ', file);

    const reader = new FileReader();

    reader.onload = (e) => {

      console.log('reader on load called ', e.target.result);

      handleSaveToPC(e.target.result)

    };

    reader.readAsText(file);

}


<input

        accept="text/*"

        className={classes.input}

        id="contained-button-file"

        multiple

        type="file"

        onChange={onFileInputChange}

      />



<label htmlFor="contained-button-file">

        <Button variant="outlined" component="span" color='primary' className={classes.button}>

          Upload

        </Button>

      </label>


References:

https://developer.mozilla.org/en-US/docs/Web/API/File/Using_files_from_web_applications


iOS 14 new changes:

watchOS 7 Beta 8: the Watch app has some updates.

iOS 14 beta 8 is available to developers via an over-the-air update in the Settings app. As usual, if the update does not immediately appear for download, keep checking as it sometimes takes a while to roll out to all registered developers. The update features the build number 18A5373a for iPhone users and comes in at just over 100MB.


iOS 14 betas have made a variety of changes to the operating system Apple introduced at WWDC in June, including a new Calendar app icon, new widgets for things like TV and Files, updates to the time picker wheel, Music app changes, and much more.


Apple yesterday confirmed its September special event for September 15. At the event, Apple is likely to reveal the final release date for iOS 14 and its other annual software releases.


Track the iOS 14 beta changes in our detailed hands-on videos below:


iOS 14 adds widgets to the home screen of the iPhone and iPad for the first time. Widgets are more data-rich than ever and come in a variety of sizes. Apps move out of the way automatically to make room for the widgets. You access these widgets through the “Widget Gallery,” with multiple different sizing options.


iOS 14 also provides support for picture in picture, which works very similarly to the iPad experience. Meanwhile, Siri has a new interface that does not overtake the entire screen.




A new translate app in iOS 14 is designed for conversations and works completely offline. All you have to do is tap on the microphone icon and the app will translate to your chosen language. There will be 11 languages supported at launch.


iOS 14 also adds a new App Clip feature to easily access applications quickly without downloading the full version from the App Store.



Apple is also updating the Voice Memos app on iPhone and macOS Big Sur. The most exciting feature this year is the new Enhance Recording functionality. The Enhance Recording feature reduces background noise and room reverberation with a single tap. Once you create a recording, you'll see a small icon similar to the Auto adjustment feature in the Photos app. Details on exactly how this feature works are unclear, but Apple is heavily promoting the simplicity of the process.




References:

https://www.youtube.com/watch?v=5luZMdrmTL4

Which all industries get affected by AR/VR

Mainly VR for virtual tours, etc.

Industries use this approach to detect problems early.

References:

https://www.zdnet.com/video/facebook-forcing-oculus-users-to-have-an-account-on-its-platform/

How do a Hex view of a file

On Mac, the utility is "xxd filename | less".

This shows the data pretty well, something like below:

00000000: 7b0a 2020 226e 616d 6522 3a20 2272 722d  {.  "name": "rr-

00000010: 7265 6163 742d 626f 696c 6572 706c 6174  react-boilerplat

00000020: 6522 2c0a 2020 2276 6572 7369 6f6e 223a  e",.  "version":


Note that the hex output is grouped four hex characters at a time, and each pair of hex characters represents one byte.


References:

https://stackoverflow.com/questions/827326/whats-a-good-hex-editor-viewer-for-the-mac

Saturday, September 12, 2020

What is Global In Node JS

Node.js global objects are global in nature and available in all modules. You don't need to include these objects in your application; rather, they can be used directly. These objects are modules, functions, strings, objects, etc. Some of these objects aren't actually in the global scope but in the module scope.

A list of Node.js global objects are given below:

__dirname

__filename

Console

Process

Buffer

setImmediate(callback[, arg][, ...])

setInterval(callback, delay[, arg][, ...])

setTimeout(callback, delay[, arg][, ...])

clearImmediate(immediateObject)

clearInterval(intervalObject)

clearTimeout(timeoutObject)

Node.js __dirname

It is a string. It specifies the name of the directory that currently contains the code.
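
For example, a small sketch using a few of these globals directly, with no require() calls:

console.log(__dirname);   // directory that contains the current module file
console.log(__filename);  // full path of the current module file

setTimeout(() => {
  console.log('printed after one second');
}, 1000);

setImmediate(() => {
  console.log('printed before the timeout above, on the next turn of the event loop');
});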


references:

https://www.javatpoint.com/nodejs-global-objects#:~:text=js%20Global%20Objects-,Node.,functions%2C%20strings%20and%20object%20etc.

Wednesday, September 9, 2020

How much max bandwidth needed for VoIP?

The bandwidth that our VOIP phone service requires depends on the number of concurrent calls you want to make. The table below shows the minimum bandwidth required to make calls from a Phone.com account, as well as recommended speeds for optimal performance.


Number of Concurrent Calls | Minimum Required Bandwidth | Recommended Speed
1 | 100 Kbps Up and Down | 3 Mbps Up and Down
3 | 300 Kbps Up and Down | 3 Mbps Up and Down
5 | 500 Kbps Up and Down | 5 Mbps Up and Down
10 | 1 Mbps Up and Down | 5-10 Mbps Up and Down


The answer is simple and complex. VoIP services use a variety of codecs to compress and decompress voice data, allowing it to travel over the Internet efficiently. Phone.com uses codecs that require approximately 100 kilobits per second (kbps) traveling up from your phone line and down to your phone line for each call. So if you have three people, all on calls at the same time, the minimum requirement is 300 kbps up and 300 kbps down.

In addition, since the Internet “pipe” into your home or business is being used for other functions too—web browsing, sending and receiving email, file transfers, web-based office services, point-of-sale systems, and so on—there are numerous candidates contending for bandwidth.



How to Determine Your Functional Bandwidth


It helps to know how much bandwidth you really have. However, your Internet Service Provider (ISP) will probably only confirm what you signed up for, also known as the advertised “up to” value, as in “up to 50 Mbps” or “up to 150 Mbps.”


The best way to determine your bandwidth, is to run a throughput test using a site like www.speedtest.net. This will give you a snapshot of your current functional bandwidth, but it is important to note that this metric can vary depending on how much bandwidth all of the different applications you are using require at any given point in time. This test also provides variable results depending on the location used for testing.


Keep in mind that your upload speed is usually slower than your download speed, so you need to make sure that the lower number of the upload speed matches what you need. Since most service providers do not guarantee sustained bandwidth besides the up-to value, we recommend adding a 5x to 10x safety margin when estimating bandwidth.


If you know that your ISP can sustain a certain speed, simply multiply the number of expected concurrent calls by 100 kbps. If you deal with an “up to” ISP, a good solution would be to add the safety margin mentioned above so that you can sustain the required bandwidth, even when your Internet service falters.


For example, 10 concurrent users would require 1 Mbps (10 x 100 kbps), and applying the safety margin means you would be smart to allow for 5 to 10 Mbps both up and down. Depending on the other services and applications using your Internet connection and on the capabilities of your router, 3 to 5 Mbps may be sufficient, or you may need to increase your bandwidth. This must be evaluated on a case-by-case basis, as each organization is different.




References:

https://www.phone.com/much-bandwidth-need-voip/

How does lets encrypt work?

A nonprofit Certificate Authority providing TLS certificates to 225 million websites

To enable HTTPS on your website, you need to get a certificate (a type of file) from a Certificate Authority (CA). Let’s Encrypt is a CA. In order to get a certificate for your website’s domain from Let’s Encrypt, you have to demonstrate control over the domain. With Let’s Encrypt, you do this using software that uses the ACME protocol which typically runs on your web host.




To figure out what method will work best for you, you will need to know whether you have shell access (also known as SSH access) to your web host. If you manage your website entirely through a control panel like cPanel, Plesk, or WordPress, there’s a good chance you don’t have shell access. You can ask your hosting provider to be sure.



With Shell Access

We recommend that most people with shell access use the Certbot ACME client. It can automate certificate issuance and installation with no downtime. It also has expert modes for people who don’t want autoconfiguration. It’s easy to use, works on many operating systems, and has great documentation. Visit the Certbot site to get customized instructions for your operating system and web server.


If Certbot does not meet your needs, or you’d like to try something else, there are many more ACME clients to choose from. Once you’ve chosen ACME client software, see the documentation for that client to proceed.


If you’re experimenting with different ACME clients, use our staging environment to avoid hitting rate limits.



references:

https://letsencrypt.org/


Wednesday, September 2, 2020

Material UI Styling

For the sake of simplicity, Material-UI exposes the styling solution used in Material-UI components as the @material-ui/styles package. You can use it, but you don't have to, since Material-UI is also interoperable with all the other major styling solutions.


In previous versions, Material-UI has used LESS, then a custom inline-style solution to write the component styles, but these approaches have proven to be limited. A CSS-in-JS solution overcomes many of those limitations, and unlocks many great features (theme nesting, dynamic styles, self-support, etc.).


Material-UI's styling solution is inspired by many other styling libraries such as styled-components and emotion.


You can expect the same advantages as styled-components.

🚀 It's blazing fast.

🧩 It's extensible via a plugin API.

⚡️ It uses JSS at its core – a high performance JavaScript to CSS compiler which works at runtime and server-side.

📦 Less than 15 KB gzipped; and no bundle size increase if used alongside Material-UI.



There are 3 possible APIs we can use to generate and apply styles, however they all share the same underlying logic.


Hook API 

=======


import React from 'react';

import { makeStyles } from '@material-ui/core/styles';

import Button from '@material-ui/core/Button';


const useStyles = makeStyles({

  root: {

    background: 'linear-gradient(45deg, #FE6B8B 30%, #FF8E53 90%)',

    border: 0,

    borderRadius: 3,

    boxShadow: '0 3px 5px 2px rgba(255, 105, 135, .3)',

    color: 'white',

    height: 48,

    padding: '0 30px',

  },

});


export default function Hook() {

  const classes = useStyles();

  return <Button className={classes.root}>Hook</Button>;

}


Styled components API

=====================


Note: this only applies to the calling syntax – style definitions still use a JSS object. You can also change this behavior, with some limitations.


import React from 'react';

import { styled } from '@material-ui/core/styles';

import Button from '@material-ui/core/Button';


const MyButton = styled(Button)({

  background: 'linear-gradient(45deg, #FE6B8B 30%, #FF8E53 90%)',

  border: 0,

  borderRadius: 3,

  boxShadow: '0 3px 5px 2px rgba(255, 105, 135, .3)',

  color: 'white',

  height: 48,

  padding: '0 30px',

});


export default function StyledComponents() {

  return <MyButton>Styled Components</MyButton>;

}


Higher-order component API

==========================

import React from 'react';

import PropTypes from 'prop-types';

import { withStyles } from '@material-ui/core/styles';

import Button from '@material-ui/core/Button';


const styles = {

  root: {

    background: 'linear-gradient(45deg, #FE6B8B 30%, #FF8E53 90%)',

    border: 0,

    borderRadius: 3,

    boxShadow: '0 3px 5px 2px rgba(255, 105, 135, .3)',

    color: 'white',

    height: 48,

    padding: '0 30px',

  },

};


function HigherOrderComponent(props) {

  const { classes } = props;

  return <Button className={classes.root}>Higher-order component</Button>;

}


HigherOrderComponent.propTypes = {

  classes: PropTypes.object.isRequired,

};


export default withStyles(styles)(HigherOrderComponent);



References:

https://material-ui.com/styles/basics/



What is Higher order component ?

A higher-order component (HOC) is an advanced technique in React for reusing component logic. HOCs are not part of the React API, per se. They are a pattern that emerges from React’s compositional nature.

Concretely, a higher-order component is a function that takes a component and returns a new component.

const EnhancedComponent = higherOrderComponent(WrappedComponent);

Whereas a component transforms props into UI, a higher-order component transforms a component into another component.

HOCs are common in third-party React libraries, such as Redux’s connect and Relay’s createFragmentContainer.

Components are the primary unit of code reuse in React. However, you’ll find that some patterns aren’t a straightforward fit for traditional components.

For example, say you have a CommentList component that subscribes to an external data source to render a list of comments:
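
A sketch of that pattern, roughly along the lines of the withSubscription example in the React docs (DataSource here stands for a hypothetical external data source with add/remove listener methods, and CommentList is the wrapped component):

import React from 'react';

// HOC: takes a component and a data-selection function, returns a new component
function withSubscription(WrappedComponent, selectData) {
  return class extends React.Component {
    constructor(props) {
      super(props);
      this.handleChange = this.handleChange.bind(this);
      this.state = { data: selectData(DataSource, props) };
    }

    componentDidMount() {
      DataSource.addChangeListener(this.handleChange);
    }

    componentWillUnmount() {
      DataSource.removeChangeListener(this.handleChange);
    }

    handleChange() {
      this.setState({ data: selectData(DataSource, this.props) });
    }

    render() {
      // pass the fresh data down, along with any other props
      return <WrappedComponent data={this.state.data} {...this.props} />;
    }
  };
}

const CommentListWithSubscription = withSubscription(
  CommentList,
  (DataSource) => DataSource.getComments()
);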

References:

https://reactjs.org/docs/higher-order-components.html#:~:text=A%20higher%2Dorder%20component%20(HOC,React%20for%20reusing%20component%20logic.&text=They%20are%20a%20pattern%20that,and%20returns%20a%20new%20component.