Sunday, January 31, 2021

How to copy a file from one user to another user on the same Linux machine

sudo cp /home/USER1/FNAME /home/USER2/FNAME && sudo chown USER2:USER2 /home/USER2/FNAME

references:

https://askubuntu.com/questions/551047/copying-files-from-one-user-to-another-in-a-single-machine

What are various API Performance Metrics

Response time — total processing time, latency

Throughput — Requests Per Second, request payloads, maximum operating capacity

Traffic composition — average and peak concurrent users

Database — number of concurrent connections, CPU utilization, read and write IOPS, memory consumption, disk storage requirements

Errors — handling, failure rates


To simulate concurrent users executing the same requests, you can use Newman to run collections in parallel.

You can also tweak your operating system limits to allow a higher number of concurrent TCP sockets. However, your network hardware may become a bottleneck if you want to simulate more users.
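To make the first two metrics concrete, here is a small sketch of how throughput and percentile response time can be derived from raw request timings. The timing numbers and the measurement window below are invented sample data, not figures from the text.

```javascript
// Sketch: deriving throughput and percentile latency from raw request
// timings. The sample durations and window are made-up for illustration.
const durationsMs = [120, 85, 240, 95, 110, 310, 90, 105, 130, 400];
const windowSeconds = 2; // time it took for all requests to complete

// Throughput: requests completed per second over the window.
const throughputRps = durationsMs.length / windowSeconds;

// Percentile response time: sort, then take the value at the p-th percentile.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.min(
    sorted.length - 1,
    Math.ceil((p / 100) * sorted.length) - 1
  );
  return sorted[index];
}

console.log(`Throughput: ${throughputRps} req/s`);           // 5 req/s
console.log(`p95 latency: ${percentile(durationsMs, 95)} ms`); // 400 ms
```

In a real test you would collect durations from Newman's JSON reporter (or your load tool of choice) rather than hard-coding them.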



How AWS Cost Anomaly Detection works

Reduce cost surprises and enhance control without slowing innovation with AWS Cost Anomaly Detection. AWS Cost Anomaly Detection leverages advanced machine learning to identify anomalous spend and its root causes, so you can quickly take action. In three simple steps, you can create your own contextualized monitor and receive alerts when anomalous spend is detected. Let builders build, and let AWS Cost Anomaly Detection monitor your spend and reduce the risk of billing surprises.

Get started by creating AWS Cost Anomaly Detection monitors via the AWS Cost Explorer API, or directly in the Cost Management console. Once you set up your monitor and alert preferences, AWS will notify you with individual alerts or daily/weekly summaries via SNS or email. You can also do your own anomaly analysis in AWS Cost Explorer.

Step 1: Create a cost monitor

The cost monitor creation process allows you to create spend segments and evaluate spend anomalies at your preferred level of granularity. For example: an individual linked account, an individual Cost Category value, or an individual cost allocation tag.

Step 2: Set alert subscription

Once you have created your cost monitor, you can choose your alerting preference by setting a dollar threshold (e.g. only alert on anomalies with impact greater than $1,000). You don't need to define what counts as an anomaly (e.g. a percent or dollar increase); Anomaly Detection does this automatically for you and adjusts over time.

Step 3: Done

Once cost monitors and alert subscriptions are created, you're all set! Anomaly Detection will begin working within 24 hours, and you will be notified when any anomaly meets your alert threshold. You can visit your Anomaly Detection dashboard to monitor activity, including detected anomalies that fall below your alert threshold.
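Steps 1 and 2 map onto the Cost Explorer API operations CreateAnomalyMonitor and CreateAnomalySubscription. As a rough sketch, the request payloads could be built like this; the operation names are real, but the payload shapes should be checked against the current API reference, and the concrete values (names, email address, threshold) are invented for illustration:

```javascript
// Sketch of request payloads for the Cost Explorer anomaly APIs.
// Step 1: a dimensional monitor that segments spend per linked account.
function buildMonitorPayload() {
  return {
    AnomalyMonitor: {
      MonitorName: 'linked-account-monitor', // illustrative name
      MonitorType: 'DIMENSIONAL',
      MonitorDimension: 'LINKED_ACCOUNT',
    },
  };
}

// Step 2: a subscription that alerts above a dollar threshold.
function buildSubscriptionPayload(monitorArn, thresholdDollars) {
  return {
    AnomalySubscription: {
      SubscriptionName: 'daily-summary', // illustrative name
      MonitorArnList: [monitorArn],
      Frequency: 'DAILY',
      Threshold: thresholdDollars, // e.g. only alert above $1,000 impact
      Subscribers: [{ Type: 'EMAIL', Address: 'billing@example.com' }],
    },
  };
}
```

With the AWS SDK, these payloads would be passed to the Cost Explorer client's corresponding createAnomalyMonitor / createAnomalySubscription calls.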


references:

https://console.aws.amazon.com/cost-management/home?#/anomaly-detection/overview?activeTab=history


Why you should set the NODE_ENV variable to production

A common convention in Node.js is that the NODE_ENV environment variable specifies the environment in which an application is running (usually development or production).


For example, with Express, setting NODE_ENV to “production” can improve performance by a factor of 3 according to the documentation. This enables:


Caching of view templates.

Caching of CSS files generated from CSS extensions.

Less verbose error messages.
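Your own code can build on the same convention by branching configuration on NODE_ENV. A minimal sketch (the config keys and values here are placeholders, not from the original post):

```javascript
// Minimal sketch: branching application configuration on NODE_ENV.
// Falls back to "development" when the variable is unset.
const env = process.env.NODE_ENV || 'development';
const isProduction = env === 'production';

const config = {
  // Verbose logging only outside production.
  logLevel: isProduction ? 'warn' : 'debug',
  // Example of a behavior you might enable only in production.
  cacheTemplates: isProduction,
};

console.log(env, config);
```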


references:

https://pm2.io/docs/runtime/best-practices/environment-variables/


Saturday, January 30, 2021

pm2 ecosystem file

Ecosystem process file

Any time you change the ecosystem process file, the environment variables will be updated.


Set Environment

To set default environment variables via ecosystem file, you just need to declare them under the “env:” attribute:



module.exports = {

  apps: [{

    name: "app",

    script: "./app.js",

    env: {

      NODE_ENV: "development"

    },

    env_test: {

      NODE_ENV: "test",

    },

    env_staging: {

      NODE_ENV: "staging",

    },

    env_production: {

      NODE_ENV: "production",

    }

  }]

}


Then start it:


$ pm2 start ecosystem.config.js


As you might have noticed, there is also a section for env_test, env_staging and env_production in the ecosystem file.


For example, to use the env_production variables instead of the default ones, you just need to pass the --env <env_name> option:


$ pm2 start ecosystem.config.js --env production



references:

https://pm2.io/docs/runtime/best-practices/environment-variables/


Thursday, January 28, 2021

What is Web Worker

Web Workers are a simple means for web content to run scripts in background threads. The worker thread can perform tasks without interfering with the user interface. In addition, they can perform I/O using XMLHttpRequest (although the responseXML and channel attributes are always null) or fetch (with no such restrictions). Once created, a worker can send messages to the JavaScript code that created it by posting messages to an event handler specified by that code (and vice versa). This article provides a detailed introduction to using web workers.


A worker is an object created using a constructor (e.g. Worker()) that runs a named JavaScript file — this file contains the code that will run in the worker thread; workers run in another global context that is different from the current window. Thus, using the window shortcut to get the current global scope (instead of self) within a Worker will return an error.


You can run whatever code you like inside the worker thread, with some exceptions. For example, you can't directly manipulate the DOM from inside a worker, or use some default methods and properties of the window object. But you can use a large number of items available under window, including WebSockets, and data storage mechanisms like IndexedDB.


Data is sent between workers and the main thread via a system of messages — both sides send their messages using the postMessage() method, and respond to messages via the onmessage event handler (the message is contained within the Message event's data attribute.) The data is copied rather than shared.


Dedicated Workers 


As mentioned above, a dedicated worker is only accessible by the script that called it. 


Spawning a Dedicated worker 


Creating a new worker is simple. All you need to do is call the Worker() constructor, specifying the URI of a script to execute in the worker thread (main.js):


var myWorker = new Worker('worker.js');


The magic of workers happens via the postMessage() method and the onmessage event handler. When you want to send a message to the worker, you post messages to it like this (main.js):


first.onchange = function() {

  myWorker.postMessage([first.value,second.value]);

  console.log('Message posted to worker');

}


second.onchange = function() {

  myWorker.postMessage([first.value,second.value]);

  console.log('Message posted to worker');

}



So here we have two <input> elements represented by the variables first and second; when the value of either is changed, myWorker.postMessage([first.value,second.value]) is used to send the value inside both to the worker, as an array. You can send pretty much anything you like in the message.

In the worker, we can respond when the message is received by writing an event handler block like this (worker.js):



onmessage = function(e) {

  console.log('Message received from main script');

  var workerResult = 'Result: ' + (e.data[0] * e.data[1]);

  console.log('Posting message back to main script');

  postMessage(workerResult);

}


The onmessage handler allows us to run some code whenever a message is received, with the message itself being available in the message event's data attribute. Here we multiply together the two numbers then use postMessage() again, to post the result back to the main thread.

Back in the main thread, we use onmessage again, to respond to the message sent back from the worker:



myWorker.onmessage = function(e) {

  result.textContent = e.data;

  console.log('Message received from worker');

}


Terminating a Worker 


If you need to immediately terminate a running worker from the main thread, you can do so by calling the worker's terminate method:


myWorker.terminate();




References:

https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers

Friday, January 22, 2021

Google play Install Referrer

You can use the Google Play Store's Install Referrer API to securely retrieve referral content from Google Play. 

Add the following line to the dependencies section of the build.gradle file for your app:

dependencies {

    ...

    implementation 'com.android.installreferrer:installreferrer:2.1'

}

Connecting to Google Play

Before you can use the Play Install Referrer API Library, you must establish a connection to the Play Store app using the following steps:

Call the newBuilder() method to create an instance of InstallReferrerClient class.

Call the startConnection() to establish a connection to Google Play.

The startConnection() method is asynchronous, so you must implement InstallReferrerStateListener to receive a callback after startConnection() completes.

Override the onInstallReferrerServiceDisconnected() method to handle lost connections to Google Play. For example, the Play Install Referrer Library client may lose the connection if the Play Store service is updating in the background. The library client must call the startConnection() method to restart the connection before making further requests.

InstallReferrerClient referrerClient;

referrerClient = InstallReferrerClient.newBuilder(this).build();

referrerClient.startConnection(new InstallReferrerStateListener() {

    @Override

    public void onInstallReferrerSetupFinished(int responseCode) {

        switch (responseCode) {

            case InstallReferrerResponse.OK:

                // Connection established.

                break;

            case InstallReferrerResponse.FEATURE_NOT_SUPPORTED:

                // API not available on the current Play Store app.

                break;

            case InstallReferrerResponse.SERVICE_UNAVAILABLE:

                // Connection couldn't be established.

                break;

        }

    }


    @Override

    public void onInstallReferrerServiceDisconnected() {

        // Try to restart the connection on the next request to

        // Google Play by calling the startConnection() method.

    }

});


The following code demonstrates how you can access the install referrer information:


try {

    ReferrerDetails response = referrerClient.getInstallReferrer();

    String referrerUrl = response.getInstallReferrer();

    long referrerClickTime = response.getReferrerClickTimestampSeconds();

    long appInstallTime = response.getInstallBeginTimestampSeconds();

    boolean instantExperienceLaunched = response.getGooglePlayInstantParam();

} catch (RemoteException e) {

    // getInstallReferrer() can throw if the Play Store connection is lost.

}

Closing service connection

After getting referrer information, call the endConnection() method on your InstallReferrerClient instance to close the connection. Closing the connection will help you avoid leaks and performance problems.

references:

https://developer.android.com/google/play/installreferrer/library#java


Wednesday, January 20, 2021

What is Open CTI

To display CTI functionality in Salesforce, Open CTI uses browsers as clients. With Open CTI, you can make calls from a softphone directly in Salesforce without installing CTI adapters on your machines. After you develop an Open CTI implementation, you can integrate it with Salesforce using Salesforce Call Center.


With Open CTI, you can:

  • Build CTI systems that integrate with Salesforce without the use of CTI adapters.
  • Create customizable softphones (call-control tools) that function as fully integrated parts of Salesforce and the Salesforce console.
  • Provide users with CTI systems that are browser and platform agnostic, for example, CTI for Microsoft® Internet Explorer®, Mozilla® Firefox®, Apple® Safari®, or Google Chrome™ on Mac, Linux, or Windows machines.



Open CTI integrates third-party CTI systems with Salesforce. But do you wonder what came before? Or what the difference is between Open CTI and Lightning Dialer?

What came before Open CTI?

Desktop CTI, also known as the CTI Toolkit, is the predecessor to Open CTI. Desktop CTI required adapters to be installed on each call center user’s machine. With Open CTI, those user-side adapters are a thing of the past.


Desktop CTI is retired and you must migrate to Open CTI. Work with your partners to create an Open CTI implementation.

What about Lightning Dialer?

If you’re confused between Lightning Dialer and Open CTI, don’t be. Lightning Dialer provides a way to provision numbers and make calls directly from Salesforce. However, if you already have a telephony system in place, Open CTI is the way to go since it integrates to that existing system.



References:

https://developer.salesforce.com/docs/atlas.en-us.api_cti.meta/api_cti/sforce_api_cti_intro.htm


Tuesday, January 19, 2021

What are mostly used Module Bundlers

Webpack is a static module bundler for the latest JavaScript applications. It works by building a dependency graph to map every module the project needs. In this way, it creates one or more handy bundles.

One of the main advantages of Webpack is that it is configurable to fit your specific needs.

Its key features are:

  • It generates output based on configurations. You can also define split points to create separate bundles within the project code itself.
  • Its flexibility gives you control over how to treat different assets encountered.
  • Webpack relies on loaders and plug-ins. Loaders operate on a module level, while plug-ins rely on hooks provided.
  • There is a high learning curve, but Webpack’s advantages make this worth it.



Rollup compiles small pieces of JavaScript code into libraries or applications that can be large and complex.

Because it uses a standardized code format with ES modules, you can seamlessly combine useful individual functions from separate libraries.

Its key features are:

  • It can optimize ES modules for faster native loading in the latest browsers.
  • It tree-shakes: unused exports are dropped from the final bundle, keeping output small.
  • It can emit the same code in multiple output formats (ES modules, CommonJS, IIFE/UMD).



FuseBox is a highly customizable front-end development tool that receives frequent updates. It is simple to configure and has powerful features.

It allows you to build an application quickly, with ease of use. Plug-ins allow you to employ anything that the Fusebox core doesn’t handle.

Its key features are:

  • It uses a TypeScript compiler by default along with a powerful cache system.
  • There is zero-configuration code-splitting, for a simple configuration syntax.
  • It supports an integrated task runner for its extensive set of plug-ins.
  • It has a built-in task runner, and the project automatically updates to reflect your changes.
  • ESM dynamic imports are also supported.



Parcel is a speedy, zero-configuration web app bundler. It uses worker processes for multicore compilation. It also has a filesystem cache for fast rebuilds.

With its simple, out-of-the-box operability, it improves performance and minimizes hassles during configuration.


Its key features are:

  • You get support for JS, CSS, HTML, file assets, and more without plug-ins.
  • With Babel, PostCSS, and PostHTML, code is automatically transformed.
  • It splits output bundles, so you only load what is needed initially.
  • It highlights code frames when it encounters errors, so you can pinpoint problems.


Browserify lets you bundle dependencies in the browser. You can write code that uses ‘require’ in the same way you would use it in Node.

It is simple, powerful, and easy to install. Its advantages are due to the ability to extend the Node ecosystem. It is flexible as it can be extended via plug-ins.

Its key features are:

  • It minimizes many pre-processing tasks with a transformative system.
  • It solves the problems of asynchronous loading.
  • It allows you to participate in the vast and growing NPM ecosystem.



References:

https://www.uplers.com/blog/5-best-task-runner-module-bundler-front-end-development-tools/

What is a Javascript Task Runner

In one word: automation. The less work you have to do when performing repetitive tasks like minification, compilation, unit testing, linting, etc, the easier your job becomes. After you've configured it through a Gruntfile, a task runner can do most of that mundane work for you—and your team—with basically zero effort.


The Grunt ecosystem is huge and it's growing every day. With literally hundreds of plugins to choose from, you can use Grunt to automate just about anything with a minimum of effort. If someone hasn't already built what you need, authoring and publishing your own Grunt plugin to npm is a breeze. 


Many of the tasks you need are already available as Grunt Plugins, and new plugins are published every day. 


Gulp lets you create efficient pipelines by taking advantage of the flexibility of JavaScript. It is supple, efficient, and provides you with speed and accuracy.


Gulp also has many community plug-ins, and there’s a good chance the process you need is already easily available.


Gulp’s key features are:

  • Aspects such as file watching are built-in.
  • Most plug-ins are simple and designed to do a single job.
  • It uses the JavaScript configuration code, which is simple and flexible.
  • It uses Node.js, so it can be faster.



Yarn has the reputation of being quick, secure, and reliable. In essence, it lets you use and share JavaScript code (packages) with developers from all over the globe.


You can report issues or add to development, too. When the problem is fixed, you can use Yarn to keep it all up to date.

Its key features are:

  • It caches every package it downloads, so it never needs to download the same package again and can work offline.
  • A lockfile makes installs deterministic: the same dependencies are installed the same way on every machine.
  • It verifies the integrity of every installed package with checksums before its code is executed.
  • It parallelizes downloads, which makes installs fast.



RequireJS is a JavaScript file and module loader optimized for in-browser use, but it can also be used in other JavaScript environments. It manages the dependencies between files and improves the speed and quality of code.


It is also stable and provides support for plug-ins. It can easily load more than one JavaScript file at a time.

The key features of RequireJS are:

  • It combines and streamlines modules into one script for optimal performance.
  • In the case of large-sized applications, it can reduce code complexity.
  • It can collect several JavaScript files from separate modules when compiling.
  • Debugging is simpler because it loads files from plain script tags.



Brunch focuses on producing a small number of deployable files from a large number of separate development trees. This front-end development tool provides a smooth and quick working experience.


Brunch works across frameworks, libraries, and programming languages, which makes it very useful and flexible.

Its key features are:

  • The commands are simple and easy to execute.
  • There is support for node programs.
  • You get source maps from the start.
  • It can create fast zero-builds as well as incremental builds.




References:

https://gruntjs.com/

https://gulpjs.com/

When do we choose one export type vs another

Once you want to pull the logic into another file, you have to decide how you want to import it. There are a few options for how you want to import it:


ES6 Imports - if you want to use import AnimalApi from 'animal-api'

animal-api.js

export default {

    getDog: () => ....

    getCat: () => ....

    getGoat: () => ....

}


ES6 Destructured Import - if you want to use import { getDog, getCat, getGoat } from 'animal-api'


animal-api.js

export const getCat = () => ....

export const getDog = () => ....

export const getGoat = () => ....



CommonJS - if you want to use const AnimalApi = require('animal-api')

animal-api.js


module.exports = {

    getDog, getCat, getGoat

}


When would you choose one over the other?

If your app only needs to work in a browser, and only in the context of React (or Angular2+ or environment that uses ES6 modules), then you're fine with using an ES6 import.


If your lib is meant to be used in the browser, and you need to include it in a vanilla JS HTML app, you need to use a bundler like webpack to bundle your app as a lib.


If you use webpack and take advantage of code splitting and tree shaking, you can use ES6 Destructured imports. What this means is rather than include all of lodash in your app, you can only include the functions you want and you'll have a smaller built app.


If you're writing an app or a library that needs to run in BOTH the browser, and in node, then you'll need to produce a few different versions of your library: one meant for the browser (as a script tag), one as an es6 module, and one for using in node.
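For that last case, the different builds are typically wired up through entry-point fields in package.json, so each consumer picks the right artifact automatically. A sketch (the file names and paths are illustrative):

```json
{
  "name": "animal-api",
  "main": "dist/animal-api.cjs.js",
  "module": "dist/animal-api.esm.js",
  "browser": "dist/animal-api.umd.js"
}
```

Node and older bundlers use "main" (CommonJS), modern bundlers prefer "module" (ES module, enabling tree shaking), and "browser" points at a script-tag-friendly UMD build.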



References:

https://www.intricatecloud.io/2020/02/creating-a-simple-npm-library-to-use-in-and-out-of-the-browser/


Monday, January 18, 2021

Difference between Node JS and ES6 modules

The modules used in Node.js follow a module specification known as the CommonJS specification.

The recent updates to the JavaScript programming language, in the form of ES6, specify changes to the language, adding things like new class syntax and a module system. This module system is different from Node.js modules. A module in ES6 looks like the following:


// book.js

const favoriteBook = {

    title: "The Guards",

    author: "Ken Bruen"

}


// a Book class using ES6 class syntax

class Book {

    constructor(title, author) {

        this.title = title;

        this.author = author;

    }


    describeBook() {

        let description = this.title + " by " + this.author + ".";

        return description;

    }

}


// exporting looks different from Node.js but is almost as simple

export {favoriteBook, Book};


To import this module, we'd use the ES6 import functionality, as follows.


// library.js


// import the book module

import {favoriteBook, Book} from 'book';


// create some books and get their descriptions

let booksILike = [

    new Book("Under The Dome", "Stephen King"),

    new Book("Julius Caesar", "William Shakespeare")

];


console.log("My favorite book is " + favoriteBook.title + ".");

console.log("I also like " + booksILike[0].describeBook() + " and " + booksILike[1].describeBook());


ES6 modules look almost as simple as the modules we have used in Node.js, but they are incompatible with Node.js modules. This has to do with the way modules are loaded differently between the two formats. If you use a compiler like Babel, you can mix and match module formats. If you intend to code on the server alone with Node.js, however, you can stick to the module format for Node.js which we covered earlier.




References:

https://stackabuse.com/how-to-use-module-exports-in-node-js/
