Thursday, October 29, 2020

ionic common set of commands



build 

Build web assets and prepare your app for any platform targets



ionic capacitor add

add a native platform to your Ionic project


ionic capacitor add <platform> [options]

The supported platforms are

ios, android, or electron



ionic capacitor build

Build an Ionic project for a given platform


ionic capacitor build <platform> [options]



ionic capacitor build will do the following:


Perform ionic build

Copy web assets into the specified native platform

Open the IDE for your native project (Xcode for iOS, Android Studio for Android)

Once the web assets and configuration are copied into your native project, you can build your app using the native IDE. Unfortunately, programmatically building the native project is not yet supported.



ionic capacitor copy

Copy web assets to native platforms


ionic capacitor copy [<platform>] [options]


ionic capacitor copy will do the following:


Perform an Ionic build, which compiles web assets

Copy web assets to Capacitor native platform(s)



ionic capacitor open


ionic capacitor open <platform> [options]


ionic capacitor open will do the following:


Open the IDE for your native project (Xcode for iOS, Android Studio for Android)



ionic capacitor run


Run an Ionic project on a connected device


ionic capacitor run will do the following:


Perform ionic build (or run the dev server from ionic serve with the --livereload option)

Copy web assets into the specified native platform

Open the IDE for your native project (Xcode for iOS, Android Studio for Android)


When using the --livereload option and you need to serve to your LAN, a device, or an emulator, also use the --external option. Otherwise, the web view tries to access localhost.


Once the web assets and configuration are copied into your native project, the app can run on devices and emulators/simulators using the native IDE. Unfortunately, programmatically building and launching the native project is not yet supported.


ionic capacitor run

ionic capacitor run android

ionic capacitor run android -l --external

ionic capacitor run ios --livereload --external

ionic capacitor run ios --livereload-url=http://localhost:8100



ionic capacitor sync
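
ionic capacitor sync [<platform>] [options]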


ionic capacitor sync will do the following:

Perform an Ionic build, which compiles web assets

Copy web assets to Capacitor native platform(s)

Update Capacitor native platform(s) and dependencies

Install any discovered Capacitor or Cordova plugins




ionic capacitor update

Update Capacitor native platforms, install Capacitor/Cordova plugins


ionic capacitor update [<platform>] [options]


ionic capacitor update will do the following:


Update Capacitor native platform(s) and dependencies

Install any discovered Capacitor or Cordova plugins



ionic completion

Enables tab-completion for Ionic CLI commands.


ionic completion [options]


This command is experimental and only works for Z shell (zsh) on non-Windows platforms.


To enable completions for the Ionic CLI, you can add the completion code that this command prints to your ~/.zshrc (or any other file loaded with your shell). See the examples.


ionic completion

ionic completion >> ~/.zshrc



ionic config get

This command reads and prints configuration values from the project's ./ionic.config.json file. It can also operate on the global CLI configuration (~/.ionic/config.json) using the --global option.


ionic config get

ionic config get id

ionic config get --global user.email

ionic config get -g npmClient



references

https://ionicframework.com/docs/cli/commands/capacitor-add

What is ionic live reload

When active, Live Reload will reload the browser or Web View when changes in the app are detected. This is particularly useful for developing using hardware devices.


Live Reload is a conflated term. With ionic serve, Live Reload just refers to reloading the browser when changes are made. Live Reload can also be used with Capacitor and Cordova to provide the same experience on virtual and hardware devices, which eliminates the need for re-deploying a native binary.


To use Live Reload with Capacitor, make sure you're either using a virtual device or a hardware device connected to the same Wi-Fi network as your computer. Then, you'll need to specify that you want to use an external address for the dev server using the --external flag.


ionic capacitor run ios -l --external

ionic capacitor run android -l --external



For Android devices, the Ionic CLI will automatically forward the dev server port. This means you can use a localhost address and it will refer to your computer when loaded in the Web View, not the device.


ionic cordova run android -l




references:

https://ionicframework.com/docs/cli/livereload


What are Multi-app Projects

The Ionic CLI supports a multi-app configuration setup, which involves multiple Ionic apps and shared code within a single repository, or monorepo.


In a multi-app project, apps share a single ionic.config.json file at the root of the repository instead of each app having their own. The multi-app config file contains the configuration for each app by nesting configuration objects in a projects object. A default app can be specified using defaultProject.



{

  "defaultProject": "myApp",

  "projects": {

    "myApp": {

      "name": "My App",

      "integrations": {},

      "type": "angular",

      "root": "apps/myApp"

    },

    "myOtherApp": {

      "name": "My Other App",

      "integrations": {},

      "type": "angular",

      "root": "apps/myOtherApp"

    }

  }

}




references:

https://ionicframework.com/docs/cli/configuration


ionic capacitor make builds

Build an Ionic project for a given platform

ionic capacitor build <platform> [options]

ionic capacitor build will do the following:


Perform ionic build

Copy web assets into the specified native platform

Open the IDE for your native project (Xcode for iOS, Android Studio for Android)


Once the web assets and configuration are copied into your native project, you can build your app using the native IDE. Unfortunately, programmatically building the native project is not yet supported.


ionic capacitor build

ionic capacitor build android

ionic capacitor build ios



https://ionicframework.com/docs/cli/commands/capacitor-build


Neo4j - Import Data from a CSV File using Cypher

 You can import data from a CSV (Comma Separated Values) file into a Neo4j database. To do this, use the LOAD CSV clause.


LOAD CSV FROM 'https://www.quackit.com/neo4j/tutorial/genres.csv' AS line

CREATE (:Genre { GenreId: line[0], Name: line[1]})



Import a CSV file containing Headers


The previous CSV file didn't contain any headers. If the CSV file contains headers, you can use WITH HEADERS.


Using this method also allows you to reference each field by their column/header name.


We have another CSV file, this time with headers. This file contains a list of album tracks.


Again, this one's not a large file — it contains a list of 32 tracks, so it will create 32 nodes (and 96 properties).


This file is also stored on Quackit.com, so you can run this code from your Neo4j browser and it should import directly into your database (assuming you are connected to the Internet).


You can also download the file here: tracks.csv



LOAD CSV WITH HEADERS FROM 'https://www.quackit.com/neo4j/tutorial/tracks.csv' AS line

CREATE (:Track { TrackId: line.Id, Name: line.Track, Length: line.Length})



Importing Large Files


If you're going to import a file with a lot of data, the PERIODIC COMMIT clause can be handy.


Using PERIODIC COMMIT instructs Neo4j to commit the data after a certain number of rows. This reduces the memory overhead of the transaction state.


The default is 1000 rows, so the data will be committed every thousand rows.


To use PERIODIC COMMIT just insert USING PERIODIC COMMIT at the beginning of the statement (before LOAD CSV)


Here's an example:



USING PERIODIC COMMIT

LOAD CSV WITH HEADERS FROM 'https://www.quackit.com/neo4j/tutorial/tracks.csv' AS line

CREATE (:Track { TrackId: line.Id, Name: line.Track, Length: line.Length})




References:

https://www.quackit.com/neo4j/tutorial/neo4j_import_data_from_csv_file_using_cypher.cfm


Neo4j - Selecting data with MATCH using Cypher

MATCH (p:Person)

WHERE p.Name = "Devin Townsend"

RETURN p


The WHERE clause works the same way as SQL's WHERE clause, in that it allows you to narrow down the results by providing extra criteria.


However, you can achieve the same result without using a WHERE clause. You can also search for a node by providing the same notation you used to create the node.


The following code provides the same results as the above statement:



MATCH (p:Person {Name: "Devin Townsend"})

RETURN p



Relationships


You can also traverse relationships with the MATCH statement. In fact, this is one of the things Neo4j is really good at.


For example, if we wanted to find out which artist released the album called Heavy as a Really Heavy Thing, we could use the following query:



MATCH (a:Artist)-[:RELEASED]->(b:Album)

WHERE b.Name = "Heavy as a Really Heavy Thing" 

RETURN a



You can see that the pattern we use in the MATCH statement is almost self-explanatory. It matches all artists that released an album that had a name of Heavy as a Really Heavy Thing.


We use variables (i.e. a and b) so that we can refer to them later in the query. We didn't provide any variables for the relationship, as we didn't need to refer to the relationship later in the query.


You might also notice that the first line uses the same pattern that we used to create the relationship in the first place. This highlights the simplicity of the Cypher language. We can use the same patterns in different contexts (i.e. to create data and to retrieve data).



Return all Nodes


You can return all nodes in the database simply by omitting any filtering details. Therefore, the following query will return all nodes in the database:



MATCH (n) RETURN n



Limit the Results


Use LIMIT to limit the number of records in the output. It's a good idea to use this when you're not sure how big the result set is going to be.


So we could simply append LIMIT 5 to the previous statement to limit the output to 5 records:
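
MATCH (n) RETURN n LIMIT 5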




References:

https://www.quackit.com/neo4j/tutorial/neo4j_select_data_with_match_using_cypher.cfm


Neo4j - Create a Constraint using Cypher

A constraint allows you to place restrictions over the data that can be entered against a node or a relationship.


Constraints help enforce data integrity, because they prevent users from entering the wrong kind of data. If someone tries to enter the wrong kind of data when a constraint has been applied, they will receive an error message.


Constraint Types


In Neo4j, you can create uniqueness constraints and property existence constraints.


Uniqueness Constraint

Specifies that the property must contain a unique value (i.e. no two nodes with an Artist label can share a value for the Name property.)

Property Existence Constraint

Ensures that a property exists for all nodes with a specific label or for all relationships with a specific type. Property existence constraints are only available in the Neo4j Enterprise Edition.


Create a Uniqueness Constraint


To create a uniqueness constraint in Neo4j, use the CREATE CONSTRAINT ON statement. Like this:



CREATE CONSTRAINT ON (a:Artist) ASSERT a.Name IS UNIQUE

In the above example, we create a uniqueness constraint on the Name property of all nodes with the Artist label.


When the statement succeeds, a confirmation message is displayed.




View the Constraint


Constraints (and indexes) become part of the (optional) database schema.


We can view the constraint we just created by using the :schema command. Like this:


:schema



References:

https://www.quackit.com/neo4j/tutorial/neo4j_create_a_constraint_using_cypher.cfm

Neo4j - Create an Index using Cypher

In Neo4j, you can create an index over a property on any node that has been given a label. Once you create an index, Neo4j will manage it and keep it up to date whenever the database is changed.


To create an index, use the CREATE INDEX ON statement. Like this:


CREATE INDEX ON :Album(Name)


Index Hints


Once an index has been created, it will automatically be used when you perform relevant queries.


However, Neo4j also allows you to enforce one or more indexes with a hint. You can create an index hint by including USING INDEX ... in your query.


So we could enforce the above index as follows:



MATCH (a:Album {Name: "Somewhere in Time"}) 

USING INDEX a:Album(Name) 

RETURN a




References:

https://www.quackit.com/neo4j/tutorial/neo4j_create_an_index_using_cypher.cfm


Neo4j - Create a Relationship using Cypher

 MATCH (a:Artist),(b:Album)

WHERE a.Name = "Strapping Young Lad" AND b.Name = "Heavy as a Really Heavy Thing"

CREATE (a)-[r:RELEASED]->(b)

RETURN r



First, we use a MATCH statement to find the two nodes that we want to create the relationship between.


There could be many nodes with an Artist or Album label so we narrow it down to just those nodes we're interested in. In this case, we use a property value to filter it down. We use the Name property that we'd previously assigned to each node.


Then there's the actual CREATE statement. This is what creates the relationship. In this case, it references the two nodes by the variable name (i.e. a and b) that we gave them in the first line. The relationship is established by using an ASCII-code pattern, with an arrow indicating the direction of the relationship: (a)-[r:RELEASED]->(b).


We give the relationship a variable name of r and give the relationship a type of RELEASED (as in "this band released this album"). The relationship's type is analogous to a node's label.



The above example is a very simple example of a relationship. One of the things that Neo4j is really good at, is handling many interconnected relationships.


Let's build on the relationship that we just established, so that we can see how easy it is to continue creating more nodes and relationships between them. So we will create one more node and add two more relationships.


We'll end up with the following graph:



MATCH (a:Artist),(b:Album),(p:Person)

WHERE a.Name = "Strapping Young Lad" AND b.Name = "Heavy as a Really Heavy Thing" AND p.Name = "Devin Townsend" 

CREATE (p)-[pr:PRODUCED]->(b), (p)-[pf:PERFORMED_ON]->(b), (p)-[pl:PLAYS_IN]->(a)

RETURN a,b,p




References:

https://www.quackit.com/neo4j/tutorial/neo4j_create_a_relationship_using_cypher.cfm

Neo4j - Create a Node using Cypher

CREATE (a:Artist { Name : "Strapping Young Lad" })

This Cypher statement creates a node with an Artist label. The node has a property called Name, and the value of that property is Strapping Young Lad.

The a prefix is a variable name that we provide. We could've called this anything. This variable can be useful if we need to refer to it later in the statement (which we don't in this particular case). Note that a variable is restricted to a single statement.


Displaying the Node

The CREATE statement creates the node but it doesn't display the node.

To display the node, you need to follow it up with a RETURN statement.

Let's create another node. This time it will be the name of an album. But this time we'll follow it up with a RETURN statement.


CREATE (b:Album { Name : "Heavy as a Really Heavy Thing", Released : "1995" })

RETURN b

Creating Multiple Nodes

You can create multiple nodes at once by separating each node with a comma:

CREATE (a:Album { Name: "Killers"}), (b:Album { Name: "Fear of the Dark"}) 

RETURN a,b


References:

https://www.quackit.com/neo4j/tutorial/neo4j_create_a_node_using_cypher.cfm

  



Neo4j Query Language - Cypher

MATCH (p:Person { name:"Homer Flinstone" })

RETURN p


This Cypher statement returns a "Person" node where the name property is "Homer Flinstone".



Neo4j doesn't store its data in tables like the relational database model. It's all in nodes and relationships. So the Cypher query above is querying nodes, their labels, and their properties. The SQL example on the other hand, is querying tables, rows, and columns.



Neo4j is a NoSQL DBMS, in that it doesn't use the relational model and it doesn't use SQL.



ASCII-Art Syntax


Cypher uses ASCII-Art to represent patterns. This is a handy thing to remember when first learning the language. If you forget how to write something, just visualise how the graph will look and it should help.


(a)-[:KNOWS]->(b)


Nodes are represented by parentheses, which look like circles. Like this: (node)

Relationships are represented by arrows. Like this: ->

Information about a relationship can be inserted between square brackets. Like this: [:KNOWS]



Defining the Data

Nodes usually have labels. Examples could include "Person", "User", "Actor", "Employee", "Customer".

Nodes usually have properties. Properties provide extra information about the node. Examples could include "Name", "Age", "Born", etc

Relationships can also have properties.

Relationships usually have a type (this is basically like a node's label). Examples could include "KNOWS", "LIKES", "WORKS_FOR", "PURCHASED", etc.



So looking at the above example again:



MATCH (p:Person { name:"Homer Flinstone" })

RETURN p

We can see that:


The node is surrounded by parentheses ().

Person is the node's label.

name is a property of the node.




References:

https://www.quackit.com/neo4j/tutorial/neo4j_query_language_cypher.cfm




Texture Mapping - three js

 The geometry defines the mesh’s shape, and the material defines various surface properties of the mesh, in particular, how it reacts to light. The geometry and the material, along with any light and shadows affecting the mesh, control the appearance of the mesh when we render the scene.
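
To make this concrete, here is a minimal sketch (not from the book; the texture path and sizes are made-up placeholders) of how a texture feeds into a material, and how the material plus a geometry make up a mesh:

import { BoxBufferGeometry, Mesh, MeshStandardMaterial, TextureLoader } from 'three';

// load a texture (placeholder path)
const texture = new TextureLoader().load('textures/uv-test-bw.png');

// the geometry defines the shape of the mesh
const geometry = new BoxBufferGeometry(2, 2, 2);

// the material defines the surface; here the texture is used as the color map,
// and MeshStandardMaterial reacts to the lights in the scene
const material = new MeshStandardMaterial({ map: texture });

// geometry + material = mesh, which can then be added to the scene
const mesh = new Mesh(geometry, material);
// scene.add(mesh);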


References 

https://discoverthreejs.com/book/first-steps/textures-intro/

 

Javascript - writing generic function for search

We can achieve this using a helper function:


function findInDict(dict, predicate) {
    // walk the dictionary's own properties and return the first key for which
    // the predicate returns true, or null if nothing matches
    for (var prop in dict) {
        if (dict.hasOwnProperty(prop) && predicate(dict[prop], prop, dict)) {
            return prop;
        }
    }
    return null;
}



Then perform the search:


// find property on main dictionary

var prop = findInDict(dict, function(value, name) {

    // name is property name

    // value is property value

    return value[0].socketid == 'bVLmrV8I9JsSyON7AAAA';

});


if (prop === null) {

    // prop doesn't exist, so create one?

}



References:

https://stackoverflow.com/questions/24605893/searching-an-id-in-an-array-of-dictionary-in-javascript


Sails Asset folder - Something that did not know!

 Assets refer to static files (js, css, images, etc.) on your server that you want to make accessible to the outside world. In Sails, these files are placed in the assets/ folder. When you lift your app, add files to your assets/ folder, or change existing assets, Sails' built-in asset pipeline processes and syncs those files to a hidden folder (.tmp/public/).


This intermediate step (moving files from assets/ into .tmp/public/) allows Sails to pre-process assets for use on the client - things like LESS, CoffeeScript, SASS, spritesheets, Jade templates, etc.


The contents of this .tmp/public folder are what Sails actually serves at runtime. This is roughly equivalent to the "public" folder in express, or the www/ folder you might be familiar with from other web servers like Apache.


Static middleware


Behind the scenes, Sails uses the serve-static middleware from Express to serve your assets. You can configure this middleware (e.g. to change cache settings) in /config/http.js.


index.html


Like most web servers, Sails honors the index.html convention. For instance, if you create assets/foo.html in a new Sails project, it will be accessible at http://localhost:1337/foo.html. But if you create assets/foo/index.html, it will be available at both http://localhost:1337/foo/index.html and http://localhost:1337/foo.


Precedence


It is important to note that the static middleware is installed after the Sails router. So if you define a custom route, but also have a file in your assets directory with a conflicting path, the custom route will intercept the request before it reaches the static middleware. For example, if you create assets/index.html, with no routes defined in your config/routes.js file, it will be served as your home page. But if you define a custom route, '/': 'FooController.bar', that route will take precedence.
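
As a minimal sketch of that last point (FooController.bar is just the example action from the paragraph above), the custom route would live in config/routes.js:

// config/routes.js
module.exports.routes = {
  '/': 'FooController.bar'   // this custom route takes precedence over assets/index.html
};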




References:

https://sailsjs.com/documentation/concepts/assets


Tuesday, October 27, 2020

What is Network Intelligence

Network Intelligence is a capacity planning system used by Communication Service Providers (CSPs) for communications networks. It reads data from your network inventory systems or other network data sources and processes that data to create useful views of your network data. Network Intelligence allows network planners to view network reports and maps, plan network build, and identify potential capacity stress points within the network in advance. After loading inventory data in Network Intelligence, you manage your network assets, forecast traffic demand, plan network capacity, optimize network traffic, and perform network cost reduction and consolidation.


You can load inventory data whenever you require, and it can also be done on a predefined schedule. When you first deploy Network Intelligence, you load all the data. Thereafter, you typically run automated loads at regular intervals (for example, every night during network downtime) that load only new data or changes to existing data, such as updates or deletions.


Network Intelligence can load and process cost data for network components that you can use to create costing plans for proposed trail routing. Within Network Intelligence, the data capture platform is displayed and the principal functional areas are listed, including capacity forecasting and trail routing. Data outputs include capacity planning, financial modeling, and reporting modules.


The core module provides visualization, utilization, and trending reporting on all network entity types from topologies, networks, equipment, equipment holders, and links, down to individual cards and trails. By measuring and reporting the utilization of all network elements, and examining existing capacity utilization black spots, the core module suggests the re-routing of inefficient trails, and generates comprehensive outage information.

About Service Demand Forecasting

Forecasting is the creation of optimal network build plans by network planning engineers. Network Intelligence analyzes projected service demands versus the current network capacity. The resultant forecast is defined as a collection of service demands with expected future trail growth counts for one, or more, future time periods. Each individual network point-to-point service demand consists of a route with a quantity of trails that require routing. A service demand can also have a customer. The service demands are used to automatically configure routes for multiple trails over the existing, and new planned network. The outputs of the service demand are:


Network build plan: itemizing new build requirements

Network financial budget: itemizing cost line items

Network impact: itemizing the effect of the forecast plan on network capacity

Forecasting with Network Intelligence offers several benefits:

Providing operators with the confidence to predict, and minimize, network investment requirements.

Estimating future sales forecasts by internal business units, or sales forecasts derived by Network Intelligence.

Calculating network point-to-point service demands from the plan. Individual route rules can be adjusted, and the plan incrementally rerun.

Using service demands to configure proposed routes over existing, or planned network topologies.


Calculating the amount of new network build required, and generating the cost. New network is built only when, and where, it is needed. The cost of the new build required for a sales bid can be determined. Network capacity for the bid can be reserved.


Creating a network investment blueprint to support the business in the future. Services or individual routes can be prioritized, to ensure that premium services and customers are catered for first.

Providing exhaustion analysis (trending) after implementation of planned demands. Forecasts can be continually compared to the actual take-up in the network to help improve future plan accuracy.

references:

https://docs.oracle.com/cd/E18457_01/doc.722/e17891/net_intel_concepts_intro.htm#autoId0

Monday, October 26, 2020

Getting started with Ionic

Can either do via app Wizard or CLI 

Install Node.js, then install the latest Ionic command-line tools in your terminal. Follow the Android and iOS platform guides to install their required tools.

npm install -g @ionic/cli

Start an app

Create an app using one of our ready-made app templates, or a blank one to start fresh

ionic start myApp tabs

Run your app

Much of your app can be built in the browser with ionic serve. We suggest starting with this workflow.

cd myApp

ionic serve

references:

https://ionicframework.com/getting-started


What is Multilingual WordPress Website?

A multilingual WordPress website serves the same content in multiple languages. It can automatically redirect users to a language based on their region, or users can select their preferred language using a dropdown link.

There are a few different approaches used to create a multilingual website:

The first approach allows you to manually translate all the content into languages of your choice with the help of human translators.

The second method does not actually create a multilingual site but uses machine translations of your existing content by using auto-translate services.

However, Google Translate has stopped supporting new accounts for website translation. The other options are either not-free or not very good in quality.

It goes without saying that manually translating your content is a much better approach. This allows you to maintain quality throughout your website. You can translate the content yourself or hire professionals to do that.

references

https://www.wpbeginner.com/beginners-guide/how-to-easily-create-a-multilingual-wordpress-site/

What is Ionic-React WooCommerce

Below are the objectives

This ensures high sales and on-time delivery of the products.

This reduces the time of transaction and payment.

It allows visitors across certain boundaries to see and order the products.

The ease of ordering and payment enhances the conversion rate.

Extended options and ease throughout the transaction render more confidence and satisfaction to the customer.

With just one single compilation you can create mobile apps for your WooCommerce website and publish the apps on the Play Store and App Store. It accommodates a massive number of very famous and essential plugins like:


WPML (WordPress Multi Language)

WordPress Multi-Vendor and Dokan

WC Points & Rewards

Multi-Currency

AfterShip

OneSignal

You can create an app for any type of store. Ionic-React WooCommerce supports all types of products.

As payment gateways are a major consideration, you can integrate all gateways that are compatible with WordPress WooCommerce thanks to the web view facility.

When it comes to shipping gateways, it allows all the shipping methods.

Social login option through Facebook following just a few simple steps.

Geofencing is another highly demanded and dominating feature that can be activated by just entering the latitude and longitude of the location.

references:

https://codecanyon.net/item/ionic-react-woocommerce-universal-full-mobile-app-solution-for-ios-android-wordpress-plugins/27863587



What is One Signal

OneSignal is the fastest and most reliable service to send push notifications, in-app messages, and emails to your users on mobile and web, including content management platforms like WordPress and Shopify. In our documentation you can discover resources and training to implement OneSignal's SDKs, learn how to leverage OneSignal's powerful API, and find best practices for sending messages to increase your user engagement.


Supports 


Web Push

Mobile Push

Email

In-App Messages

Integration 

REST API


references:

https://documentation.onesignal.com/docs


What is Capacitor JS

Capacitor is a cross-platform native runtime that makes it easy to build web apps that run natively on iOS, Android, and the web. Representing the next evolution of Hybrid apps, it provides a modern native container approach for teams who want to build web-first apps with full access to native SDKs when they need it.


Introduction

Capacitor provides a consistent, web-focused set of APIs that enable an app to stay as close to web standards as possible, while accessing rich native device features on platforms that support them. Adding native functionality is easy with a simple Plugin API for Swift on iOS, Java on Android, and JavaScript for the web.
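
For illustration, a minimal sketch of calling one of those APIs from JavaScript (this assumes the 2020-era Capacitor 2 Plugins API; newer Capacitor versions import each plugin from its own package):

import { Plugins } from '@capacitor/core';

const { Geolocation } = Plugins;

async function printCurrentPosition() {
  // the same JavaScript call runs in the browser, on iOS, and on Android;
  // on native platforms Capacitor bridges it to the native location APIs
  const position = await Geolocation.getCurrentPosition();
  console.log('lat/lng:', position.coords.latitude, position.coords.longitude);
}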


Capacitor is a spiritual successor to Apache Cordova and Adobe PhoneGap, with inspiration from other popular cross-platform tools like React Native and Turbolinks, but focused entirely on enabling modern web apps to run on all major platforms with ease. Capacitor is backward-compatible with many existing Cordova plugins.




references:

https://capacitorjs.com/docs


Gmail from node mailer

Below are the steps

1. Google Cloud Platform Setup

Go to Google Cloud and create a new project.

Search for “APIs & Services”

Click on “Credentials” > Click “+ Create credentials” > “OAuth client ID”

Type: Web Application

Name: “Enter Your Name of Client”

Authorized redirect URIs: https://developers.google.com/oauthplayground

Copy both the Client ID and Client Secret in a note.


2. Google OAuth Playground


Go to OAuth Playground > Click on the Settings icon on the right > Enable "Use your own OAuth credentials" > Enter the OAuth Client ID & OAuth Client Secret that you got from the above step > Close


In Select & Authorize APIs, Type https://mail.google.com > Authorize APIs > Login with the account that you want to send from.

Click Exchange authorization code for tokens > Copy Refresh Token


Now, you should have 3 things in your note:


Client ID

Client Secret

Oauth2 Refresh Token



Important: No need to configure Gmail account

Many tutorials tell you to turn ON "Less secure app access" to allow Nodemailer to send emails from your account, but with the OAuth2 setup above that is not needed.


Now below is the coding part 


In this example we are going to use an AWS Lambda function to test it, but it shouldn't be different for any other use case except for the way you call your function.



The first thing to do is create an npm project, so go to your terminal and run:

 npm init 

Now we need to install nodemailer:

npm install nodemailer

Create a JavaScript file, let's call it index.js, and copy and paste this code:


var nodemailer = require('nodemailer');


exports.handler = (event, context, callback) => {

let transporter = nodemailer.createTransport({

host: 'smtp.gmail.com',

port: 465,

secure: true,

auth: {

    type: 'OAuth2',

    user: 'mail@gmail.com',

    clientId: '****************************.apps.googleusercontent.com',

    clientSecret: '*************************',

    refreshToken: '***************************************************************************************',

    accessToken: '**************************************************************************************************',

    expires: 3599

}});

console.log(`Processing event ${JSON.stringify(event)}`);

// create reusable transporter object using the default SMTP transport


const requestBody = JSON.parse(event.body);


console.log('sending from ', process.env.MAILUSER);

console.log('message sender : ', JSON.stringify(requestBody.email));


// send mail to me with the message info

let info1 =  transporter.sendMail({

    from: `sender@gmail.com`, // sender address

    to: "receiver@gmail.com", // list of receivers

    subject: "Insert a Subject here", // Subject line

    text: `message text here`, // plain text body

    html: `<p>Html text message here </p>` // html body

});
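// note: sendMail() returns a promise; ideally wait for it (await or .then) before invoking the callback below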


var responseBody = {

    "result":"message sent successfully"

};


var response = {

    "statusCode": 200,

    "headers": {

        'Content-Type': 'application/json',

        'Access-Control-Allow-Origin': '*',

        'Access-Control-Allow-Methods': 'POST'

    },

    "body": JSON.stringify(responseBody),

    "isBase64Encoded": false

};

callback(null, response);

}





reference:

http://blog.bessam.engineer/How-to-use-nodemailer-with-GMail-and-OAuth


Sails Model Settings

In Sails, the top-level properties of model definitions are called model settings. This includes everything from attribute definitions, to the database settings the model will use, as well as a few other options.


To modify the default model settings shared by all of the models in your app, edit config/models.js.


A lot of things can be configured through these model settings, including the createdAt and updatedAt values.
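
As a rough sketch (assuming Sails 1.x conventions; check the file generated for your Sails version), config/models.js looks something like this:

// config/models.js - app-wide defaults shared by all models
module.exports.models = {

  migrate: 'alter',   // how Sails auto-migrates the schema while developing

  attributes: {
    createdAt: { type: 'number', autoCreatedAt: true, },
    updatedAt: { type: 'number', autoUpdatedAt: true, },
    id: { type: 'number', autoIncrement: true, },
  },

};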


references:

https://sailsjs.com/documentation/concepts/models-and-orm/model-settings

Sunday, October 25, 2020

Streaming data with XHR

In some cases an application may need or want to process a stream of data incrementally: upload the data to the server as it becomes available on the client, or process the downloaded data as it arrives from the server. Unfortunately, while this is an important use case, today there is no simple, efficient, cross-browser API for XHR streaming:


The send method expects the full payload in case of uploads.


The response, responseText, and responseXML attributes are not designed for streaming.


Streaming has never been an official use case within the official XHR specification. As a result, short of manually splitting an upload into smaller, individual XHRs, there is no API for streaming data from client to server. Similarly, while the XHR2 specification does provide some ability to read a partial response from the server, the implementation is inefficient and very limited. That’s the bad news.



The good news is that there is hope on the horizon! Lack of streaming support as a first-class use case for XHR is a well-recognized limitation, and there is work in progress to address the problem:


Web applications should have the ability to acquire and manipulate data in a wide variety of forms, including as a sequence of data made available over time. This specification defines the basic representation for Streams, errors raised by Streams, and programmatic ways to read and create Streams.



The combination of XHR and Streams API will enable efficient XHR streaming in the browser. However, the Streams API is still under active discussion, and is not yet available in any browser. So, with that, we’re stuck, right? Well, not quite. As we noted earlier, streaming uploads with XHR is not an option, but we do have limited support for streaming downloads with XHR:



var xhr = new XMLHttpRequest();

xhr.open('GET', '/stream');

xhr.seenBytes = 0;


xhr.onreadystatechange = function() { 

  if(xhr.readyState > 2) {

    var newData = xhr.responseText.substr(xhr.seenBytes); 

    // process newData


    xhr.seenBytes = xhr.responseText.length; 

  }

};


xhr.send();



In the snippet above, we:

Subscribe to state and progress notifications

Extract new data from the partial response

Update the processed byte offset




References:

https://hpbn.co/xmlhttprequest/

Saturday, October 24, 2020

React-Native How to setup Navigation

 React Navigation is made up of some core utilities and those are then used by navigators to create the navigation structure in your app


The libraries we will install now are react-native-gesture-handler, react-native-reanimated, react-native-screens and react-native-safe-area-context and @react-native-community/masked-view. 


yarn add @react-navigation/native


yarn add react-native-reanimated react-native-gesture-handler react-native-screens react-native-safe-area-context @react-native-community/masked-view


If you're on a Mac and developing for iOS, you need to install the pods (via Cocoapods) to complete the linking.



npx pod-install ios



To finalize installation of react-native-gesture-handler, add the following at the top (make sure it's at the top and there's nothing else before it) of your entry file, such as index.js or App.js:


import 'react-native-gesture-handler';

import * as React from 'react';

import { NavigationContainer } from '@react-navigation/native';


export default function App() {

  return (

    <NavigationContainer>{/* Rest of your app code */}</NavigationContainer>

  );

}





references:

https://reactnavigation.org/docs/getting-started

What does 'this' keyword in Javascript

A function's this keyword behaves a little differently in JavaScript compared to other languages. It also has some differences between strict mode and non-strict mode.

In most cases, the value of this is determined by how a function is called (runtime binding). It can't be set by assignment during execution, and it may be different each time the function is called. 


ES5 introduced the bind() method to set the value of a function's this regardless of how it's called, and ES2015 introduced arrow functions which don't provide their own this binding (it retains the this value of the enclosing lexical context).


A property of an execution context (global, function or eval) that, in non–strict mode, is always a reference to an object and in strict mode can be any value.


Global context

In the global execution context (outside of any function), this refers to the global object whether in strict mode or not.


Function context

Inside a function, the value of this depends on how the function is called.


In non-strict mode, if the value of this is not set by the call, this will default to the global object, which is window in a browser.


Class context

The behavior of this in classes and functions is similar, since classes are functions under the hood. But there are some differences and caveats.


Derived classes

Unlike base class constructors, derived constructors have no initial this binding. Calling super() creates a this binding within the constructor and essentially has the effect of evaluating this = new Base(), where Base is the inherited class.
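
A small illustration of the points above (plain JavaScript; the object and names are made up):

const user = {
  name: 'Ada',
  greet: function () { return 'hi ' + this.name; }  // `this` is decided by how greet is called
};

user.greet();                        // 'hi Ada' - called as a method, so this === user

const loose = user.greet;
loose();                             // called plainly: this falls back to the global object
                                     // (or undefined in strict mode), so the name is lost

const bound = user.greet.bind(user);
bound();                             // 'hi Ada' - bind() fixes this regardless of the call site

const arrow = () => this;            // arrow functions have no own this; they keep the
                                     // this of the enclosing lexical context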





References:

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/this

React-Native - easy steps to create an App

react-native init YourProjectName

cd YourProjectName

react-native run-ios //for iOS


To reset the cache, do this below 

npm start -- --reset-cache 


Now, trying to run on Android gave the below error:


A problem occurred configuring project ':react-native-reanimated'.

> SDK location not found. Define location with an ANDROID_SDK_ROOT environment variable or by setting the sdk.dir path in your project's local properties file at <path>.


This can be easily solved by creating a local.properties file and entering:


sdk.dir = <Android SDK dir>


This may need to be a full path, not one like ~/Library; instead use /Users/<username>/Library/Android/sdk.


Now we need to set the Android home directory as well, which is like below.


export ANDROID_HOME=~/Library/Android/sdk


Once this is set, we're good to go; it was running on both Android and iOS.



references:

https://stackoverflow.com/questions/44270504/react-native-ios-and-android-folders-not-present#:~:text=android%20and%20ios%20folder%20are,create%20android%20and%20ios%20folder.&text=This%20worked%20for%20me%3A%20Go,both%20environments%20%2D%20User%20and%20Workspace.


What is FlipperKit

 Flipper (formerly Sonar) is a platform for debugging mobile apps on iOS and Android. Visualize, inspect, and control your apps from a simple desktop interface. Use Flipper as is or extend it using the plugin API.

Flipper aims to be your number one companion for mobile app development on iOS and Android. Therefore, we provide a bunch of useful tools, including a log viewer, interactive layout inspector, and network inspector.

Flipper is built as a platform. In addition to using the tools already included, you can create your own plugins to visualize and debug data from your mobile apps. Flipper takes care of sending data back and forth, calling functions, and listening for events on the mobile app.

references

https://cocoapods.org/pods/FlipperKit


React-Native - should use expo or not?



What is Expo?

Expo is a framework that is used to build React Native apps. It is basically a bundle with tools and services built for React Native, that will help you get started with building React Native apps with ease.

So there are two ways to build React Native apps. One using expo, and the other just using plain React Native, without Expo.




Here are some good reasons to use Expo to build your React Native app.

1. Fastest way to build React Native Apps

If you are given a project that needs rapid development, and you have picked React Native to build the cross-platform app, Expo is the best fit for you. With Expo, you can build and deploy React Native apps for both iOS and Android with ease.

2. You don’t need to know Native Mobile coding

With Expo, you will never touch any native iOS or native Android code. This means, you don’t need developers to know any native mobile coding while building apps using Expo. Expo handles all the native code under the hood, and this is not available to the developers who are using it.

Whereas, if you build React Native apps from scratch without Expo, based on what I have seen you will need to use a tiny bit of your native mobile coding skills. Although 95% of the code is shared between iOS and Android, depending on your use-case, you still may have to tweak some native code on both platforms. For anyone who wants to avoid getting into that layer, Expo is the best fit.

3. No Xcode, No Android Studio

If you don’t need to do any native mobile coding using Swift (iOS) or Java (Android), it means you don’t have to ever use tools like Xcode or Android Studio. My least favorite part of being a React Native developer is to deal with all the problems I face while I use both Xcode and Android Studio.

With Expo, there is no need to ever worry about these tools, since you would never need them.

4. Publish Over The Air (OTA) Updates Instantly

A really nice feature that you can utilize while using Expo is Over The Air (OTA) updates. Since all of the code you write to build your app using Expo, is in JavaScript, you can push updates to your app anytime over the air. You would not need app store approvals to do so. The biggest pain point in mobile app development, is going through the app stores to publish updates to your apps.

With Expo, you can quickly publish updates to your app over the air, with ease. There is no native code, hence you don't need to go to the app store to publish every single update. This is a great feature that comes in handy for rapid development and testing cycles.

5. In-built access to Native APIs

Expo comes with a lot of native APIs out of the box for both iOS and Android. This makes the developers job of adding native features to the app fairly easy. Some of the common native features provided by Expo are, Camera, file system, location, social authentication, push notifications, to name a few. While using Expo, you don’t need to worry about integrating these native features, since they are available to you as a part of the Expo bundle.

The entire list of available APIs with the Expo SDK is linked below:


Why Not Expo?


1. If you are a native mobile developer, or have native developers on the team

The biggest perk of using Expo, is that you don’t have to touch native code. But if you are a native mobile developer or have native mobile developers on your team that is working on building a React Native app, do not use Expo. Because, you won’t be utilizing the native coding expertise while using Expo. This could be frustrating for a developer with native background.

Instead, build React Native apps without Expo. You can build apps using the plain React Native CLI. With this you can always tweak or customize some native components and APIs as needed.

2. Adds another layer of abstraction — Too much magic

Some developers don’t like that Expo adds another layer of abstraction to the already abstracted layer. React Native itself has a layer of abstraction and it is essentially a wrapper around the native iOS and Android components. Now adding another layer of abstraction with Expo, is not something some developers maybe comfortable with. This means, there is a lot of stuff happening under the hood, that we don’t have control over. In my perspective, this is not a real problem. If the ultimate goal is to build a solid React Native app, the magic is actually a good thing. But this is totally subjective.

3. Not all iOS and Android APIs are available yet

The official Expo documentation states that not all iOS and Android APIs are available yet. Features like bluetooth, In-App purchases are not available with Expo yet. Make sure to read to verify all the features you are trying to build are available with Expo, before you decide one way or other.

4. Can’t pick your choice of Push Notification service

With Expo, you don’t get to pick the push notification service like Firebase. You have to go with what comes out of the box, which is One Signal. If this is something that could be a problem to you, then Expo is not for you.




references:

https://medium.com/@adhithiravi/building-react-native-apps-expo-or-not-d49770d1f5b8


Why android and iOS folders are not seen when developing with react-native expo?

One of the points of Expo on top of React Native is that you don't go down to android or ios code. Expo deals with those folders for you, you don't need to interact with them. Is there a reason you need those folders? if so, you will have to eject.

npm run eject 

This will eject. However, it removes the app from the Expo framework, which adds a lot of nice benefits and abstraction from the Android/iOS code.

If you want to develop an app with plain React Native, start by following this: Getting Started with React Native.

To create a project with React Native, just write this in your terminal:

react-native init YourProjectName

cd YourProjectName

react-native run-ios //for iOS



npm run eject 


> eject

> expo eject


WARNING: expo-cli has not yet been tested against Node.js v15.0.1.

If you encounter any issues, please report them to https://github.com/expo/expo-cli/issues


expo-cli supports following Node.js versions:

* >=10.13.0 <11.0.0 (Maintenance LTS)

* >=12.13.0 <13.0.0 (Active LTS)

* >=14.0.0  <15.0.0 (Current Release)


Your git working tree is clean

To revert the changes after this command completes, you can run the following:

  git clean --force && git reset --hard


📝  Android package Learn more: https://expo.fyi/android-package


? What would you like your Android package name to be? 




references:

https://stackoverflow.com/questions/44270504/react-native-ios-and-android-folders-not-present#:~:text=android%20and%20ios%20folder%20are,create%20android%20and%20ios%20folder.&text=This%20worked%20for%20me%3A%20Go,both%20environments%20%2D%20User%20and%20Workspace.


Thursday, October 22, 2020

How to format an Excel file with formatted headers

from openpyxl import Workbook
from openpyxl.utils.dataframe import dataframe_to_rows

wb = Workbook()

ws = wb.active


for r in dataframe_to_rows(df, index=True, header=True):

    ws.append(r)



for cell in ws['A'] + ws[1]:

    cell.style = 'Pandas'


wb.save("pandas_openpyxl.xlsx")


References:

https://stackoverflow.com/questions/55041209/openpyxl-mark-row-as-heading




Dataframe how to change col Order 


column_names = ["C", "A", "B"]

df = df.reindex(columns=column_names)



References:

https://www.kite.com/python/answers/how-to-reorder-columns-in-a-pandas-dataframe-in-python


What is Los Alamos System

Los Alamos National Laboratory's mission is to solve national security challenges through scientific excellence.[25] The laboratory's strategic plan reflects U.S. priorities spanning nuclear security, intelligence, defense, emergency response, nonproliferation, counterterrorism, energy security, emerging threats, and environmental management. This strategy is aligned with priorities set by the Department of Energy (DOE), the National Nuclear Security Administration (NNSA), and national strategy guidance documents, such as the Nuclear Posture Review, the National Security Strategy, and the Blueprint for a Secure Energy Future


Los Alamos is the senior laboratory in the DOE system, and executes work in all areas of the DOE mission: national security, science, energy, and environmental management.[26] The laboratory also performs work for the Department of Defense (DoD), Intelligence Community (IC), and Department of Homeland Security (DHS), among others. The laboratory's multidisciplinary scientific capabilities and activities are organized into four Science Pillars:[27]


The Information, Science, and Technology Pillar leverages advances in theory, algorithms, and the exponential growth of high-performance computing to accelerate the integrative and predictive capability of the scientific method.

The Materials for the Future Pillar seeks to optimize materials for national security applications by predicting and controlling their performance and functionality through discovery science and engineering.

The Nuclear and Particle Futures Pillar applies science and technology to intransigent problems of system identification and characterization in areas of global security, nuclear defense, energy, and health.

The Science of Signatures Pillar integrates nuclear experiments, theory, and simulation to understand and engineer complex nuclear phenomena.



References:

https://en.wikipedia.org/wiki/Los_Alamos_National_Laboratory#:~:text=Los%20Alamos%20is%20the%20senior,%2C%20energy%2C%20and%20environmental%20management.&text=The%20Science%20of%20Signatures%20Pillar,and%20engineer%20complex%20nuclear%20phenomena.


What is Pandas Series and Dataframe

Series is a type of list in pandas which can take integer values, string values, double values and more. A Series can only contain a single list with an index, whereas a DataFrame can be made of more than one Series; in other words, a DataFrame is a collection of Series that can be used to analyse the data.

References:

https://www.geeksforgeeks.org/creating-a-dataframe-from-pandas-series/#:~:text=Series%20is%20a%20type%20of,values%2C%20double%20values%20and%20more.&text=Series%20can%20only%20contain%20single,used%20to%20analyse%20the%20data.


Python Unicode HowTo

>>> unicode('abcdef')

u'abcdef'

>>> s = unicode('abcdef')

>>> type(s)

<type 'unicode'>

>>> unicode('abcdef' + chr(255))    

Traceback (most recent call last):

...

UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 6:

ordinal not in range(128)

This is a classic python unicode pain point! Consider the following:


a = u'bats\u00E0'

print a

 => batsà

All good so far, but if we call str(a), let's see what happens:

str(a)

Traceback (most recent call last):

  File "<stdin>", line 1, in <module>

UnicodeEncodeError: 'ascii' codec can't encode character u'\xe0' in position 4: ordinal not in range(128)

Oh dip, that's not gonna do anyone any good! To fix the error, encode the bytes explicitly with .encode and tell python what codec to use:


a.encode('utf-8')

 => 'bats\xc3\xa0'

print a.encode('utf-8')

 => batsà

The issue is that when you call str(), python uses the default character encoding to try and encode the bytes you gave it, which in your case are sometimes representations of unicode characters. To fix the problem, you have to tell python how to deal with the string you give it by using .encode('whatever_unicode'). Most of the time, you should be fine using utf-8.

I found elegant work around for me to remove symbols and continue to keep string as string in follows:

yourstring = yourstring.encode('ascii', 'ignore').decode('ascii')

It's important to notice that using the ignore option is dangerous because it silently drops any unicode(and internationalization) support from the code that uses it, as seen here (convert unicode):

>>> u'City: Malmö'.encode('ascii', 'ignore').decode('ascii')

'City: Malm'

For utf-8, it's sufficient to do: yourstring = yourstring.encode('utf-8', 'ignore').decode('utf-8') 


References:

https://docs.python.org/2.7/howto/unicode.html


Docker useful commands

To get the size of docker image

docker images

By default, if you run docker images you will get the size of each image. However, if you run docker ps you would not get the size of the running containers.

docker ps

To check the size of each running container what you could do is just use the --size argument of docker ps command.

docker ps --size


CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES               SIZE

d64a8112d00a        nginx               "nginx -g 'daemon of…"   About an hour ago   Up About an hour    0.0.0.0:80->80/tcp   nginx               12MB (virtual 127MB)

ab81be326c31        eboraas/laravel     "/usr/sbin/apache2ct…"   2 hours ago         Up 2 hours          80/tcp, 443/tcp      laravel             92MB (virtual 509MB)

That way you will be able to quickly tell which container exactly is consuming the most disk space.


References:

https://www.digitalocean.com/community/questions/how-to-check-the-disk-usage-of-all-running-docker-containers


Dockerizing Django Application

 Docker takes all the great aspects of a traditional virtual machine, e.g. a self-contained system isolated from your development machine and removes many of the drawbacks such as system resource drain, setup time, and maintenance.

Put simply, Docker gives you the ability to run your applications within a controlled environment, known as a container, built according to the instructions you define. A container leverages your machine’s resources much like a traditional virtual machine (VM). However, containers differ greatly from traditional virtual machines in terms of system resources.

Docker doesn’t require the often time-consuming process of installing an entire OS to a virtual machine such as VirtualBox or VMWare.

You create a container with a few commands and then execute your applications on it via the Dockerfile.

Docker manages the majority of the operating system virtualization for you, so you can get on with writing applications and shipping them as you require in the container you have built.

Dockerfiles can be shared for others to build containers and extend the instructions within them by basing their container image on top of an existing one.

The containers are also highly portable and will run in the same manner regardless of the host OS they are executed on. Portability is a massive plus side of Docker.

References:

https://semaphoreci.com/community/tutorials/dockerizing-a-python-django-web-application


Travelling Salesman problem

The travelling salesman problem (also called the travelling salesperson problem[1] or TSP) asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?" It is an NP-hard problem in combinatorial optimization, important in theoretical computer science and operations research.

The travelling purchaser problem and the vehicle routing problem are both generalizations of TSP.


References:

https://en.wikipedia.org/wiki/Travelling_salesman_problem


How to use neo4j with Python

Neo4j can be installed on any system and then accessed via its binary and HTTP APIs.

You can use the official binary driver for Python (neo4j-python-driver) or connect via HTTP with any of our community drivers.


Neo4j Python Driver

The Neo4j Python driver is officially supported by Neo4j and connects to the database using the binary protocol. It aims to be minimal, while being idiomatic to Python.

pip install neo4j

from neo4j import GraphDatabase


class HelloWorldExample:


    def __init__(self, uri, user, password):

        self.driver = GraphDatabase.driver(uri, auth=(user, password))


    def close(self):

        self.driver.close()


    def print_greeting(self, message):

        with self.driver.session() as session:

            greeting = session.write_transaction(self._create_and_return_greeting, message)

            print(greeting)


    @staticmethod

    def _create_and_return_greeting(tx, message):

        result = tx.run("CREATE (a:Greeting) "

                        "SET a.message = $message "

                        "RETURN a.message + ', from node ' + id(a)", message=message)

        return result.single()[0]



if __name__ == "__main__":

    greeter = HelloWorldExample("bolt://localhost:7687", "neo4j", "password")

    greeter.print_greeting("hello, world")

    greeter.close()



Py2neo


Py2neo is a client library and comprehensive toolkit for working with Neo4j from within Python applications and from the command line. It has been carefully designed to be easy and intuitive to use.
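For comparison, a minimal sketch of the same kind of round trip with Py2neo (the connection URI and credentials are placeholders):

from py2neo import Graph

# connect to a local Neo4j instance
graph = Graph("bolt://localhost:7687", auth=("neo4j", "password"))

# create a node and read a property back
message = graph.evaluate(
    "CREATE (a:Greeting {message: $message}) RETURN a.message",
    message="hello, world")
print(message)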




References:

https://neo4j.com/developer/python/

What is Cypher query language

Cypher is Neo4j’s graph query language that allows users to store and retrieve data from the graph database. Neo4j wanted to make querying graph data easy to learn, understand, and use for everyone, but also incorporate the power and functionality of other standard data access languages. This is what Cypher aims to accomplish.


Cypher’s syntax provides a visual and logical way to match patterns of nodes and relationships in the graph. It is a declarative, SQL-inspired language for describing visual patterns in graphs using ASCII-Art syntax. It allows us to state what we want to select, insert, update, or delete from our graph data without a description of exactly how to do it. Through Cypher, users can construct expressive and efficient queries to handle needed create, read, update, and delete functionality.
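For example, a couple of short statements using that ASCII-art pattern syntax (the labels and properties here are made up for illustration):

CREATE (alice:Person {name: 'Alice'})-[:KNOWS]->(bob:Person {name: 'Bob'})

MATCH (p:Person)-[:KNOWS]->(friend:Person)
WHERE p.name = 'Alice'
RETURN friend.name

The (node)-[:RELATIONSHIP]->(node) shape of the query mirrors the shape of the data in the graph, which is what makes the patterns easy to read.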


Cypher is not only the best way to interact with data and Neo4j - it is also open source! The openCypher project provides an open language specification, technical compatibility kit, and reference implementation of the parser, planner, and runtime for Cypher. It is backed by several companies in the database industry and allows implementors of databases and clients to freely benefit from, use, and contribute to the development of the openCypher language.


https://neo4j.com/developer/cypher/


Monday, October 19, 2020

What is Bezier curve

Bezier curves are used in computer graphics to draw shapes, for CSS animation, and in many other places.

They are a fairly simple concept, worth studying once; after that you will feel comfortable in the world of vector graphics and advanced animations.

A bezier curve is defined by control points.

There may be 2, 3, 4 or more.

The control points are not always on the curve itself. That is perfectly normal; later we will see how the curve is built.

The curve order equals the number of points minus one. For two points we have a linear curve (that’s a straight line), for three points – quadratic curve (parabolic), for four points – cubic curve.

A curve is always inside the convex hull of its control points.

Because of that last property, in computer graphics it is possible to optimize intersection tests. If the convex hulls do not intersect, then the curves do not either, so checking the convex hulls for intersection first can give a very fast "no intersection" result. Checking the intersection of convex hulls is much easier, because they are much simpler figures than the curves: rectangles, triangles, and so on.

The main value of Bezier curves for drawing is that by moving the control points, the curve changes in an intuitively predictable way.
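As a small illustration (not from the referenced article), a point on a Bezier curve of any order can be computed from its control points by repeated linear interpolation (De Casteljau's algorithm):

def bezier_point(points, t):
    # points: list of (x, y) control points; 0 <= t <= 1
    # repeatedly interpolate between neighbouring points until one remains
    pts = list(points)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# quadratic curve (three control points) evaluated at its midpoint
print(bezier_point([(0, 0), (0.5, 1), (1, 0)], 0.5))  # (0.5, 0.5)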

References:

https://javascript.info/bezier-curve


Sunday, October 18, 2020

Chrome Networking Tab: Timing information

Below are the main timing phases shown in the Network tab.


Queueing. The browser queues requests when:

There are higher priority requests.

There are already six TCP connections open for this origin, which is the limit. Applies to HTTP/1.0 and HTTP/1.1 only.

The browser is briefly allocating space in the disk cache


Stalled. The request could be stalled for any of the reasons described in Queueing.

DNS Lookup. The browser is resolving the request's IP address.


Initial connection. The browser is establishing a connection, including TCP handshakes/retries and negotiating SSL.

Proxy negotiation. The browser is negotiating the request with a proxy server.

Request sent. The request is being sent.

ServiceWorker Preparation. The browser is starting up the service worker.

Request to ServiceWorker. The request is being sent to the service worker.

Waiting (TTFB). The browser is waiting for the first byte of a response. TTFB stands for Time To First Byte. This timing includes 1 round trip of latency and the time the server took to prepare the response.

Content Download. The browser is receiving the response.

Receiving Push. The browser is receiving data for this response via HTTP/2 Server Push.

Reading Push. The browser is reading the local data previously received.




References:

https://developers.google.com/web/tools/chrome-devtools/network/reference?utm_source=devtools#timing-explanation



Chrome Networking - Why does the protocol say h2?

Sometimes the protocol column shows http/1.1, while for other sites it shows h2.

What is h2?

HTTP/1.1: the “classic” HTTP protocol, known and loved for over 15 years

SPDY/3.1: Google’s first version of the HTTP/2 spec, formed the basis of HTTP/2

H2-14: H2 stands for "HTTP 2"; the 14 refers to "draft 14", since the HTTP/2 spec wasn't final yet at the time the article was written

H2C-14: H2C stands for “HTTP 2 Cleartext”, the HTTP/2 protocol over a non-encrypted channel


Now main thing was 

Is there a way to force an XMLHttpRequest to use HTTP/1.1?


I have a server endpoint that supports both HTTP/1.1 and HTTP2. For testing purposes, I want to try downloading content from the endpoint with both HTTP/1.1 and HTTP2 connections, possibly at the same time.


When I request data from the endpoint with an XMLHttpRequest, it automatically uses HTTP2, without me including the Connection: Upgrade header.


Is there a way to force an XMLHttpRequest to use HTTP/1.1 for the underlying TCP connection? What about other protocols, such as Quic or SPDY?


However, the answer is NO. The browser decides which protocol it wants to use as an implementation detail of the XmlHttpRequest object. You can't force a particular choice from inside your script.



References:

https://ma.ttias.be/view-http-spdy-http2-protocol-google-chrome/

Friday, October 16, 2020

What is Turing test

 The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech.[2] If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test results do not depend on the machine's ability to give correct answers to questions, only how closely its answers resemble those a human would give.


The test was introduced by Turing in his 1950 paper, "Computing Machinery and Intelligence", while working at the University of Manchester (Turing, 1950; p. 460).[3] It opens with the words: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words."[4] Turing describes the new form of the problem in terms of a three-person game called the "imitation game", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?"[5] This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that "machines can think".[6]


References:

https://en.wikipedia.org/wiki/Turing_test

Pandas Merging Dataframe using SQL query



This is an excellent solution for merging two data frames where one value must fall between two others:

import pandas as pd

import sqlite3

from datetime import datetime



#We'll use firelynx's tables:

presidents = pd.DataFrame({"name": ["Bush", "Obama", "Trump"],

                           "president_id":[43, 44, 45]})

terms = pd.DataFrame({'start_date': pd.date_range('2001-01-20', periods=5, freq='48M'),

                      'end_date': pd.date_range('2005-01-21', periods=5, freq='48M'),

                      'president_id': [43, 43, 44, 44, 45]})

war_declarations = pd.DataFrame({"date": [datetime(2001, 9, 14), datetime(2003, 3, 3)],

                                 "name": ["War in Afghanistan", "Iraq War"]})

#Make the db in memory

conn = sqlite3.connect(':memory:')

#write the tables

terms.to_sql('terms', conn, index=False)

presidents.to_sql('presidents', conn, index=False)

war_declarations.to_sql('wars', conn, index=False)


qry = '''

    select  

        start_date PresTermStart,

        end_date PresTermEnd,

        wars.date WarStart,

        presidents.name Pres

    from

        terms join wars on

        date between start_date and end_date join presidents on

        terms.president_id = presidents.president_id

    '''

df = pd.read_sql_query(qry, conn)



References:

https://stackoverflow.com/questions/30627968/merge-pandas-dataframes-where-one-value-is-between-two-others



Certificate Common Name and Subject Alternative Name



WILDCARD(*) SSL CERTIFICATE IN COMMON NAME (CN)


Recently I faced an issue with a wildcard (*) in the Common Name (CN) of an SSL certificate. Invoking a SOAP endpoint over SSL through a standalone Java web services client failed with the error "the https URL hostname does not match the Common Name (CN) on the server certificate in the client's truststore."


WildCard SSL certificates

Wildcard SSL certificates secure a website and an unlimited number of its first-level subdomains. An SSL certificate with CN=*.mycompany.com is called a wildcard certificate. It would protect a.mycompany.com, b.mycompany.com, c.mycompany.com, and so on. But it will not cover second-, third-, or deeper-level subdomains unless those names are added to the Subject Alternative Name (SAN) field of the certificate.


Example

An SSL certificate with CN=*.mycompany.com will not work for "blog.subdomain.mycompany.com" unless "blog.subdomain.mycompany.com" is added to the Subject Alternative Name (SAN).


Below describes the environment where I encountered the issue and also the solution for the issue:


Environment

A Dynamic Invocation Interface (DII) web service java client invoking .NET SOAP web service. The DII web service java client is a standalone java application running on JRE 7.


The Fully Qualified Domain Name(FQDN) in SOAP end point https://hostname.subdomain.mycompany.com


Certificate Setup

CN=*.mycompany.com


SubjectAlternativeName [

DNSName: *.mycompany.com

DNSName: mycompany.com

DNSName: subdomain.mycompany.com

]


Issue: the way the SSL certificate was set up. The endpoint's FQDN hostname.subdomain.mycompany.com was not listed in the SAN, and the wildcard *.mycompany.com only covers first-level subdomains, so hostname verification failed.


Solution

A new certificate was set up with "hostname.subdomain.mycompany.com" (the FQDN) added to the Subject Alternative Name (SAN). Below is the new certificate setup, which resolved the issue.


New Certificate Setup

CN=*.mycompany.com


SubjectAlternativeName [

DNSName: *.mycompany.com

DNSName: mycompany.com

DNSName: hostname.subdomain.mycompany.com

]
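To check which names a server certificate actually covers, a quick sketch using Python's standard ssl module can print the CN and SAN entries (the hostname below is a placeholder; if the name is not covered at all, the handshake itself fails with a certificate verification error):

import socket
import ssl

hostname = "hostname.subdomain.mycompany.com"  # placeholder FQDN

context = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()

print(cert.get("subject"))           # includes the commonName
print(cert.get("subjectAltName"))    # tuple of ('DNS', 'name') entries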



References:

https://www.idmworks.com/wildcard-ssl-certificate-in-common-name-cn/

Python Module Level logger



The logging library takes a modular approach and offers several categories of components: loggers, handlers, filters, and formatters.


Loggers expose the interface that application code directly uses.

Handlers send the log records (created by loggers) to the appropriate destination.

Filters provide a finer grained facility for determining which log records to output.

Formatters specify the layout of log records in the final output.



Logging is performed by calling methods on instances of the Logger class (hereafter called loggers). Each instance has a name, and they are conceptually arranged in a namespace hierarchy using dots (periods) as separators. For example, a logger named ‘scan’ is the parent of loggers ‘scan.text’, ‘scan.html’ and ‘scan.pdf’. Logger names can be anything you want, and indicate the area of an application in which a logged message originates.


A good convention to use when naming loggers is to use a module-level logger, in each module which uses logging, named as follows:


logger = logging.getLogger(__name__)
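A slightly fuller sketch of the same pattern, with the handler and formatter configured once in the application entry point (the function and format string here are illustrative):

import logging

# in each module that needs logging
logger = logging.getLogger(__name__)

def do_work():
    logger.info("starting work")

# in the application entry point, configure the root logger once
if __name__ == "__main__":
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
    root = logging.getLogger()
    root.addHandler(handler)
    root.setLevel(logging.INFO)
    do_work()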




References:

https://docs.python.org/3/howto/logging.html#logging-basic-tutorial


Docker: how to check and configure container-level logging

sudo docker inspect -f '{{.HostConfig.LogConfig.Type}}'  78634dcef146

This shows which log driver is used for the container. If the JSON logging driver is in use, the output is json-file.
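The log driver and its options can also be set per container when it is started, for example (the image and size limits here are just placeholders):

docker run --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 nginx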

Very useful set of Docker commands


References:

https://blog.softwaremill.com/how-to-keep-your-docker-installation-clean-98a74eb7e7b3

https://www.digitalocean.com/community/tutorials/how-to-remove-docker-images-containers-and-volumes

https://stackoverflow.com/questions/53221412/why-the-none-image-appears-in-docker-and-how-can-we-avoid-it




What are two commands that give information about the Docker installation and running containers?

 docker version 

docker info 


These two commands give good information about the Docker installation and what is currently running.


Cleaning up Docker containers


The command sudo du $dir -hk --max-depth=2 | sort -k2 was listing all the directories, but sorted alphabetically by path; to find the largest directories, sort numerically on the size column instead (for example sudo du -k --max-depth=2 $dir | sort -n).
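Once the large directories are identified, the usual cleanup is the Docker prune family of commands (standard Docker CLI; note that the -a and --volumes flags delete aggressively, so use them with care):

docker container prune
docker image prune -a
docker volume prune
docker system prune -a --volumes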



https://github.com/docker/compose/issues/3262

https://stackoverflow.com/questions/51238891/how-to-fix-the-running-out-of-disk-space-error-in-docker

https://puvox.software/blog/solution-clean-full-disk-space-by-dev-vda1/

Docker Compose how to view log files

To view the logs, run one of the following:


docker-compose logs

docker-compose logs <<name of the service>>


To limit the output, you can do something like this:


docker-compose logs --no-color --tail=1000 <service-name> > logs.txt



References:

https://support.onegini.com/hc/en-us/articles/115000379192-How-to-show-docker-compose-container-logs


What Is a Docker Container?

 





A container is a unit of software that packages an application, making it easy to deploy and manage no matter the host. Say goodbye to the infamous “it works on my machine” statement!


How? Containers are isolated and stateless, which enables them to behave the same regardless of the differences in infrastructure. A Docker container is a runtime instance of an image that’s like a template for creating the environment you want.


What Is a Docker Image?

A Docker image is an executable package that includes everything that the application needs to run. This includes code, libraries, configuration files, and environment variables.


Why Do You Need Containers?

Containers allow breaking down applications into microservices – multiple small parts of the app that can interact with each other via functional APIs. Each microservice is responsible for a single feature so development teams can work on different parts of the application at the same time. That makes building an application easier and faster.



References:

https://sematext.com/guides/docker-logs/

Docker logger, what all it captures?

 By default, Docker captures the standard output (and standard error) of all your containers, and writes them in files using the JSON format. The JSON format annotates each line with its origin (stdout or stderr) and its timestamp. Each log file contains information about only one container.


{"log":"Log line is here\n","stream":"stdout","time":"2019-01-01T11:11:11.111111111Z"}


References:

https://docs.docker.com/config/containers/logging/json-file/

What are core web vitals that affect Ranking - Part 1



Back in early May, Google introduced Core Web Vitals, a set of metrics designed to measure the quality of a website’s user experience. These metrics are related to page load time, interactivity and stability.


Now, Google has announced that they will combine Core Web Vitals with other factors such as mobile-friendliness, website security and the presence of intrusive interstitials to create a comprehensive evaluation of a page’s user experience.


This evaluation will also be incorporated into Google’s search algorithm, meaning Core Web Vitals will be used as a ranking factor.


What are Core Web Vitals?


According to Google, Core Web Vitals "measure dimensions of web usability such as load time, interactivity, and the stability of content as it loads (so you don’t accidentally tap that button when it shifts under your finger - how annoying!)."


Core Web Vitals are made up of 3 metrics:

Largest Contentful Paint (LCP) measures how long it takes a page to load and display the main page content. Aim for an LCP of 2.5 seconds or faster.


First Input Delay (FID) measures how long a user has to wait to interact with a page. A "good" FID is 100 milliseconds or less.



Cumulative Layout Shift (CLS) is the evaluation of how stable a page is as it loads. It measures how much the layout of a page shifts as it loads. Ideally, a page’s CLS should be no more than 0.1.


It’s worth noting, however, that the metrics scored in Core Web Vitals can shift and change as the web evolves. In fact, Google has said they anticipate incorporating more page experience factors into their ranking factors on a "yearly basis" as user expectations change.


How to use Core Web Vitals for your SEO


While the initial reaction to a new Google ranking factor might be annoyance, trepidation or frustration, tracking your site’s Core Web Vitals can help your SEO efforts quite a bit.


If you've been working in the SEO world for almost any amount of time, you've probably noticed that Google constantly "advises" site owners to provide their users with a "great experience" but didn't really expound on what that might mean.


Well, now you have actual hard data you can track and analyze to ensure that you are, indeed, providing users with a positive page experience.


Google Search Console Core Web Vitals


Google recently replaced the Speed Report in Google Search Console with the new Core Web Vitals report. This provides an overview of how all of your web pages perform against the new metrics, categorizing them as red ('poor URLs'), orange ('URLs need improvement'), or green ('good URLs').




References:

https://www.woorank.com/en/blog/google-core-web-vitals


WebSocket creating simple client and server


Over the past few years, a new type of communication started to emerge on the web and in mobile apps, called websockets. This protocol has been long-awaited and was finally standardized by the IETF in 2011, paving the way for widespread use.


How do Websockets Work?

At its core, a websocket is just a TCP connection that allows for full-duplex communication, meaning either side of the connection can send data to the other, even at the same time.



To establish this connection, the protocol actually initiates the handshake as a normal HTTP request, but then gets 'upgraded' using the upgrade request HTTP header, like this:


GET /ws/chat HTTP/1.1

Host: chat.example.com

Upgrade: websocket

Connection: Upgrade

Sec-WebSocket-Key: q1PZLMeDL4EwLkw4GGhADm==

Sec-WebSocket-Protocol: chat, superchat

Sec-WebSocket-Version: 13

Origin: http://example.com



Of the many different websocket libraries for Node.js available to us, I chose to use socket.io throughout this article because it seems to be the most popular and is, in my opinion, the easiest to use. While each library has its own unique API, they also have many similarities since they're all built on top of the same protocol, so hopefully you'll be able to translate the code below to any library you want to use.



Establishing the Connection

In order for a connection to be established between the client and server, the server must do two things:



Hook in to the HTTP server to handle websocket connections

Serve up the socket.io.js client library as a static resource




References:

https://stackabuse.com/node-js-websocket-examples-with-socket-io/

What is Measurement Lab NDT

NDT is a single-stream performance measurement of a connection's capacity for "bulk transport" (as defined in IETF RFC 3148). NDT measures "single stream performance" or "bulk transport capacity" and reports upload and download speeds and latency metrics.


Run an NDT Test


Originally developed at Internet2, NDT has been hosted by M-Lab since our founding in 2009, and M-Lab has helped maintain and develop NDT for most of its history on the M-Lab platform. Over the last decade, three primary themes have driven the evolution of NDT: standard kernel instrumentation, advances in TCP congestion control, and protocols and ports that support more clients.


NDT Testing Protocols

As a part of our transition from the web100 version of NDT server to the new platform, M-Lab has named specific protocol versions for the original server and the new one we are now using.


web100 is the protocol referring to data collected by the original NDT server

Relied on the web100 kernel module for tcp statistics

Collected using the original version of NDT server

Used the Reno TCP congestion control algorithm

Retired in November 2019



ndt5 is a new NDT protocol designed to be backward compatible with past NDT clients

Relies on tcp-info for tcp statistics

Collected using M-Lab’s re-written ndt-server, which follows the legacy NDT protocol to support existing NDT clients that use it

Uses the Cubic TCP congestion control algorithm


ndt7 is a new NDT protocol that uses TCP BBR where available, operates on standard HTTP(S) ports (80, 443), and uses TCP_INFO instrumentation for TCP statistics

Relies on tcp-info for tcp statistics

Collected using M-Lab’s re-written ndt-server

Uses the BBR TCP congestion control algorithm, falling back to Cubic when BBR is not available in the client operating system


Data Collected by NDT


When you run NDT, the IP address provided by your Internet Service Provider will be collected along with your measurement results. M-Lab conducts the test and publishes all test results to promote Internet research. NDT does not collect any information about you as an Internet user.




References:

https://www.measurementlab.net/tests/ndt/


What is default timeout of WebSocket


pingTimeout (default 5000): how many ms without a pong packet before the connection is considered closed

pingInterval (default 25000): how many ms before sending a new ping packet

upgradeTimeout (default 10000): how many ms before an uncompleted transport upgrade is cancelled


The pingTimeout and pingInterval parameters will impact the delay before a client knows the server is not available anymore. For example, if the underlying TCP connection is not closed properly due to a network issue, a client may have to wait up to pingTimeout + pingInterval ms before getting a disconnect event.


The order of the transports array is important. By default, a long-polling connection is established first, and then upgraded to WebSocket if possible. Using ['websocket'] means there will be no fallback if a WebSocket connection cannot be opened.


const server = require('http').createServer();


const io = require('socket.io')(server, {

  path: '/test',

  serveClient: false,

  // below are engine.IO options

  pingInterval: 10000,

  pingTimeout: 5000,

  cookie: false

});


server.listen(3000);




References:

https://socket.io/docs/server-api/#new-Server-httpServer-options


Cloud run - Best of both worlds

Develop and deploy highly scalable containerized applications on a fully managed serverless platform.

Container to production in seconds

Write code your way by deploying any container that listens for requests or events. Build applications in your favorite language, with your favorite dependencies and tools, and deploy them in seconds.

Fully managed

Cloud Run abstracts away all infrastructure management by automatically scaling up and down from zero almost instantaneously—depending on traffic. Cloud Run only charges you for the exact resources you use.

Enhanced developer experience

Cloud Run makes app development and deployment simpler and faster. And it’s fully integrated with Cloud Code, Cloud Build, Cloud Monitoring, and Cloud Logging for an enhanced end-to-end developer experience.

Below are the key features

Any language, any library, any binary

Use the programming language of your choice, any language or operating system libraries, or even bring your own binaries.

Leverage container workflows and standards

Containers have become a standard to package and deploy code and its dependencies. Cloud Run pairs great with the container ecosystem: Cloud Build, Cloud Code, Artifact Registry, and Docker.


Pay‐per‐use

Only pay when your code is running, billed to the nearest 100 milliseconds.


Monday, October 12, 2020

Siofu file uploader analysis

There are multiple control parameters that can be used for file upload, especially the chunk size.

Based on it, the WebSocket will be configured to send that many bytes in each packet.

The chunk size really only determines how much data goes into each WebSocket message, as the capture below shows.


42["siofu_chunk",{"id":0}] 26

05:47:08.264

451-["siofu_progress",{"id":0,"size":11242812,"start":10240000,"end":11242812,"content":{"_placeholder":true,"num":0},"base64":false}] 134

05:47:08.268

Binary Message 1.0 MB

05:47:08.275

42["siofu_done",{"id":0}] 25

05:47:08.282

42["siofu_chunk",{"id":0}] 26

05:47:09.900

42["siofu_complete",{"id":0,"success":true,"detail":{}}] 56

05:47:09.901

2 1

05:47:14.785

3 1

05:47:15.084

2 1

05:47:40.087

3 1

05:47:40.511

2 1

05:48:05.514

3 1

05:48:05.812

2 1

05:48:30.817

3 1

05:48:31.302

2 1

05:48:56.304

3 1

05:48:56.700

2 1

05:49:21.704

3 1

05:49:22.004

2 1

05:49:47.006

3 1

05:49:47.333

Data: 2probe, Length: 6, Time: 05:46:50.713

1

2probe



Trying with different chunk sizes, the observations are as follows:


Whatever chunk size is set before starting the upload, that many bytes of data are sent in each WebSocket message.

The main drawback of the package is that it does not allow the chunk size to be altered after the upload operation starts.

This prevents changing the size in the middle of a transfer. Also, the whole concept is based on storing the file on the server.

This does not help much either, since the size remains constant.


We need to find some alternative for this. 


References: 

https://www.npmjs.com/package/socketio-file-upload#progress-1

ESLint basics

When building web applications, linting tools play a crucial role in our development process. Every developer should know what a linter is, how to install and configure one, and how to use one efficiently, making sure that the best code standards are applied to the project.


What is Linting

Linting is the process of evaluating and debugging the source code in the background by analyzing it against a set of rules for programmatic and stylistic errors. This allows the developer to find errors before running the code. Rules also enforce coding standards and best practices, resulting in better code quality that is more readable and easier to maintain.

To ensure good practices and standards several JavaScript Static Analyzer Tools emerged such as:

JSLint: a code quality tool, looks for problems on JavaScript programs.

JSHint: a community-driven tool that detects errors and potential problems in JS code.

ESLint: completely pluggable linter, allows the developer to create their own linting rules.

Flow: using data flow analysis, infers types, and tracks data flows in the code.

Prettier: an opinionated code formatter.

TSLint: extensible static tool analyzer for TypeScript.


ESLint is one of the most used linting tools and there is a reason for it. Highly configurable, it has a huge adoption from the community having hundreds of open-source configurations and plugins. It allows the configuration of several options like coding rules, environments, parser options, extend configurations, and use plugins.


On one hand, ESLint is responsible for checking against programming errors, on the other hand, we have Prettier an opinionated code formatter capable of finding any stylistic errors. It comes with some code style standards and is also easy to configure. It's easy to integrate with ESLint and has Code Editor extensions that can format the code on save!
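As a rough sketch of what such a configuration looks like, a typical .eslintrc.json touching those options might be (the React plugin and Prettier config are common community packages, used here only as examples):

{
  "env": { "browser": true, "es2020": true },
  "parserOptions": { "ecmaVersion": 2020, "sourceType": "module" },
  "extends": ["eslint:recommended", "plugin:react/recommended", "prettier"],
  "plugins": ["react"],
  "rules": {
    "no-unused-vars": "warn",
    "eqeqeq": "error"
  }
}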



References:

https://www.imaginarycloud.com/blog/how-to-configure-eslint-prettier-in-react/