Monday, November 30, 2020

What is .rst file reStructured Text

reStructuredText (RST, ReST, or reST) is a file format for textual data used primarily in the Python programming language community for technical documentation.


It is part of the Docutils project of the Python Doc-SIG (Documentation Special Interest Group), aimed at creating a set of tools for Python similar to Javadoc for Java or Plain Old Documentation (POD) for Perl. Docutils can extract comments and information from Python programs, and format them into various forms of program documentation.[1]


In this sense, reStructuredText is a lightweight markup language designed to be both (a) processable by documentation-processing software such as Docutils, and (b) easily readable by human programmers who are reading and writing Python source code.



Examples of reST markup


Headers

Section Header

==============


Subsection Header

-----------------


Lists

- A bullet list item

- Second item


  - A sub item


- Spacing between items separates list items


* Different bullet symbols create separate lists


- Third item


1) An enumerated list item


2) Second item


   a) Sub item that goes on at length and thus needs

      to be wrapped. Note the indentation that must

      match the beginning of the text, not the 

      enumerator.


      i) List items can even include


         paragraph breaks.


3) Third item


#) Another enumerated list item


#) Second item


Images

.. image:: /path/to/image.jpg



Named links

A sentence with links to `Wikipedia`_ and the `Linux kernel archive`_.


.. _Wikipedia: https://www.wikipedia.org/

.. _Linux kernel archive: https://www.kernel.org/



Anonymous links

Another sentence with an `anonymous link to the Python website`__.


__ https://www.python.org/



Literal blocks

::


  some literal text


This may also be used inline at the end of a paragraph, like so::


  some more literal text


.. code:: python


   print("A literal block directive explicitly marked as python code")




References:

https://en.wikipedia.org/wiki/ReStructuredText

Python nose introduction



Nose’s tagline is “nose extends unittest to make testing easier”.

It is a fairly well-known Python unit test framework, and can run doctests, unittests, and “no boilerplate” tests.


It is a good candidate for a go-to test framework.


A smart developer should get familiar with doctest, unittest, pytest, and nose, and then decide if one of those makes the most sense for them, or if they want to keep looking for features only found in other frameworks.



Nose fixtures

Nose extends the unittest fixture model of setup/teardown.


We can add specific code to run:


at the beginning and end of a module of test code (setup_module/teardown_module)

To get this to work, you just have to use the right naming rules.

at the beginning and end of a class of test methods (setup_class/teardown_class)

To get this to work, you have to use the right naming rules, and include the ‘@classmethod’ decorator.


before and after a test function call (setup_function/teardown_function)

You can use any name. You have to apply them with the ‘@with_setup’ decorator imported from nose.

You can also use direct assignment, which I’ll show in the example.


before and after a test method call (setup/teardown)

To get this to work, you have to use the right name.
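A hedged sketch of how these naming rules fit together (the function and class names are made up for illustration; with_setup is imported from nose.tools here):

from nose.tools import with_setup

def setup_module():
    # runs once, before any test in this module
    pass

def teardown_module():
    # runs once, after every test in this module has finished
    pass

def my_setup():
    # any name works when applied with @with_setup
    pass

def my_teardown():
    pass

@with_setup(my_setup, my_teardown)
def test_addition():
    assert 1 + 1 == 2

class TestExample:

    @classmethod
    def setup_class(cls):
        # runs once for the whole class
        pass

    @classmethod
    def teardown_class(cls):
        # runs once after all test methods in the class
        pass

    def setup(self):
        # runs before each test method
        pass

    def teardown(self):
        # runs after each test method
        pass

    def test_method(self):
        assert True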




References:

https://pythontesting.net/framework/nose/nose-introduction/


Python unittest intro

The unittest test framework is Python’s xUnit-style framework.

It is a standard module that you already have if you’ve got Python version 2.1 or greater.



Overview of unittest

The unittest module used to be called PyUnit, due to its legacy as an xUnit-style framework.

It works much the same as the other styles of xUnit, and if you’re familiar with unit testing in other languages, this framework (or derived versions), may be the most comfortable for you.


The standard workflow is:

1. You define your own class derived from unittest.TestCase.

2. Then you fill it with functions that start with ‘test_’.

3. You run the tests by placing unittest.main() in your file, usually at the bottom.



One of the many benefits of unittest, which you’ll use when your tests get bigger than the toy examples I’m showing on this blog, is the use of ‘setUp’ and ‘tearDown’ functions to get your system ready for the tests.


test_um_unittest.py:


import unittest

from unnecessary_math import multiply


class TestUM(unittest.TestCase):


    def setUp(self):

        pass


    def test_numbers_3_4(self):

        self.assertEqual( multiply(3,4), 12)


    def test_strings_a_3(self):

        self.assertEqual( multiply('a',3), 'aaa')


if __name__ == '__main__':

    unittest.main()


In this example, I’ve used assertEqual(). The unittest framework has a whole bunch of assertBlah() style functions like assertEqual(). Once you have a reasonable reference for all of the assert functions bookmarked, working with unittest is pretty powerful and easy.


Aside from the tests you write, most of what you need to do can be accomplished with the test fixture methods such as setUp, tearDown, setUpClass, tearDownClass, etc.
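For instance, a minimal sketch (illustrative names, not from the article) of how the class-level fixtures relate to the per-test ones:

import unittest

class TestWithFixtures(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # runs once, before any test in this class
        cls.shared = [1, 2, 3]

    @classmethod
    def tearDownClass(cls):
        # runs once, after all tests in this class
        cls.shared = None

    def setUp(self):
        # runs before every test method
        self.value = 42

    def tearDown(self):
        # runs after every test method
        self.value = None

    def test_uses_fixtures(self):
        self.assertEqual(self.value, 42)
        self.assertIn(2, self.shared)

if __name__ == '__main__':
    unittest.main()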


Running unittests

At the bottom of the test file, we have this code:



if __name__ == '__main__':

    unittest.main()


This allows us to run all of the test code just by running the file.

Running it with no options is the most terse, and running with a ‘-v’ is more verbose, showing which tests ran.



> python test_um_unittest.py

..

----------------------------------------------------------------------

Ran 2 tests in 0.000s


OK

> python test_um_unittest.py -v

test_numbers_3_4 (__main__.TestUM) ... ok

test_strings_a_3 (__main__.TestUM) ... ok


----------------------------------------------------------------------

Ran 2 tests in 0.000s


OK



Test discovery

Let’s say that you’ve got a bunch of test files. It would be annoying to have to run each test file separately. That’s where test discovery comes in handy.


In our case, all of my test code (one file for now) is in ‘simple_example’.

To run all of the unittests in there, use python -m unittest discover simple_example, with or without the ‘-v’, like this:




> python -m unittest discover simple_example

..

----------------------------------------------------------------------

Ran 2 tests in 0.000s


OK

> python -m unittest discover -v simple_example

test_numbers_3_4 (test_um_unittest.TestUM) ... ok

test_strings_a_3 (test_um_unittest.TestUM) ... ok


----------------------------------------------------------------------

Ran 2 tests in 0.000s


OK




References:

https://pythontesting.net/framework/unittest/unittest-introduction/

Python Mutable and immutable types

Mutable types are those that allow in-place modification of the content. Typical mutables are lists and dictionaries: All lists have mutating methods, like list.append() or list.pop(), and can be modified in place. The same goes for dictionaries.


Immutable types provide no method for changing their content. For instance, the variable x set to the integer 6 has no “increment” method. If you want to compute x + 1, you have to create another integer and give it a name.




my_list = [1, 2, 3]

my_list[0] = 4

print(my_list)  # [4, 2, 3] <- The same list has changed


x = 6

x = x + 1  # The new x is another object
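A quick way to see the difference (just an illustration) is to compare object identities with id():

my_list = [1, 2, 3]
print(id(my_list))
my_list[0] = 4
print(id(my_list))   # same id: the list was modified in place

x = 6
print(id(x))
x = x + 1
print(id(x))         # different id: x now refers to a new int object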



Bad


# create a concatenated string from 0 to 19 (e.g. "012..1819")

nums = ""

for n in range(20):

    nums += str(n)   # slow and inefficient

print(nums)

Better


# create a concatenated string from 0 to 19 (e.g. "012..1819")

nums = []

for n in range(20):

    nums.append(str(n))

print "".join(nums)  # much more efficient

Best


# create a concatenated string from 0 to 19 (e.g. "012..1819")

nums = [str(n) for n in range(20)]

print "".join(nums)



References:

https://docs.python-guide.org/writing/structure/#the-actual-module


Python Context Managers

A context manager is a Python object that provides extra contextual information to an action. This extra information takes the form of running a callable upon initiating the context using the with statement, as well as running a callable upon completing all the code inside the with block. The most well known example of using a context manager is shown here, opening on a file:


with open('file.txt') as f:

    contents = f.read()


Anyone familiar with this pattern knows that invoking open in this fashion ensures that f’s close method will be called at some point. This reduces a developer’s cognitive load and makes the code easier to read.


There are two easy ways to implement this functionality yourself: using a class or using a generator. Let’s implement the above functionality ourselves, starting with the class approach:


class CustomOpen(object):

    def __init__(self, filename):

        self.file = open(filename)


    def __enter__(self):

        return self.file


    def __exit__(self, ctx_type, ctx_value, ctx_traceback):

        self.file.close()


with CustomOpen('file') as f:

    contents = f.read()


This is just a regular Python object with two extra methods that are used by the with statement. CustomOpen is first instantiated and then its __enter__ method is called and whatever __enter__ returns is assigned to f in the as f part of the statement. When the contents of the with block have finished executing, the __exit__ method is then called.

The same behaviour can also be implemented with a generator, using the contextmanager decorator from the contextlib module:





from contextlib import contextmanager


@contextmanager

def custom_open(filename):

    f = open(filename)

    try:

        yield f

    finally:

        f.close()


with custom_open('file') as f:

    contents = f.read()



This works in exactly the same way as the class example above, although it’s more terse. The custom_open function executes until it reaches the yield statement. It then gives control back to the with statement, which assigns whatever was yielded to f in the as f portion. The finally clause ensures that close() is called whether or not there was an exception inside the with.



Since the two approaches appear the same, we should follow the Zen of Python to decide when to use which. The class approach might be better if there’s a considerable amount of logic to encapsulate. The function approach might be better for situations where we’re dealing with a simple action.
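As an illustration of the “simple action” case, here is a minimal sketch (the name working_directory is made up) of a generator-based context manager that temporarily changes the current directory:

import os
from contextlib import contextmanager

@contextmanager
def working_directory(path):
    previous = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        # restore the original directory even if an exception occurred
        os.chdir(previous)

with working_directory('/tmp'):   # '/tmp' is just an example path
    print(os.getcwd())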




References:

https://docs.python-guide.org/writing/structure/#the-actual-module


Python Decorators

The Python language provides a simple yet powerful syntax called ‘decorators’. A decorator is a function or a class that wraps (or decorates) a function or a method. The ‘decorated’ function or method will replace the original ‘undecorated’ function or method. Because functions are first-class objects in Python, this can be done ‘manually’, but using the @decorator syntax is clearer and thus preferred.


def foo():
    # do something
    pass


def decorator(func):

    # manipulate func

    return func


foo = decorator(foo)  # Manually decorate


@decorator

def bar():
    # Do something
    pass

# bar() is decorated


This mechanism is useful for separating concerns and avoiding external unrelated logic ‘polluting’ the core logic of the function or method. A good example of a piece of functionality that is better handled with decoration is memoization or caching: you want to store the results of an expensive function in a table and use them directly instead of recomputing them when they have already been computed. This is clearly not part of the function logic.
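A minimal sketch of such a memoizing decorator (the names memoize and fib are illustrative; in practice functools.lru_cache already provides this):

import functools

def memoize(func):
    cache = {}

    @functools.wraps(func)
    def wrapper(*args):
        # compute the result once per argument tuple, then reuse it
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]

    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # fast, because intermediate results are cached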



Decorators are a very powerful and useful tool in Python since they allow programmers to modify the behavior of a function or class. Decorators allow us to wrap another function in order to extend the behavior of the wrapped function, without permanently modifying it.



Decorators seem to be very useful when doing some kind of profiling:


# importing libraries 

import time 

import math 

  

# decorator to calculate duration 

# taken by any function. 

def calculate_time(func): 

      

    # inner1 accepts *args and **kwargs so that any
    # arguments of the decorated function are passed through.

    def inner1(*args, **kwargs): 

  

        # storing time before function execution 

        begin = time.time() 

          

        func(*args, **kwargs) 

  

        # storing time after function execution 

        end = time.time() 

        print("Total time taken in : ", func.__name__, end - begin) 

  

    return inner1 

  

  

  

# this can be added to any function present, 

# in this case to calculate a factorial 

@calculate_time

def factorial(num): 

  

    # sleep 2 seconds because the factorial itself takes very little time,
    # so that you can see the actual difference

    time.sleep(2) 

    print(math.factorial(num)) 

  

# calling the function. 

factorial(10) 

Output:


3628800

Total time taken in :  factorial 2.0061802864074707
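One possible refinement of the decorator above (a sketch, not part of the referenced article) is to preserve the wrapped function's return value and metadata with functools.wraps:

import time
import functools

def calculate_time(func):

    @functools.wraps(func)              # keeps func.__name__, docstring, etc.
    def inner(*args, **kwargs):
        begin = time.time()
        result = func(*args, **kwargs)  # keep the return value
        end = time.time()
        print("Total time taken in : ", func.__name__, end - begin)
        return result

    return inner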





References:

https://docs.python-guide.org/writing/structure/#the-actual-module

https://www.geeksforgeeks.org/decorators-in-python/


Python OOP

In Python, everything is an object, and can be handled as such. This is what is meant when we say, for example, that functions are first-class objects. Functions, classes, strings, and even types are objects in Python: like any object, they have a type, they can be passed as function arguments, and they may have methods and properties. In this understanding, Python can be considered as an object-oriented language.



However, unlike Java, Python does not impose object-oriented programming as the main programming paradigm. It is perfectly viable for a Python project to not be object-oriented, i.e. to use no or very few class definitions, class inheritance, or any other mechanisms that are specific to object-oriented programming languages.


Moreover, as seen in the modules section, the way Python handles modules and namespaces gives the developer a natural way to ensure the encapsulation and separation of abstraction layers, both being the most common reasons to use object-orientation. Therefore, Python programmers have more latitude to not use object-orientation when it is not required by the business model.


There are some reasons to avoid unnecessary object-orientation. Defining custom classes is useful when we want to glue some state and some functionality together. The problem, as pointed out by the discussions about functional programming, comes from the “state” part of the equation.




In some architectures, typically web applications, multiple instances of Python processes are spawned as a response to external requests that happen simultaneously. In this case, holding some state in instantiated objects, which means keeping some static information about the world, is prone to concurrency problems or race conditions. Sometimes, between the initialization of the state of an object (usually done with the __init__() method) and the actual use of the object state through one of its methods, the world may have changed, and the retained state may be outdated. For example, a request may load an item in memory and mark it as read by a user. If another request requires the deletion of this item at the same time, the deletion may actually occur after the first process loaded the item, and then we have to mark a deleted object as read.


This and other issues led to the idea that using stateless functions is a better programming paradigm.


Another way to say the same thing is to suggest using functions and procedures with as few implicit contexts and side-effects as possible. A function’s implicit context is made up of any of the global variables or items in the persistence layer that are accessed from within the function. Side-effects are the changes that a function makes to its implicit context. If a function saves or deletes data in a global variable or in the persistence layer, it is said to have a side-effect.


Carefully isolating functions with context and side-effects from functions with logic (called pure functions) allows the following benefits:


Pure functions are deterministic: given a fixed input, the output will always be the same.

Pure functions are much easier to change or replace if they need to be refactored or optimized.

Pure functions are easier to test with unit tests: There is less need for complex context setup and data cleaning afterwards.

Pure functions are easier to manipulate, decorate, and pass around.



In summary, pure functions are more efficient building blocks than classes and objects for some architectures because they have no context or side-effects.
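A small sketch of the difference (the names are illustrative):

# Impure: depends on and modifies implicit context (a global variable).
total = 0

def add_to_total(value):
    global total
    total += value        # side-effect on global state
    return total

# Pure: the output depends only on the inputs, with no side-effects.
def add(a, b):
    return a + b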


Obviously, object-orientation is useful and even necessary in many cases, for example when developing graphical desktop applications or games, where the things that are manipulated (windows, buttons, avatars, vehicles) have a relatively long life of their own in the computer’s memory.


References:

https://docs.python-guide.org/writing/structure/#the-actual-module

Python Packages

Python provides a very straightforward packaging system, which is simply an extension of the module mechanism to a directory.


Any directory with an __init__.py file is considered a Python package. The different modules in the package are imported in a similar manner as plain modules, but with a special behavior for the __init__.py file, which is used to gather all package-wide definitions.


A file modu.py in the directory pack/ is imported with the statement import pack.modu. This statement will look for __init__.py file in pack and execute all of its top-level statements. Then it will look for a file named pack/modu.py and execute all of its top-level statements. After these operations, any variable, function, or class defined in modu.py is available in the pack.modu namespace.
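As a sketch (pack and modu follow the text above; helper is a hypothetical function used only for illustration), the layout and import look like this:

# Directory layout (sketch):
#
#   pack/
#       __init__.py    # executed first when 'import pack.modu' runs
#       modu.py        # defines, say, a function helper()
#
# In some other file:

import pack.modu

pack.modu.helper()   # anything defined at the top level of modu.py is reachable here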


A commonly seen issue is adding too much code to __init__.py files. When the project complexity grows, there may be sub-packages and sub-sub-packages in a deep directory structure. In this case, importing a single item from a sub-sub-package will require executing all __init__.py files met while traversing the tree.



Leaving an __init__.py file empty is considered normal and even good practice, if the package’s modules and sub-packages do not need to share any code.


Lastly, a convenient syntax is available for importing deeply nested packages: import very.deep.module as mod. This allows you to use mod in place of the verbose repetition of very.deep.module.




References:

https://docs.python-guide.org/writing/structure/#the-actual-module

Python Modules

Python modules are one of the main abstraction layers available and probably the most natural one. Abstraction layers allow separating code into parts holding related data and functionality.


For example, a layer of a project can handle interfacing with user actions, while another would handle low-level manipulation of data. The most natural way to separate these two layers is to regroup all interfacing functionality in one file, and all low-level operations in another file. In this case, the interface file needs to import the low-level file. This is done with the import and from ... import statements.


As soon as you use import statements, you use modules. These can be either built-in modules such as os and sys, third-party modules you have installed in your environment, or your project’s internal modules.


To keep in line with the style guide, keep module names short, lowercase, and be sure to avoid using special symbols like the dot (.) or question mark (?). A file name like my.spam.py is the one you should avoid! Naming this way will interfere with the way Python looks for modules.


In the case of my.spam.py Python expects to find a spam.py file in a folder named my which is not the case. There is an example of how the dot notation should be used in the Python docs.


If you like, you could name your module my_spam.py, but even our trusty friend the underscore should not be seen that often in module names. However, using other characters (spaces or hyphens) in module names will prevent importing (- is the subtract operator). Try to keep module names short so there is no need to separate words. And, most of all, don’t namespace with underscores; use submodules instead.



# OK
import library.plugin.foo

# not OK
import library.foo_plugin


Aside from some naming restrictions, nothing special is required for a Python file to be a module. But you need to understand the import mechanism in order to use this concept properly and avoid some issues.


Concretely, the import modu statement will look for the proper file, which is modu.py in the same directory as the caller, if it exists. If it is not found, the Python interpreter will search for modu.py in the “path” recursively and raise an ImportError exception when it is not found.


When modu.py is found, the Python interpreter will execute the module in an isolated scope. Any top-level statement in modu.py will be executed, including other imports if any. Function and class definitions are stored in the module’s dictionary.



Then, the module’s variables, functions, and classes will be available to the caller through the module’s namespace, a central concept in programming that is particularly helpful and powerful in Python.



In many languages, an include file directive is used by the preprocessor to take all code found in the file and ‘copy’ it into the caller’s code. It is different in Python: the included code is isolated in a module namespace, which means that you generally don’t have to worry that the included code could have unwanted effects, e.g. override an existing function with the same name.



It is possible to simulate the more standard behavior by using a special syntax of the import statement: from modu import *. This is generally considered bad practice. Using import * makes the code harder to read and makes dependencies less compartmentalized.


Using from modu import func is a way to pinpoint the function you want to import and put it in the local namespace. While much less harmful than import * because it shows explicitly what is imported in the local namespace, its only advantage over a simpler import modu is that it will save a little typing.


Very bad


[...]

from modu import *

[...]

x = sqrt(4)  # Is sqrt part of modu? A builtin? Defined above?

Better


from modu import sqrt

[...]

x = sqrt(4)  # sqrt may be part of modu, if not redefined in between

Best


import modu

[...]

x = modu.sqrt(4)  # sqrt is visibly part of modu's namespace




References:

https://docs.python-guide.org/writing/structure/#the-actual-module


React Popout example


This is a very good component for showing a popup window with an HTML URL loaded in it.


It is easy and simple to use:


npm install react-popout --save


Demo can be seen here http://jake.ginnivan.net/react-popout



import Popout from 'react-popout'


<Popout url='popout.html' title='Window title' onClosing={this.popupClosed}>

  <div>Popped out content!</div>

</Popout>




References:

https://www.npmjs.com/package/react-popout


How to structure python programs - Program Narrative

 Imagine the following high-level logical flow for a simple report generator program:


Read input data

Perform calculations

Write report

Notice how each stage (after the first one) depends on some byproduct or output of its predecessor:


Read input data

Perform calculations (based on input data)

Write report (based on calculated report data)



This is just about structuring the code in such a way that the functions mirror this flow, as below:



def read_input_file(filename):

    pass


def generate_report(data):

    pass


def write_report(report):

    pass


data = read_input_file('data.csv')

report = generate_report(data)

write_report(report)




References:

https://dbader.org/blog/how-to-structure-python-programs

Saturday, November 28, 2020

Node JS: using the node prompt to check the availability of a package

 alfred@alfred-laptop:~$ node

> require('wrench')

{ rmdirSyncRecursive: [Function],

  copyDirSyncRecursive: [Function],

  chmodSyncRecursive: [Function] }

>


The output just above is good enough to confirm that the package is installed and available.



References:

https://stackoverflow.com/questions/5594032/npm-module-installed-but-not-available

How to Start express and WebSocket server on same port

This is an excellent article. 

First, the http-server.js - a typical express app, except that we do not start the server with app.listen():

'use strict';


let fs = require('fs');

let express = require('express');

let app = express();

let bodyParser = require('body-parser');


app.use(bodyParser.json());


// Let's create the regular HTTP request and response

app.get('/', function(req, res) {


  console.log('Get index');

  fs.createReadStream('./index.html')

  .pipe(res);

});


app.post('/', function(req, res) {


  let message = req.body.message;

  console.log('Regular POST message: ', message);

  return res.json({


    answer: 42

  });

});


module.exports = app;


Now, the ws-server.js example, where we create the WSS server from a node native http.createServer(). Now, note that this is where we import the app, and give this native http.createServer the app instance to use.


Start the app with PORT=8080 node ws-server.js :


(Note that you're launching the second, socket-related file (ws-server), not the first, http-related file (http-server).)



'use strict';


let WSServer = require('ws').Server;

let server = require('http').createServer();

let app = require('./http-server');


// Create web socket server on top of a regular http server

let wss = new WSServer({


  server: server

});


// Also mount the app here

server.on('request', app);


wss.on('connection', function connection(ws) {

 

  ws.on('message', function incoming(message) {

    

    console.log(`received: ${message}`);

    

    ws.send(JSON.stringify({


      answer: 42

    }));

  });

});



server.listen(process.env.PORT, function() {


  console.log(`http/ws server listening on ${process.env.PORT}`);

});



Finally, this sample index.html will work by creating both a POST and a Socket "request" and display the response:


<html>

<head>

  <title>WS example</title>

</head>


<body>

  <h2>Socket message response: </h2>

  <pre id="response"></pre>

  <hr/>

  <h2>POST message response: </h2>

  <pre id="post-response"></pre>

  <script>


  // Extremely simplified here, no error handling or anything

document.body.onload = function() {


    'use strict';


  // First the socket request

  function socketExample() {

    console.log('Creating socket');

    let socket = new WebSocket('ws://localhost:8080/');

    socket.onopen = function() {


      console.log('Socket open.');

      socket.send(JSON.stringify({message: 'What is the meaning of life, the universe and everything?'}));

      console.log('Message sent.')

    };

    socket.onmessage = function(message) {


      console.log('Socket server message', message);

      let data = JSON.parse(message.data);

      document.getElementById('response').innerHTML = JSON.stringify(data, null, 2);

    };

  }


  // Now the simple POST demo

  function postExample() {


    console.log('Creating regular POST message');

  

    fetch('/', {  

      method: 'post',  

      headers: {  

        "Content-type": "application/json"  

      },  

      body: JSON.stringify({message: 'What is the meaning of post-life, the universe and everything?'})  

    })

    .then(response => response.json())  

    .then(function (data) {  

    

      console.log('POST response:', data);

      document.getElementById('post-response').innerHTML = JSON.stringify(data, null, 2);   

    })  

    .catch(function (error) {  

      console.log('Request failed', error);  

    });   

  }


  // Call them both;


  socketExample();

  postExample();

}

  </script>

</body>

</html>




References:

https://stackoverflow.com/questions/34808925/express-and-websocket-listening-on-the-same-port 

Friday, November 27, 2020

Pandas Inner and outer joins

If there are two DataFrames, then an inner merge will result in only the matching rows, while an outer merge will also give the non-matching ones. If the columns are different in these two DataFrames, then the missing columns for the DataFrame with fewer columns will be filled with NaN.

import pandas as pd

data = {'Name':['Jai', 'Princi', 'Gaurav', 'Anuj'], 

        'Age':[27, 24, 22, 32], 

        'Address':['Delhi', 'Kanpur', 'Allahabad', 'Kannauj'], 

        'Qualification':['Msc', 'MA', 'MCA', 'Phd']} 

  

# Convert the dictionary into DataFrame  

df = pd.DataFrame(data) 


data2 = {'Name':['Jai', 'Princi', 'Gaurav', 'Anuj', 'Retheesh'], 

        'Score':[2, 24, 22, 32, 50], 

        } 

  

# Convert the dictionary into DataFrame  

df2 = pd.DataFrame(data2) 

  

# select two columns from second data frame

df2 = df2[['Name', 'Score']] 


df_merged = df2.merge(df, how = 'outer', on='Name')

print('merged df')

print(df_merged)



Below is the output of the outer merge:


merged df

       Name  Score   Age    Address Qualification

0       Jai      2  27.0      Delhi           Msc

1    Princi     24  24.0     Kanpur            MA

2    Gaurav     22  22.0  Allahabad           MCA

3      Anuj     32  32.0    Kannauj           Phd

4  Retheesh     50   NaN        NaN           NaN
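For comparison, an inner merge on the same two DataFrames keeps only the matching rows (a quick sketch):

# inner merge keeps only the rows whose 'Name' appears in both DataFrames
df_inner = df2.merge(df, how='inner', on='Name')
print(df_inner)
# 'Retheesh' is dropped here because that Name has no match in df,
# so no NaN-filled columns appear in the result.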



References:

https://stackoverflow.com/questions/45175060/merge-dataframes-with-matching-values-from-two-different-columns-pandas

Thursday, November 26, 2020

Excel nesting if conditions

Syntax

The syntax for the nesting the IF function is:


IF( condition1, value_if_true1, IF( condition2, value_if_true2, value_if_false2 ))

This would be equivalent to the following IF THEN ELSE statement:


IF condition1 THEN

   value_if_true1

ELSEIF condition2 THEN

   value_if_true2

ELSE

   value_if_false2

END IF

Parameters or Arguments

condition

The value that you want to test.

value_if_true

The value that is returned if condition evaluates to TRUE.

value_if_false

The value that is returned if condition evaluates to FALSE.

Note

This Nested IF function syntax demonstrates how to nest two IF functions. You can nest up to 7 IF functions.



References:

https://www.techonthenet.com/excel/formulas/if_nested.php


Wednesday, November 25, 2020

Neo4J What is MERGE query

MERGE either matches existing nodes and binds them, or it creates new data and binds that. It’s like a combination of MATCH and CREATE that additionally allows you to specify what happens if the data was matched or created.


For example, you can specify that the graph must contain a node for a user with a certain name. If there isn’t a node with the correct name, a new node will be created and its name property set.


The last part of MERGE is the ON CREATE and ON MATCH. These allow a query to express additional changes to the properties of a node or relationship, depending on if the element was MATCH -ed in the database or if it was CREATE -ed.




Some examples 


MATCH (person:Person)

MERGE (city:City { name: person.bornIn })

RETURN person.name, person.bornIn, city


Assuming there were 3 Person nodes with different bornIn locations, this query will create a City node for each distinct location.


References:

https://neo4j.com/docs/cypher-manual/current/clauses/merge/#query-merge-node-derived


Monday, November 23, 2020

What is Trunks Integrated Record Keeping System (TIRKS)

 Trunks Integrated Record Keeping System (TIRKS) is an operations support system from Telcordia Technologies (since acquired by Ericsson, Inc.), originally developed by the Bell System during the late 1970s. It was developed for inventory and order control management of interoffice trunk circuits that interconnect telephone switches. It grew to encompass and automate many functions required to build the ever-expanding data transport network.


Supporting circuits from POTS and 150 baud modems up through T1, DS3, SONET and DWDM, it continues to evolve today, and unlike many software technologies today, provides complete backward compatibility. TIRKS was recently updated with a Java GUI, XML API, and WORD Sketch, which provides graphical views of the TIRKS Work Order Record and Details Document as well as SONET and DWDM networks. When TIRKS became a registered trademark in 1987, it became technically improper to use it as an acronym. TIRKS was one of many OSS technologies transferred to Bell Communications Research as part of the Modification of Final Judgment related to the AT&T divestiture on January 1, 1984. In the 1990s, the Facility and Equipment Planning System (FEPS) and Planning Workstation System (PWS) products were incorporated into the Telcordia TIRKS CE System. TIRKS is still in use at AT&T, Verizon, CenturyLink, and Cincinnati Bell Telephone.



References:

https://en.wikipedia.org/wiki/Trunks_Integrated_Record_Keeping_System


What is TL1

Transaction Language 1 (TL1) is a widely used management protocol in telecommunications. It is a cross-vendor, cross-technology man-machine language, and is widely used to manage optical (SONET) and broadband access infrastructure in North America. TL1 is used in the input and output messages that pass between Operations Support Systems (OSSs) and Network Elements (NEs). Operations domains such as surveillance, memory administration, and access and testing define and use TL1 messages to accomplish specific functions between the OS and the NE


TL1 is defined in Telcordia Technologies (formerly Bellcore) Generic Requirements document GR-831-CORE


TL1 was developed by Bellcore in 1984 as a standard man-machine language to manage network elements for the Regional Bell Operating Companies (RBOCs). It is based on Z.300 series man machine language standards. TL1 was designed as a standard protocol readable by machines as well as humans to replace the diverse ASCII based protocols used by different Network Element (NE) vendors. It is extensible to incorporate vendor specific commands.


Telcordia OSSs such as NMA (Network Monitoring and Analysis) used TL1 as the element management (EMS) protocol. This drove network element vendors to implement TL1 in their devices.


The TL1 language consists of a set of messages. There are 4 kinds of messages:


Input message - This is the command sent by the user or the OSS.

Output/Response message - This is the reply sent by the NE (Network Element) in response to an input message.

Acknowledgment message - This is an acknowledgment of the receipt of a TL1 input message and is sent if the response message will be delayed by more than 2 seconds.

Autonomous message - These are asynchronous messages (usually events or alarms) sent by the NE.



TL1 message structure

TL1 messages follow a fixed structure, and all commands must conform to it. However, the commands themselves are extensible and new commands can be added by NE vendors.


These are some of the message components:


Target identifier (TID) & Source identifier (SID) - TID/SID is a unique name assigned to each NE. TID is used to route the message to an NE, SID is used to identify the source of an autonomous message.

Access identifier (AID) - AID identifies an entity within an NE.

Correlation tag (CTAG) & Autonomous correlation tag (ATAG) - CTAG/ATAG are numbers used to correlate messages.




References :

https://en.wikipedia.org/wiki/Transaction_Language_1


What is Server Cage

A server cage is a specific kind of container for physical server hardware.

Like the traditional cage, server cages have open systems composed of metal bars or similar structures, where light and air can move through the enclosure, but where the cage provides effective security for what's inside.



With larger data centers or other server operations, businesses can put server cage security in place for several reasons. For an in-house server system, business leaders may want extra security for high traffic areas.

Another popular use of server cages is when a single data center handles server operations for multiple clients. Here, it can be very useful to put each client’s separate server hardware in a different cage to prevent unauthorized crossover.


For instance, technicians serving a particular client structure can get a key to just one server cage, instead of being able to access the entire hardware setup in the server room (this can protect the worker, as well as the company, in a case where some kind of server emergency requires a careful look at access). Server cages can also help to provide better documentation of who is accessing server systems for maintenance, repair, or other purposes.





References:

https://www.techopedia.com/definition/153/server-cage

Friday, November 20, 2020

React Flow chart API

React Flow is an awesome library for including flow charts inside an application.

React Flow is a library for building node-based applications. These can be simple static diagrams or complex node-based editors. You can implement custom node types and edge types, and it comes with components like a mini-map and graph controls.

Easy to use: Seamless zooming & panning behaviour and single and multi-selections of elements

Customizable: Different node and edge types and support for custom nodes with multiple handles and custom edges

Fast rendering: Only nodes that have changed are re-rendered and only those that are in the viewport are displayed

Utils: Snap-to-grid and graph helper functions

Components: Background, Minimap and Controls

Reliable: Written in Typescript and tested with cypress

references:

https://reactflow.dev/docs/api/nodes/


React JS - Speech to text

A React hook that converts speech from the microphone to text and makes it available to your React components.

Under the hood, it uses the Web Speech API. Note that browser support for this API is currently limited, but it seems to be working well in most of the browsers, so this is a good library for most purposes.

A basic example is below:

import React from 'react'

import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition'


const Dictaphone = () => {

  const { transcript, resetTranscript } = useSpeechRecognition()


  if (!SpeechRecognition.browserSupportsSpeechRecognition()) {

    return null

  }


  return (

    <div>

      <button onClick={SpeechRecognition.startListening}>Start</button>

      <button onClick={SpeechRecognition.stopListening}>Stop</button>

      <button onClick={resetTranscript}>Reset</button>

      <p>{transcript}</p>

    </div>

  )

}

export default Dictaphone



references:

https://www.npmjs.com/package/react-speech-recognition


What is SNMP

Simple Network Management Protocol (SNMP) is a way for different devices on a network to share information with one another. It allows devices to communicate even if the devices are different hardware and run different software.


Without a protocol like SNMP, there would be no way for network management tools to identify devices, monitor network performance, keep track of changes to the network, or determine the status of network devices in real time.


SNMP architecture


SNMP has a simple architecture based on a client-server model. The servers, called managers, collect and process information about devices on the network.


The clients, called agents, are any type of device or device component connected to the network. They can include not just computers but also network switches, phones, printers, and so on. Some devices may have multiple device components. For example, a laptop typically contains a wired as well as a wireless network interface.



SNMP data hierarchy


To provide flexibility and extensibility, SNMP doesn’t require network devices to exchange data in a rigid format of fixed size. Instead, it uses a tree-like format, under which data is always available for managers to collect.


The data tree consists of multiple tables (or branches, if you want to stick with the tree metaphor), which are called Management Information Bases, or MIBs. MIBs group together particular types of devices or device components. Each MIB has a unique identifying number, as well as an identifying string. Numbers and strings can be used interchangeably (just like IP addresses and hostnames).


Each MIB consists of one or more nodes, which represent individual devices or device components on the network. In turn, each node has a unique Object Identifier, or OID. The OID for a given node is determined by the identifier of the MIB on which it exists combined with the node’s identifier within its MIB.


This means OIDs take the form of a set of numbers or strings (again, you can use these interchangeably). An example is 1.3.6.1.4.868.2.4.1.2.1.1.1.3.3562.3.


Written with strings, that OID would translate to:


iso.org.dod.internet.private.transition.products.chassis.card.slotCps­.

cpsSlotSummary.cpsModuleTable.cpsModuleEntry.cpsModuleModel.3562.3.


Using the OID, a manager can query an agent to find information about a device on the network. For example, if the manager wants to know whether an interface is up, it would first query the interface MIB (called the IF-MIB), then check the OID value that reflects operational status to determine whether the interface is up.


SNMP versions


The first version of SNMP—SNMPv1—offers weak security features. Under SNMPv1, managers can authenticate to agents without encryption when requesting information. That means anyone with access to the network could run “sniffing” software to intercept information about the network. It also means an unauthorized device can easily pretend to be a legitimate manager when controlling the network.


As well, SNMPv1 uses certain default credentials, which admins don’t always update, making it easy for unauthorized parties to gain access to sensitive information about the network. Unfortunately, SNMPv1 is still used on a relatively wide basis today because some networks haven’t yet updated.


SNMPv2, which appeared in 1993, offered some security enhancements but it was supplanted in 1998 by SNMPv3, which remains the most recent version of the protocol and the most secure.


SNMPv3 makes data encryption possible. It also allows admins to specify different authentication requirements on a granular basis for managers and agents. This prevents unauthorized authentication and can optionally be used to require encryption for data transfers.


The bottom line is that, while the security issues in SNMPv1 earned SNMP a bad name in some circles, SNMPv2 and especially SNMPv3 solved those problems. The newer versions of SNMP provide an up-to-date, secure way to monitor the network.



References:

https://www.auvik.com/franklyit/blog/network-basics-what-is-snmp/


Thursday, November 19, 2020

Campus network topology

A typical campus network comprises static Access Points (APs), a set of switches, and gateway routers. Each AP serves multiple mobile users and connects them directly or through multi-hop wireless routing to the wired backbone.




references:

https://www.researchgate.net/figure/Typical-campus-network-topology_fig1_284204281

What is Closed Loop Operation

Closed-loop Systems use feedback where a portion of the output signal is fed back to the input to reduce errors and improve stability

Systems in which the output quantity has no effect upon the input to the control process are called open-loop control systems; open-loop systems are just that, open-ended non-feedback systems.

But the goal of any electrical or electronic control system is to measure, monitor, and control a process and one way in which we can accurately control the process is by monitoring its output and “feeding” some of it back to compare the actual output with the desired output so as to reduce the error and if disturbed, bring the output of the system back to the original or desired response.




References:

https://www.electronics-tutorials.ws/systems/closed-loop-system.html

What is a Northbound Interface?

In computer networking and computer architecture, a northbound interface of a component is an interface that allows the component to communicate with a higher level component, using the latter component's southbound interface. The northbound interface conceptualizes the lower level details (e.g., data or functions) used by, or in, the component, allowing the component to interface with higher level layers.[1]


In architectural overviews, the northbound interface is normally drawn at the top of the component it is defined in; hence the name northbound interface. A southbound interface decomposes concepts in the technical details, mostly specific to a single component of the architecture. Southbound interfaces are drawn at the bottom of an architectural overview.

Typical use


A northbound interface is typically an output-only interface (as opposed to one that accepts user input) found in carrier-grade network and telecommunications network elements. The languages or protocols commonly used include SNMP and TL1. For example, a device that is capable of sending out syslog messages but that is not configurable by the user is said to implement a northbound interface. Other examples include SMASH, IPMI, WSMAN, and SOAP.


The term is also important for software-defined networking (SDN), to facilitate communication between the physical devices, the SDN software and applications running on the network



References:

https://en.wikipedia.org/wiki/Northbound_interface


What is frame layout

FrameLayout is the simplest implementation of ViewGroup. Child views are drawn in a stack, where the latest added view is drawn on top. Usually you can use one of the following approaches or combine them:

Add a single view hierarchy into FrameLayout

Add multiple children and use android:layout_gravity to navigate them




references:

https://www.tutorialspoint.com/android/android_frame_layout.htm


Wednesday, November 18, 2020

Python Virtual Environment - an Insight

Python applications will often use packages and modules that don’t come as part of the standard library. Applications will sometimes need a specific version of a library, because the application may require that a particular bug has been fixed or the application may be written using an obsolete version of the library’s interface.


This means it may not be possible for one Python installation to meet the requirements of every application. If application A needs version 1.0 of a particular module but application B needs version 2.0, then the requirements are in conflict and installing either version 1.0 or 2.0 will leave one application unable to run.


The solution for this problem is to create a virtual environment, a self-contained directory tree that contains a Python installation for a particular version of Python, plus a number of additional packages.


Different applications can then use different virtual environments. To resolve the earlier example of conflicting requirements, application A can have its own virtual environment with version 1.0 installed while application B has another virtual environment with version 2.0. If application B requires a library be upgraded to version 3.0, this will not affect application A’s environment.




References:

https://docs.python.org/3/tutorial/venv.html


Monday, November 16, 2020

Python how to quickly activate the virtual environment

The venv module provides support for creating lightweight “virtual environments” with their own site directories, optionally isolated from system site directories. Each virtual environment has its own Python binary (which matches the version of the binary that was used to create this environment) and can have its own independent set of installed Python packages in its site directories.


python3 -m venv /path/to/new/virtual/environment


Running this command creates the target directory (creating any parent directories that don’t exist already) and places a pyvenv.cfg file in it with a home key pointing to the Python installation from which the command was run (a common name for the target directory is .venv). It also creates a bin (or Scripts on Windows) subdirectory containing a copy/symlink of the Python binary/binaries (as appropriate for the platform or arguments used at environment creation time). It also creates an (initially empty) lib/pythonX.Y/site-packages subdirectory (on Windows, this is Lib\site-packages). If an existing directory is specified, it will be re-used.


References 

https://docs.python.org/3/library/venv.html


ThreeJS experiments with two scenes on top of another

In this example, there were two scenes, two cameras. 


The main renderer is added using the code below  

document.body.appendChild(renderer.domElement);


The HUD one is not added as a DOM element; instead it is rendered using a canvas element as a texture.

When the underlying canvas content gets changed, the renderer re-draws it.


var hudCanvas = document.createElement('canvas');

var hudBitmap = hudCanvas.getContext('2d');


hudBitmap.beginPath();

hudBitmap.rect(20, 20, 150, 100);

hudBitmap.fillStyle = "red";

hudBitmap.fill();



var cameraHUD = new THREE.OrthographicCamera(-width / 2, width / 2, height / 2, -height / 2, 0, 30);


// Create also a custom scene for HUD.

var sceneHUD = new THREE.Scene();


// Create texture from rendered graphics.

var hudTexture = new THREE.Texture(hudCanvas)

hudTexture.needsUpdate = true;


// Create HUD material.

var material = new THREE.MeshBasicMaterial({ map: hudTexture });

material.transparent = true;


// Create plane to render the HUD. This plane fill the whole screen.

var planeGeometry = new THREE.PlaneGeometry(width, height);

var plane = new THREE.Mesh(planeGeometry, material);

sceneHUD.add(plane);




Now we can create a div in the HTML and ask the renderer to draw into that div. This is done as follows.

First, a div is created like this:


<style>

#container {

background-color: #ff0000;

width: 600px;

height: 600px;

border: 1px solid black;

}

</style>


<div id='container'></div>


And the code is changed as below:


var container = document.getElementById('container');


var width = container.offsetWidth;

var height = container.offsetHeight;


container.appendChild(renderer.domElement);


This was displaying the item in the area given in the div. 


Also, the HUD was displaying at the centre as well.


So the main factor is that if the main scene is moving in the animation loop, the canvas element has to be re-rendered in the animate loop as well.



References:

http://localhost/threejs/examples/webgl_sprites2_modified.html

Sunday, November 15, 2020

Create React App - Using Public folder

The public folder contains the HTML file so you can tweak it, for example, to set the page title. The <script> tag with the compiled code will be added to it automatically during the build process

Adding Assets Outside of the Module System

We can also add other assets to the public folder.

Note that we normally encourage you to import assets in JavaScript files instead. For example, see the sections on adding a stylesheet and adding images and fonts. This mechanism provides a number of benefits:

Scripts and stylesheets get minified and bundled together to avoid extra network requests.

Missing files cause compilation errors instead of 404 errors for your users.

Result filenames include content hashes so you don’t need to worry about browsers caching their old versions.

However there is an escape hatch that you can use to add an asset outside of the module system.

If you put a file into the public folder, it will not be processed by webpack. Instead it will be copied into the build folder untouched. To reference assets in the public folder, you need to use an environment variable called PUBLIC_URL.

Inside index.html, you can use it like this:


<link rel="icon" href="%PUBLIC_URL%/favicon.ico" />

Only files inside the public folder will be accessible by %PUBLIC_URL% prefix. If you need to use a file from src or node_modules, you’ll have to copy it there to explicitly specify your intention to make this file a part of the build.

When you run npm run build, Create React App will substitute %PUBLIC_URL% with a correct absolute path so your project works even if you use client-side routing or host it at a non-root URL.

In JavaScript code, you can use process.env.PUBLIC_URL for similar purposes:

render() {

  // Note: this is an escape hatch and should be used sparingly!

  // Normally we recommend using `import` for getting asset URLs

  // as described in “Adding Images and Fonts” above this section.

  return <img src={process.env.PUBLIC_URL + '/img/logo.png'} />;

}



Keep in mind the downsides of this approach:


None of the files in public folder get post-processed or minified.

Missing files will not be called at compilation time, and will cause 404 errors for your users.

Result filenames won’t include content hashes so you’ll need to add query arguments or rename them every time they change.



When to Use the public Folder

Normally we recommend importing stylesheets, images, and fonts from JavaScript. The public folder is useful as a workaround for a number of less common cases:


You need a file with a specific name in the build output, such as manifest.webmanifest.

You have thousands of images and need to dynamically reference their paths.

You want to include a small script like pace.js outside of the bundled code.

Some library may be incompatible with webpack and you have no other option but to include it as a <script> tag.

Note that if you add a <script> that declares global variables, you should read the topic Using Global Variables in the next section which explains how to reference them.




references:

https://create-react-app.dev/docs/using-the-public-folder

Android Chip component

 A Chip is a component that can represent input, filter, choice or action of a user

But this required the theme to be inherited from a Material theme, else it was giving this error:


android.view.InflateException: Binary XML file line #142: Binary XML file line #142: Error inflating class com.google.android.material.chip.Chip


Adjusting the style as below made this error go away:


<style name="AppTheme" parent="Theme.MaterialComponents.Light.NoActionBar">

        <!-- Customize your theme here. -->

        <item name="colorPrimary">@color/colorPrimary</item>

        <item name="colorPrimaryDark">@color/colorPrimaryDark</item>

        <item name="colorAccent">@color/colorAccent</item>

        <item name="chipIconTint">@color/chipIconTint</item>


        <item name="android:colorButtonNormal">@drawable/button_selector</item>

        <item name="colorButtonNormal">@drawable/button_selector</item>

        <item name="android:buttonStyle">@style/FriendlyButtonStyle</item>

    </style>


<com.google.android.material.chip.ChipGroup

            android:id="@+id/chipGroup"

            android:layout_width="wrap_content"

            android:layout_height="wrap_content"

            android:layout_centerInParent="true">


            <com.google.android.material.chip.Chip

                android:id="@+id/chip6"

                style="@style/Widget.MaterialComponents.Chip.Filter"

                android:layout_width="wrap_content"

                android:layout_height="wrap_content"

                android:text="Fire" />


            <com.google.android.material.chip.Chip

                android:id="@+id/chip7"

                style="@style/Widget.MaterialComponents.Chip.Filter"

                android:layout_width="wrap_content"

                android:layout_height="wrap_content"

                android:text="Water" />


            <com.google.android.material.chip.Chip

                android:id="@+id/chip8"

                style="@style/Widget.MaterialComponents.Chip.Filter"

                android:layout_width="wrap_content"

                android:layout_height="wrap_content"

                android:text="Psychic" />

        </com.google.android.material.chip.ChipGroup>



references:

https://medium.com/material-design-in-action/chips-material-components-for-android-46001664a40f#:~:text=A%20Chip%20is%20a%20component,or%20action%20of%20a%20user.


Sails + React: How to download a file

Below is the code on the React side:

import axios from 'axios'

import fileDownload from 'js-file-download'


axios.post(SERVER_URL+ 'schools/search/download', { 'action' : 'download' }, options)

      .then((res) => {

        console.log('res.data is ',res.data);

        fileDownload(res.data, 'search_no_results.csv')

      })
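The options object passed to axios.post is not defined in the snippet above; for fileDownload to receive the raw file contents, the request typically needs a responseType. A possible shape (an assumption, not from the original code):

// Assumed definition of the `options` object referenced above (not from the original post)
const options = {
  responseType: 'blob'   // hand the raw response body straight to fileDownload()
};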




Below is the code on the Sails side:


let file = require('path').resolve('noresults.txt');

        const fs = require('fs');

        if(fs.existsSync('noresults.txt'))

        {

            res.setHeader('Content-disposition', 'attachment; filename=' + 'search_no_results.csv');

            let filestream = fs.createReadStream(file);

            filestream.pipe(res);

        }else{

            res.json({error : "File not Found"});

        }




references:


Mongo DB, subdocument array queries

The requirement is to remove an array element (a subdocument) from a MongoDB collection record when it does not meet some condition.

db.getCollection('class').find( {"hours.0.day":{$eq:"mon-fri"}}).count()

db.getCollection('class').update( {"hours.0.day":{$eq:"mon-fri"}},{

    $pull: {

      "hours": {

        "day": "mon-fri"

      }

    }

  })
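Note that update() only modifies the first matching document unless the multi option is set. To pull the element from every matching record, the same filter and $pull can be used with updateMany (a sketch, assuming MongoDB 3.2 or later):

db.getCollection('class').updateMany(
  { "hours.0.day": { $eq: "mon-fri" } },   // same filter as above
  { $pull: { "hours": { "day": "mon-fri" } } }
)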

references:

https://docs.mongodb.com/manual/reference/method/db.collection.update/


Saturday, November 14, 2020

Rotation in ThreeJS

Rotation requires a little more care than translation or scaling. There are several reasons for this, but the main one is that the order of rotation matters. If we translate or scale an object on the X-axis, Y-axis, and Z-axis, it doesn’t matter which axis goes first.

However, these three rotations may not give the same result:

Rotate around X-axis, then around the Y-axis, then around the Z-axis.

Rotate around Y-axis, then around the X-axis, then around the Z-axis.

Rotate around Z-axis, then around the X-axis, then around the Y-axis.


Representing Rotations: the Euler class

Euler Angles are represented in three.js using the Euler class. As with .position and .scale, an Euler instance is automatically created and given default values when we create a new scene object.


// when we create a mesh...

const mesh = new Mesh();


// ... internally, three.js creates a Euler for us:

mesh.rotation = new Euler();


As with Vector3, there are .x, .y and .z properties and a .set method:


mesh.rotation.x = 2;

mesh.rotation.y = 2;

mesh.rotation.z = 2;


mesh.rotation.set(2, 2, 2);


Once again, we can create Euler instances ourselves:



import { Euler } from 'three';


const euler = new Euler(1, 2, 3);


Euler Rotation Order


By default, three.js will perform rotations around the X-axis, then around the Y-axis, and finally around the Z-axis, in an object’s local space. We can change this using the Euler.order property. The default order is called ‘XYZ’, but ‘YZX’, ‘ZXY’, ‘XZY’, ‘YXZ’ and ‘ZYX’ are also possible.
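For example, switching an object's rotation order is just a matter of setting that property (a small sketch based on the API described above):

// rotate around the Y-axis first, then X, then Z
mesh.rotation.order = 'YXZ';

// or on a standalone Euler instance
euler.order = 'ZYX';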


The Unit of Rotation is Radians
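All of the angles above are given in radians, not degrees, so a quarter turn is Math.PI / 2. If you prefer to think in degrees, three.js provides MathUtils.degToRad for the conversion:

import { MathUtils } from 'three';

mesh.rotation.x = Math.PI / 2;               // 90 degrees, expressed in radians
mesh.rotation.y = MathUtils.degToRad(45);    // convert 45 degrees to radians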

The Other Rotation Class: Quaternions

The second, which we’ll mention only in passing here, is the Quaternion class. Along with the Euler, a Quaternion is created for us and stored in the .quaternion property whenever we create a new scene object such as a mesh:

// when we create a mesh

const mesh = new Mesh();


// ... internally, three.js creates an Euler for us:

mesh.rotation = new Euler();


// .. AND a Quaternion:

mesh.quaternion = new Quaternion();

We can use Quaternions and Euler angles interchangeably. When we change mesh.rotation, the mesh.quaternion property is automatically updated, and vice-versa. This means we can use Euler angles when it suits us, and switch to Quaternions when it suits us.
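A quick way to see this synchronization in action (a small sketch using the mesh from above):

mesh.rotation.x = Math.PI / 2;      // set the rotation using Euler angles...
console.log(mesh.quaternion);       // ...and the quaternion has been updated to match

mesh.quaternion.set(0, 0, 0, 1);    // reset the rotation via the quaternion...
console.log(mesh.rotation.x);       // ...and mesh.rotation reflects the change (0)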

Euler angles have a couple of shortcomings that become apparent when creating animations or doing math involving rotations. In particular, we cannot add two Euler angles together (more famously, they also suffer from something called gimbal lock). Quaternions don’t have these shortcomings. On the other hand, they are harder to use than Euler angles, so for now we’ll stick with the simpler Euler class.

For now, make a note of these two ways to rotate an object:

Using Euler angles, represented using the Euler class and stored in the .rotation property.

Using Quaternions, represented using the Quaternion class and stored in the .quaternion property.


References:

https://discoverthreejs.com/book/first-steps/transformations/

What are Euler angles

The Euler angles are three angles introduced by Leonhard Euler to describe the orientation of a rigid body with respect to a fixed coordinate system.[1]

They can also represent the orientation of a mobile frame of reference in physics or the orientation of a general basis in 3-dimensional linear algebra. Alternative forms were later introduced by Peter Guthrie Tait and George H. Bryan intended for use in aeronautics and engineering.




references:

https://en.wikipedia.org/wiki/Euler_angles

What is a Gimbal Lock

Gimbal lock is the loss of one degree of freedom in a three-dimensional, three-gimbal mechanism that occurs when the axes of two of the three gimbals are driven into a parallel configuration, "locking" the system into rotation in a degenerate two-dimensional space.


The word lock is misleading: no gimbal is restrained. All three gimbals can still rotate freely about their respective axes of suspension. Nevertheless, because of the parallel orientation of two of the gimbals' axes there is no gimbal available to accommodate rotation about one axis.





References:

https://en.wikipedia.org/wiki/Gimbal_lock

Shaders a high level overview

What is a fragment shader?

Shaders are also a set of instructions, but the instructions are executed all at once for every single pixel on the screen. That means the code you write has to behave differently depending on the position of the pixel on the screen. Like a type press, your program will work as a function that receives a position and returns a color, and when it's compiled it will run extraordinarily fast.

Why are shaders fast?

Imagine the CPU of your computer as a big industrial pipe, and every task as something that passes through it - like a factory line. Some tasks are bigger than others, which means they require more time and energy to deal with. We say they require more processing power. Because of the architecture of computers the jobs are forced to run in a series; each job has to be finished one at a time. Modern computers usually have groups of four processors that work like these pipes, completing tasks one after another to keep things running smoothly. Each pipe is also known as a thread.


Video games and other graphic applications require a lot more processing power than other programs. Because of their graphic content they have to do huge numbers of pixel-by-pixel operations. Every single pixel on the screen needs to be computed, and in 3D games geometries and perspectives need to be calculated as well.


Let's go back to our metaphor of the pipes and tasks. Each pixel on the screen represents a simple small task. Individually each pixel task isn't an issue for the CPU, but (and here is the problem) the tiny task has to be done for each pixel on the screen! That means that on an old 800x600 screen, 480,000 pixels have to be processed per frame, which at 30 frames per second means 14,400,000 calculations per second! Yes! That’s a problem big enough to overload a microprocessor. On a modern 2880x1800 retina display running at 60 frames per second that calculation adds up to 311,040,000 calculations per second. How do graphics engineers solve this problem?
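The arithmetic behind those numbers (assuming the 30 frames per second implied above):

// 800 x 600 screen
800 * 600;          // 480,000 pixels per frame
480000 * 30;        // 14,400,000 pixel operations per second at 30 fps

// 2880 x 1800 retina display
2880 * 1800 * 60;   // 311,040,000 pixel operations per second at 60 fps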

This is when parallel processing becomes a good solution. Instead of having a couple of big and powerful microprocessors, or pipes, it is smarter to have lots of tiny microprocessors running in parallel at the same time. That’s what a Graphics Processing Unit (GPU) is.

Picture the tiny microprocessors as a table of pipes, and the data of each pixel as a ping pong ball. 14,400,000 ping pong balls a second can obstruct almost any pipe. But a table of 800x600 tiny pipes receiving 30 waves of 480,000 pixels a second can be handled smoothly. This works the same at higher resolutions - the more parallel hardware you have, the bigger the stream it can manage.

Another “super power” of the GPU is special math functions accelerated via hardware, so complicated math operations are resolved directly by the microchips instead of by software. That means extra fast trigonometrical and matrix operations - as fast as electricity can go.


References:

https://thebookofshaders.com/01/

Thursday, November 12, 2020

Mongo Aggregation Pipeline

Aggregation operations process data records and return computed results. Aggregation operations group values from multiple documents together, and can perform a variety of operations on the grouped data to return a single result. MongoDB provides three ways to perform aggregation: the aggregation pipeline, the map-reduce function, and single purpose aggregation methods.

Aggregation Pipeline

MongoDB’s aggregation framework is modeled on the concept of data processing pipelines. Documents enter a multi-stage pipeline that transforms the documents into an aggregated result. For example:

db.orders.aggregate([

   { $match: { status: "A" } },

   { $group: { _id: "$cust_id", total: { $sum: "$amount" } } }

])

First Stage: The $match stage filters the documents by the status field and passes to the next stage those documents that have status equal to "A".

Second Stage: The $group stage groups the documents by the cust_id field to calculate the sum of the amount for each unique cust_id.

The pipeline provides efficient data aggregation using native operations within MongoDB, and is the preferred method for data aggregation in MongoDB.

The aggregation pipeline can operate on a sharded collection.

The aggregation pipeline can use indexes to improve its performance during some of its stages. In addition, the aggregation pipeline has an internal optimization phase. See Pipeline Operators and Indexes and Aggregation Pipeline Optimization for details.


references:

https://docs.mongodb.com/manual/aggregation/#aggregation-pipeline

Wednesday, November 11, 2020

Mongo DB Aggregate Explained

The data is like this below 


db.orders.insertMany([

   { _id: 1, cust_id: "Ant O. Knee", ord_date: new Date("2020-03-01"), price: 25, items: [ { sku: "oranges", qty: 5, price: 2.5 }, { sku: "apples", qty: 5, price: 2.5 } ], status: "A" },

   { _id: 2, cust_id: "Ant O. Knee", ord_date: new Date("2020-03-08"), price: 70, items: [ { sku: "oranges", qty: 8, price: 2.5 }, { sku: "chocolates", qty: 5, price: 10 } ], status: "A" },

   { _id: 3, cust_id: "Busby Bee", ord_date: new Date("2020-03-08"), price: 50, items: [ { sku: "oranges", qty: 10, price: 2.5 }, { sku: "pears", qty: 10, price: 2.5 } ], status: "A" },

   { _id: 4, cust_id: "Busby Bee", ord_date: new Date("2020-03-18"), price: 25, items: [ { sku: "oranges", qty: 10, price: 2.5 } ], status: "A" },

   { _id: 5, cust_id: "Busby Bee", ord_date: new Date("2020-03-19"), price: 50, items: [ { sku: "chocolates", qty: 5, price: 10 } ], status: "A"},

   { _id: 6, cust_id: "Cam Elot", ord_date: new Date("2020-03-19"), price: 35, items: [ { sku: "carrots", qty: 10, price: 1.0 }, { sku: "apples", qty: 10, price: 2.5 } ], status: "A" },

   { _id: 7, cust_id: "Cam Elot", ord_date: new Date("2020-03-20"), price: 25, items: [ { sku: "oranges", qty: 10, price: 2.5 } ], status: "A" },

   { _id: 8, cust_id: "Don Quis", ord_date: new Date("2020-03-20"), price: 75, items: [ { sku: "chocolates", qty: 5, price: 10 }, { sku: "apples", qty: 10, price: 2.5 } ], status: "A" },

   { _id: 9, cust_id: "Don Quis", ord_date: new Date("2020-03-20"), price: 55, items: [ { sku: "carrots", qty: 5, price: 1.0 }, { sku: "apples", qty: 10, price: 2.5 }, { sku: "oranges", qty: 10, price: 2.5 } ], status: "A" },

   { _id: 10, cust_id: "Don Quis", ord_date: new Date("2020-03-23"), price: 25, items: [ { sku: "oranges", qty: 10, price: 2.5 } ], status: "A" }

])


The query is like this 


db.orders.aggregate( [

   { $match: { ord_date: { $gte: new Date("2020-03-01") } } },

   { $unwind: "$items" },

   { $group: { _id: "$items.sku", qty: { $sum: "$items.qty" }, orders_ids: { $addToSet: "$_id" } }  },

   { $project: { value: { count: { $size: "$orders_ids" }, qty: "$qty", avg: { $divide: [ "$qty", { $size: "$orders_ids" } ] } } } },

   { $merge: { into: "agg_alternative_3", on: "_id", whenMatched: "replace",  whenNotMatched: "insert" } }

] )



The $match stage selects only those documents with ord_date greater than or equal to new Date("2020-03-01").


The $unwind stage breaks down each document by the items array field, outputting one document per array element. For example:


{ "_id" : 1, "cust_id" : "Ant O. Knee", "ord_date" : ISODate("2020-03-01T00:00:00Z"), "price" : 25, "items" : { "sku" : "oranges", "qty" : 5, "price" : 2.5 }, "status" : "A" }

{ "_id" : 1, "cust_id" : "Ant O. Knee", "ord_date" : ISODate("2020-03-01T00:00:00Z"), "price" : 25, "items" : { "sku" : "apples", "qty" : 5, "price" : 2.5 }, "status" : "A" }

{ "_id" : 2, "cust_id" : "Ant O. Knee", "ord_date" : ISODate("2020-03-08T00:00:00Z"), "price" : 70, "items" : { "sku" : "oranges", "qty" : 8, "price" : 2.5 }, "status" : "A" }

{ "_id" : 2, "cust_id" : "Ant O. Knee", "ord_date" : ISODate("2020-03-08T00:00:00Z"), "price" : 70, "items" : { "sku" : "chocolates", "qty" : 5, "price" : 10 }, "status" : "A" }

{ "_id" : 3, "cust_id" : "Busby Bee", "ord_date" : ISODate("2020-03-08T00:00:00Z"), "price" : 50, "items" : { "sku" : "oranges", "qty" : 10, "price" : 2.5 }, "status" : "A" }

{ "_id" : 3, "cust_id" : "Busby Bee", "ord_date" : ISODate("2020-03-08T00:00:00Z"), "price" : 50, "items" : { "sku" : "pears", "qty" : 10, "price" : 2.5 }, "status" : "A" }

{ "_id" : 4, "cust_id" : "Busby Bee", "ord_date" : ISODate("2020-03-18T00:00:00Z"), "price" : 25, "items" : { "sku" : "oranges", "qty" : 10, "price" : 2.5 }, "status" : "A" }

{ "_id" : 5, "cust_id" : "Busby Bee", "ord_date" : ISODate("2020-03-19T00:00:00Z"), "price" : 50, "items" : { "sku" : "chocolates", "qty" : 5, "price" : 10 }, "status" : "A" }

...


The $group stage groups by the items.sku, calculating for each sku:


The qty field, which contains the total qty ordered for each items.sku (see $sum).

The orders_ids array, which contains an array of distinct order _id’s for each items.sku (see $addToSet).


The $project stage then reshapes each group into a value document holding the number of orders (the $size of orders_ids), the total qty, and the average qty per order ($divide of qty by the order count).

Finally, the $merge stage writes the results into the agg_alternative_3 collection, replacing documents that already match on _id and inserting those that do not.


references:

https://docs.mongodb.com/manual/tutorial/map-reduce-examples/


MongoDB aggregate, unwind redact


Test data:

db.test.insert({ "locs" : [ 

{ "name" : "a", "address" : { "type" : "Point", "coordinates" : [ 0, 0 ] } }, 

{ "name" : "b", "address" : { "type" : "Point", "coordinates" : [ 1, 1 ] } }, 

{ "name" : "c", "address" : { "type" : "Point", "coordinates" : [ 2, 2 ] } }

]})


db.test.insert({ "locs" : [ 

{ "name" : "h", "address" : { "type" : "Point", "coordinates" : [ 1.01, 1.01 ] } }

]})


db.test.ensureIndex( { "locs.address" : "2dsphere" } )



Query:


db.test.aggregate([

{ "$geoNear" : { near : { "type" : "Point", "coordinates" : [ 1, 1 ] }, distanceField: "dist.calculated", maxDistance: 5000, includeLocs: "dist.location", num: 5, limit: 200, spherical: true } },

{ "$unwind" : "$locs" },

{ "$redact" : { 

  "$cond" : { 

    if : { "$eq" : [ { "$cmp" : [ "$locs.address", "$dist.location" ] }, 0 ] },

    then : "$$KEEP", 

    else : "$$PRUNE"

   } 

 } 

}

])


The $geoNear stage outputs the matching documents in full, adding a "dist" field that shows the calculated distance and the matching location:


  "_id" : ObjectId("5786fa0ddeb382a191a43122"), 

  "locs" : [ { "name" : "h", "address" : { "type" : "Point", "coordinates" : [ 1.01, 1.01 ] } } ],

  "dist" : { 

    "calculated" : 0, 

    "location" : { "type" : "Point", "coordinates" : [ 1, 1 ] }

  } 

}

We $unwind the "locs" array to allow for accessing individual array elements. The dist field is preserved.


The $redact stage can then be used to remove any unwound elements whose address does not match the location returned by the $geoNear stage.


Results:


 "_id" : ObjectId("5786fa0ddeb382a191a43121"), 

 "locs" : { "name" : "b", "address" : { "type" : "Point", "coordinates" : [ 1, 1 ] } }, 

 "dist" : { "calculated" : 0, "location" : { "type" : "Point", "coordinates" : [ 1, 1 ] } } 

}

 "_id" : ObjectId("5786fa0ddeb382a191a43122"), 

 "locs" : { "name" : "h", "address" : { "type" : "Point", "coordinates" : [ 1.01, 1.01 ] } }, 

 "dist" : { "calculated" : 1574.1651198970692, "location" : { "type" : "Point", "coordinates" : [ 1.01, 1.01 ] } } 

}

references:

https://stackoverflow.com/questions/38339995/mongodb-geonear-with-multiple-matches-in-same-document




Tuesday, November 10, 2020

Mongo DB - Combine results of two queries

db.col.aggregate([

 {

  $match: 

  {

   $or: [

    {doc_type:'item', item_id : 1001 },

    {doc_type:'company', name: 'Acer'}

   ]

  }

 },

 {

  $group: 

  {

   _id: null,

   "company_name": {$max: "$name"},

   "company_type": {$max: "$type"},

   "company_helpline": {$max: "$helpline"},

   "item_price": {$max: "$price"},

   "item_discount": {$max: "$discount"}

  }

 },

 {

  $project: 

  {

   _id: 0,

   'company' : {

    'name': '$company_name',

    'type': '$company_type',

    'helpline': '$company_helpline',

   },

   'item' : {

    'price': '$item_price',

    'discount': '$item_discount'

   }

  }

 }

]).pretty()




{

        "company" : {

                "name" : "Acer",

                "type" : "Laptops",

                "helpline" : "1800-200-000"

        },

        "item" : {

                "price" : 2000,

                "discount" : 20

        }

}

references:

https://stackoverflow.com/questions/42128733/combine-2-separate-results-mongo-db


Sails Working with queries

Queries (aka query instances) are the chainable deferred objects returned from model methods like .find() and .create(). They represent a not-quite-yet-fulfilled intent to fetch or modify records from the database.

The purpose of query instances is to provide a convenient, chainable syntax for working with your models. Methods like .populate(), .where(), and .sort() allow you to refine database calls before they're sent down the wire. Then, when you're ready to fire the query off to the database, you can just await it.

If you are using an older version of Node.js that does not support JavaScript's await keyword, you can use .exec() or .then()+.catch(). See the section on "Promises and Callbacks" below for more information.
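For reference, the same kind of query written with .exec() instead of await might look like this (a sketch; the Zookeeper model and res come from the error-handling example further down, and 'some-zoo-id' is a placeholder value):

// Callback style, for environments without `await`
Zookeeper.find({ zoo: 'some-zoo-id' })
  .limit(30)
  .exec(function (err, zookeepers) {
    if (err) { return res.serverError(err); }
    return res.json(zookeepers);
  });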

When you execute a query using await, a lot happens.

await query;

First, the query is "shaken out" by Waterline core into a normalized query. Then it passes through the relevant Waterline adapter(s) for translation to the raw query syntax of your database(s) (e.g. Redis or Mongo commands, various SQL dialects, etc.) Next, each involved adapter uses its native Node.js database driver to send the query out over the network to the corresponding physical database.

When the adapter receives a response, it is marshalled to the Waterline interface spec and passed back up to Waterline core, where it is integrated with any other raw adapter responses into a coherent result set. At that point, it undergoes one last normalization before being passed back to "userland" (i.e. your code) for consumption by your app.


Error handling

You can use a try/catch to handle specific errors, if desired:


var zookeepersAtThisZoo;

try {

  zookeepersAtThisZoo = await Zookeeper.find({

    zoo: req.param('zoo')

  }).limit(30);

} catch (err) {

  switch (err.name) {

    case 'UsageError': return res.badRequest(err);

    default: throw err;

  }

}


return res.json(zookeepersAtThisZoo);



.fetch()


Tell Waterline (and the underlying database adapter) to send back records that were updated/destroyed/created when performing an .update(), .create(), .createEach() or .destroy() query. Otherwise, no data will be returned (or if you are using callbacks, the second argument to the .exec() callback will be undefined).


Warning: This is not recommended for update/destroy queries that affect large numbers of records.


var newUser = await User.create({ fullName: 'Alice McBailey' }).fetch();

sails.log(`Hi, ${newUser.fullName}!  Your id is ${newUser.id}.`);


.where(whereClause)


To find all the users named Finn whose email addresses start with 'f':


var users = await User.find({ name: 'Finn' })

.where({ 'emailAddress' : { startsWith : 'f' } });

return res.json(users);


references:

https://sailsjs.com/documentation/reference/waterline-orm/queries




Sunday, November 8, 2020

A component is changing an uncontrolled input of type text to be controlled error in ReactJS

The reason is, in state you defined:


this.state = { fields: {} }

fields as a blank object, so during the first rendering this.state.fields.name will be undefined, and the input field will get its value as:


value={undefined}

Because of that, the input field will become uncontrolled.


Once you enter any value in input, fields in state gets changed to:


this.state = { fields: {name: 'xyz'} }

And at that time the input field gets converted into a controlled component; that's why you are getting the error:


A component is changing an uncontrolled input of type text to be controlled.


Possible Solutions:


1- Define the fields in state as:


this.state = { fields: {name: ''} }

2- Or define the value property by using Short-circuit evaluation like this:


value={this.state.fields.name || ''}   // (undefined || '') = ''
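Putting it together, a minimal controlled-input component might look like this (an illustrative sketch, not the original poster's code):

import React from 'react';

class NameForm extends React.Component {
  constructor(props) {
    super(props);
    // Initialise the field so `value` is never undefined on the first render
    this.state = { fields: { name: '' } };
    this.handleChange = this.handleChange.bind(this);
  }

  handleChange(event) {
    this.setState({ fields: { ...this.state.fields, name: event.target.value } });
  }

  render() {
    return (
      <input
        type="text"
        value={this.state.fields.name}
        onChange={this.handleChange}
      />
    );
  }
}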



References:

https://stackoverflow.com/questions/47012169/a-component-is-changing-an-uncontrolled-input-of-type-text-to-be-controlled-erro

Saturday, November 7, 2020

JSLint Expected '===' and instead saw '=='

Triple-equal is different to double-equal because in addition to checking whether the two sides are the same value, triple-equal also checks that they are the same data type.


So ("4" == 4) is true, whereas ("4" === 4) is false.


Triple-equal also runs slightly quicker, because JavaScript doesn't have to waste time doing any type conversions prior to giving you the answer.


JSLint is deliberately aimed at making your JavaScript code as strict as possible, with the aim of reducing obscure bugs. It highlights this sort of thing to try to get you to code in a way that forces you to respect data types.


But the good thing about JSLint is that it is just a guide. As they say on the site, it will hurt your feelings, even if you're a very good JavaScript programmer. But you shouldn't feel obliged to follow its advice. If you've read what it has to say and you understand it, but you are sure your code isn't going to break, then there's no compulsion on you to change anything.


You can even tell JSLint to ignore categories of checks if you don't want to be bombarded with warnings that you're not going to do anything about.



A quote from http://javascript.crockford.com/code.html:


=== and !== Operators.


It is almost always better to use the === and !== operators. The == and != operators do type coercion. In particular, do not use == to compare against falsy values.
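A few concrete examples of why == against falsy values is risky (standard JavaScript behaviour):

0 == ''            // true  - the string is coerced to a number
0 == '0'           // true
'' == '0'          // false - yet both were "equal" to 0 above
null == undefined  // true
false == '0'       // true

0 === ''           // false - no coercion, the types differ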


JSLint is very strict; its own 'webjslint.js' does not even pass its validation.


References:

https://stackoverflow.com/questions/3735939/jslint-expected-and-instead-saw