Thursday, October 31, 2019

Does AdMob Use UIWebView? - Related to Apple's submission guidelines

Apparently it does, going by the release notes! I also did some analysis using the command below, and the symbols confirm it.

Here, MyApp is the binary inside the built .app bundle.

nm MyApp | grep UIWeb
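This check can also be scripted; a rough Python sketch (the helper names uiwebview_symbols and check_binary are my own, not from any SDK or tool):

```python
# Hedged sketch: scan `nm` output for UIWebView references. The helper
# names are my own; the path passed to check_binary is whatever binary
# sits inside your built .app bundle.
import subprocess

def uiwebview_symbols(nm_output):
    """Return the lines of nm output that mention UIWeb*."""
    return [line for line in nm_output.splitlines() if "UIWeb" in line]

def check_binary(path):
    """Run nm on the Mach-O binary at `path` and filter its symbols."""
    out = subprocess.run(["nm", path], capture_output=True, text=True).stdout
    return uiwebview_symbols(out)
```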

It now looks like Google fixed this issue in the 6.8.1 release of the Firebase iOS SDK, made on Sep 10, 2019. Using that version (or later) lets us avoid this submission error.

https://firebase.google.com/support/release-notes/ios

references:
https://developers.google.com/admob/ios/rel-notes


How is Firestore billing done?

There are three areas where cost is incurred:

Operations: All reads, writes, and deletes have a cost associated with them. Reads include queries, simple fetches, and updates through realtime listeners.
Storage: You're charged for the data you store in Cloud Firestore, including metadata and indexes. Examples of metadata include document names, collection IDs, and field names. Index entries typically include the collection ID, the field values indexed, and the document name.
Network bandwidth: Outgoing network bandwidth (network egress) costs include connection overhead for your Cloud Firestore requests.

The main factor affecting cost is the number of Daily Active Users (DAU).

https://firebase.google.com/pricing => this gives the overall pricing. At a high volume such as 1M users, it is better to go with the Blaze plan. It is also possible to start with Spark or Flame during development and then move to Blaze.

To compute the actual volume of data and operations that get billed, we might need insight into the Daily Active Users (DAU) of the total install base. The links below provide samples of total read/write/function calls and their cost, assuming 10% of the install base as DAU. However, these may not be very precise, as data usage depends on the app and on which features are used most.

https://cloud.google.com/functions/pricing
https://cloud.google.com/firestore/docs/billing-example
https://cloud.google.com/incident-response/#pricing => in alpha phase, so free at this point
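To get a feel for the arithmetic, here is a rough Python sketch of a DAU-driven monthly cost estimate. The unit prices in PRICE_PER_100K are illustrative placeholders, not current Firebase rates; check the pricing page before trusting any numbers:

```python
# Back-of-the-envelope Firestore operations cost sketch. The unit prices
# below are illustrative placeholders -- check the Firebase pricing page
# for current rates before relying on any numbers.
PRICE_PER_100K = {"reads": 0.06, "writes": 0.18, "deletes": 0.02}  # USD, assumed

def monthly_cost(dau, reads_per_user, writes_per_user, deletes_per_user, days=30):
    """Estimate monthly operations cost from per-user daily activity."""
    ops = {
        "reads": dau * reads_per_user * days,
        "writes": dau * writes_per_user * days,
        "deletes": dau * deletes_per_user * days,
    }
    return sum(count / 100_000 * PRICE_PER_100K[kind] for kind, count in ops.items())

# e.g. 100k DAU doing 50 reads / 10 writes / 1 delete per user per day
estimate = monthly_cost(100_000, 50, 10, 1)
```

Note this covers operations only; storage and egress would be separate line items.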

Why link AdMob App to Firebase?


Link your AdMob app(s) to Firebase so that you can use your Google Analytics for Firebase data in AdMob. By linking to Firebase, your Google Analytics for Firebase data will be available to AdMob independent of your Google Analytics for Firebase Data Sharing Settings—this allows your Analytics data to flow to AdMob to enhance product features and improve app monetization.

Below are the steps to link an AdMob app to Firebase:

Sign in to your AdMob account at https://apps.admob.com.
Click Apps in the sidebar.
Select the name of your app. If you don't see it in the list of recent apps, you can click View all apps to search a list of all of the apps you've added to AdMob.
Click App settings in the sidebar.
Click Link to Firebase.
Review the policy confirmation, then click Confirm.
You may be prompted to create a new project in Firebase or link to an existing Firebase project, if you've already created one. Click Continue to link your app.
Follow the on-screen instructions to integrate the Firebase SDK (Android, iOS) into your app.

references:
https://support.google.com/admob/answer/6383165

Node Install specific version

On my machine, nvm ended up throwing errors (not captured here).

Below are a few nvm install commands:

nvm install 6

or

nvm install stable
nvm install unstable
nvm use stable


To reinstall global packages from the current version while installing a new one:
nvm install node --reinstall-packages-from=node

I finally had to use the n package manager instead:

npm install -g n   # Install n globally
n 0.10.33          # Install and use v0.10.33

sudo n 6.11.4 => installed Node version 6.11.4. Great!

references:
https://michael-kuehnel.de/node.js/2015/09/08/using-vm-to-switch-node-versions.html
https://stackoverflow.com/questions/7718313/how-to-change-to-an-older-version-of-node-js

Tuesday, October 29, 2019

How to turn on / off method swizzling for Firebase iOS implementation

The FCM SDK performs method swizzling in two key areas: mapping your APNs token to the FCM registration token and capturing analytics data during downstream message callback handling. Developers who prefer not to use swizzling can disable it by adding the flag FirebaseAppDelegateProxyEnabled in the app’s Info.plist file and setting it to NO (boolean value). Relevant areas of the guides provide code examples, both with and without method swizzling enabled.
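Concretely, assuming the standard plist layout, the Info.plist entry looks like this (key name as given in the FCM docs):

```xml
<!-- In your app's Info.plist, inside the top-level <dict> -->
<key>FirebaseAppDelegateProxyEnabled</key>
<false/>
```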

With the Firebase Unity SDK on iOS, do not disable method swizzling. Swizzling is required by the SDK, and without it key Firebase features such as FCM token handling do not function properly.

references:
https://firebase.google.com/docs/cloud-messaging/ios/client

What is method Swizzling

Method swizzling is the process of changing the implementation of an existing selector. It’s a technique made possible by the fact that method invocations in Objective-C can be changed at runtime, by changing how selectors are mapped to underlying functions in a class’s dispatch table.

For example, let’s say we wanted to track how many times each view controller is presented to a user in an iOS app:

Each view controller could add tracking code to its own implementation of viewDidAppear:, but that would make for a ton of duplicated boilerplate code. Subclassing would be another possibility, but it would require subclassing UIViewController, UITableViewController, UINavigationController, and every other view controller class—an approach that would also suffer from code duplication.

Fortunately, there is another way: method swizzling from a category. Here’s how to do it:

#import <objc/runtime.h>

@implementation UIViewController (Tracking)

+ (void)load {
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        Class class = [self class];

        SEL originalSelector = @selector(viewWillAppear:);
        SEL swizzledSelector = @selector(xxx_viewWillAppear:);

        Method originalMethod = class_getInstanceMethod(class, originalSelector);
        Method swizzledMethod = class_getInstanceMethod(class, swizzledSelector);

        // When swizzling a class method, use the following:
        // Class class = object_getClass((id)self);
        // ...
        // Method originalMethod = class_getClassMethod(class, originalSelector);
        // Method swizzledMethod = class_getClassMethod(class, swizzledSelector);

        BOOL didAddMethod =
            class_addMethod(class,
                originalSelector,
                method_getImplementation(swizzledMethod),
                method_getTypeEncoding(swizzledMethod));

        if (didAddMethod) {
            class_replaceMethod(class,
                swizzledSelector,
                method_getImplementation(originalMethod),
                method_getTypeEncoding(originalMethod));
        } else {
            method_exchangeImplementations(originalMethod, swizzledMethod);
        }
    });
}

#pragma mark - Method Swizzling

- (void)xxx_viewWillAppear:(BOOL)animated {
    [self xxx_viewWillAppear:animated];
    NSLog(@"viewWillAppear: %@", self);
}

@end

Now, when any instance of UIViewController, or one of its subclasses, invokes viewWillAppear:, a log statement will print out.
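For comparison, the same idea is trivial in Python, where it is usually called monkey-patching: replace a method at runtime while keeping a handle to the original so the wrapper can call through. This is only an analogue for illustration (the ViewController class here is my own stand-in), not how the Objective-C runtime works:

```python
# Rough Python analogue of swizzling: swap in a wrapper method at runtime,
# keeping a reference to the original implementation.
class ViewController:
    def view_will_appear(self):
        return "original"

calls = []  # tracking log

_original = ViewController.view_will_appear

def tracked_view_will_appear(self):
    calls.append(type(self).__name__)   # tracking side effect
    return _original(self)              # call through to the original

ViewController.view_will_appear = tracked_view_will_appear
```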


references:
https://nshipster.com/method-swizzling/

Python OOPs

# Declaring and instantiating a class is straightforward
# Example file for working with classes
class myClass():
    def method1(self):
        print("LM")

    def method2(self, someString):
        print("Software Testing:" + someString)


def main():
    # exercise the class methods
    c = myClass()
    c.method1()
    c.method2(" Testing is fun")


if __name__ == "__main__":
    main()


# Below is how to inherit from a class
# Example file for working with classes
class myClass():
    def method1(self):
        print("LM")


class childClass(myClass):
    # def method1(self):
    #     myClass.method1(self)
    #     print("childClass Method1")

    def method2(self):
        print("childClass method2")


def main():
    # exercise the class methods
    c2 = childClass()
    c2.method1()
    # c2.method2()


if __name__ == "__main__":
    main()


# Here is how to use Python class constructors

class User:
    name = ""
    def __init__(self, name):
        self.name = name

    def sayHello(self):
        print("Welcome to Learning, " + self.name)

User1 = User("LM User")
User1.sayHello()
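Putting constructors and inheritance together, here is a slightly more idiomatic sketch that chains to the parent constructor with super() (the AdminUser class is my own example, not from the tutorial):

```python
class User:
    def __init__(self, name):
        self.name = name

    def say_hello(self):
        return "Welcome to Learning, " + self.name

class AdminUser(User):
    def __init__(self, name, level):
        super().__init__(name)   # run the parent constructor first
        self.level = level

    def say_hello(self):
        # extend, rather than replace, the parent behaviour
        return super().say_hello() + " (admin level %d)" % self.level
```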


references:
https://www.guru99.com/python-tutorials.html

How to enable HTTPs on express server

To create a self-signed cert, run:

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./selfsigned.key -out selfsigned.crt

The below worked for me.

const express = require('express');
const https = require('https');
const fs = require('fs');
const port = 3000;

var key = fs.readFileSync(__dirname + '/../certs/selfsigned.key');
var cert = fs.readFileSync(__dirname + '/../certs/selfsigned.crt');
var options = {
  key: key,
  cert: cert
};

const app = express();
app.get('/', (req, res) => {
   res.send('Now using https..');
});

var server = https.createServer(options, app);

server.listen(port, () => {
  console.log("server starting on port : " + port)
});


References:
https://stackoverflow.com/questions/11744975/enabling-https-on-express-js

How to enable HTTPS on Node JS

To create an HTTPS server, you need two things: an SSL certificate, and Node's built-in https module.

We need to start out with a word about SSL certificates. Speaking generally, there are two kinds of certificates: those signed by a 'Certificate Authority', or CA, and 'self-signed certificates'. A Certificate Authority is a trusted source for an SSL certificate, and using a certificate from a CA allows your users to trust the identity of your website. In most cases, you would want to use a CA-signed certificate in a production environment - for testing purposes, however, a self-signed certificate will do just fine.

To generate a self-signed certificate, run the following in your shell:

openssl genrsa -out key.pem
openssl req -new -key key.pem -out csr.pem
openssl x509 -req -days 9999 -in csr.pem -signkey key.pem -out cert.pem
rm csr.pem

This should leave you with two files, cert.pem (the certificate) and key.pem (the private key). Put these files in the same directory as your Node.js server file. This is all you need for an SSL connection. Now set up a quick hello world example (the biggest difference between https and http is the options parameter):


const https = require('https');
const fs = require('fs');

const options = {
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem')
};

https.createServer(options, function (req, res) {
  res.writeHead(200);
  res.end("hello world\n");
}).listen(8000);


NODE PRO TIP: Note fs.readFileSync - unlike fs.readFile, fs.readFileSync will block the entire process until it completes. In situations like this - loading vital configuration data - the sync functions are okay. In a busy server, however, using a synchronous function during a request will force the server to deal with the requests one by one!

To start your https server, run node app.js (here, app.js is the name of the file) in the terminal.

Now that your server is set up and started, you should be able to get the file with curl:
curl -k https://localhost:8000

References:
https://nodejs.org/en/knowledge/HTTP/servers/how-to-create-a-HTTPS-server/

Sunday, October 27, 2019

The Ping command & 56 data bytes

The ping command sends an Internet Control Message Protocol (ICMP) ECHO_REQUEST to obtain an ICMP ECHO_RESPONSE from a host or gateway. The ping command is useful for:
Determining the status of the network and various foreign hosts.
Tracking and isolating hardware and software problems.
Testing, measuring, and managing networks.

If the host is operational and on the network, it responds to the echo. Each echo request contains an Internet Protocol (IP) and ICMP header, followed by a timeval structure, and enough bytes to fill out the packet. The default is to continuously send echo requests until an Interrupt is received (Ctrl-C).

The ping command sends one datagram per second and prints one line of output for every response received. The ping command calculates round-trip times and packet loss statistics, and displays a brief summary on completion. The ping command completes when the program times out or on receipt of a SIGINT signal. The Host parameter is either a valid host name or Internet address.

By default, the ping command will continue to send echo requests to the display until an Interrupt is received (Ctrl-C). Because of the load that continuous echo requests can place on the system, repeated requests should be used primarily for problem isolation.

Below are the various flags and options

-n Specifies numeric output only. No attempt is made to look up symbolic names for host addresses.
-r Bypasses the routing tables and sends directly to a host on an attached network. If the Host is not on a directly connected network, the ping command generates an error message. This option can be used to ping a local host through an interface that no longer has a route through it.
-s PacketSize Specifies the number of data bytes to be sent. The default is 56, which translates into 64 ICMP data bytes when combined with the 8 bytes of ICMP header data.
-src hostname/IP_add Uses the IP address as the source address in outgoing ping packets. On hosts with more than one IP address, the -src flag can be used to force the source address to be something other than the IP address of the interface on which the packet is sent. If the IP address is not one of the machine's interface addresses, an error is returned and nothing is sent.

Below are the parameters

Count : Specifies the number of echo requests to be sent (and received). This parameter is included for compatibility with previous versions of the ping command.

Below is an example which sends 5 packets

SUPER-M-91RJ:CNAServer tetuser$ ping google.com -c 5
PING google.com (216.58.199.174): 56 data bytes
64 bytes from 216.58.199.174: icmp_seq=0 ttl=49 time=25.742 ms
64 bytes from 216.58.199.174: icmp_seq=1 ttl=49 time=25.684 ms
64 bytes from 216.58.199.174: icmp_seq=2 ttl=49 time=27.228 ms
64 bytes from 216.58.199.174: icmp_seq=3 ttl=49 time=26.242 ms
64 bytes from 216.58.199.174: icmp_seq=4 ttl=49 time=26.172 ms

--- google.com ping statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 25.684/26.214/27.228/0.554 ms
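The 56-plus-8 arithmetic behind the "56 data bytes" line can be made concrete by packing an ICMP echo-request header in Python. This is only a sketch: the checksum field is left at zero here, whereas a real ping computes it over the packet.

```python
import struct

def icmp_echo_packet(ident, seq, payload=b"\x00" * 56):
    # 8-byte ICMP header: type=8 (echo request), code=0, checksum
    # (zeroed here; real ping fills it in), identifier, sequence number.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    return header + payload

# 8 header bytes + 56 default payload bytes = the 64 ICMP bytes ping reports
pkt = icmp_echo_packet(0x1234, 0)
```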




References:
https://www.ibm.com/support/knowledgecenter/TI0003M/p8hcg/p8hcg_ping.htm

How to setup EC2 for Ping responses

When we create a new instance in AWS EC2, by default the AWS security group blocks all protocols and ports.

AWS security groups block ICMP (including ping, traceroute, etc.) by default. You need to explicitly enable it.

So, enable ICMP protocol (ping response) follow below steps.

    Go to the EC2 Dashboard and click "Running Instances".
    Under "Security Groups", select the group of the instance you need to modify.
    Click the "Inbound" tab.
    Click the "Edit" button (it will open a popup window).
    Click "Add Rule".
    Select "Custom ICMP rule - IPv4" as the Type.
    Select "Echo Request" as the Protocol (the Port Range shows as "N/A" by default).
    Enter "0.0.0.0/0" as the Source.
    Click "Save".


This will add the new entry. Once the above configuration is done, you should be able to ping your freshly set up AWS EC2 instance.


References:
https://www.serverkaka.com/2018/03/ping-aws-ec2-instance.html

How to do TCP traceroute

A good way to figure out if there is a device blocking traffic to our servers is to run a telnet to a specific port. This won't always tell you what exactly is interfering with the traffic but will let you know that something in the path is dropping the traffic.  It's also an integral part of how OpenDNS support does troubleshooting and you may be asked by support personnel to perform these steps.

A TCP "traceroute" run to a domain on a specific port should give a good idea as to where the traffic is being dropped.  A traceroute simply shows the 'path' on the Internet between the host where the traceroute is run and the destination that's specified as well as where, if anywhere, the route is failing to complete.  For instance, if you cannot reach foo.com from your computer, you can use tcptraceroute to map the path between your computer and the public website to see where along the line the problem is occurring. 
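The telnet-style port check can be scripted too; here is a minimal Python sketch (port_open is my own helper name):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds (the telnet test)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```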



References:
https://support.opendns.com/hc/en-us/articles/227989007-How-to-Running-a-TCP-Traceroute

What is /var directory in Linux

/var is a standard subdirectory of the root directory in Linux and other Unix-like operating systems that contains files to which the system writes data during the course of its operation.
The root directory is the directory that contains all other directories and files on a system and which is designated by a forward slash ( / ). Among the other directories that are usually installed by default in the root directory are /bin, /boot, /dev, /etc, /home, /initrd, /lib, /lost+found, /misc, /mnt, /opt, /proc, /root, /sbin, /tmp and /usr.

/var is specific to each computer; that is, it is not shared over a network with other computers, in contrast to many other high-level directories. Its contents are not included in /usr because situations can occur in which it is desired to mount /usr as read-only, such as when it is on a CD-ROM or on another computer. /usr, which is generally the largest directory (at least on a newly installed system) and is used to store application programs, should contain only static data.

Among the various subdirectories within /var are /var/cache (contains cached data from application programs), /var/games (contains variable data relating to games in /usr), /var/lib (contains dynamic data libraries and files), /var/lock (contains lock files created by programs to indicate that they are using a particular file or device), /var/log (contains log files), /var/run (contains PIDs and other system information that is valid until the system is booted again) and /var/spool (contains mail, news and printer queues).


References:
http://www.linfo.org/var.html


While installing packages, which directory is best?

If the application has a makefile, or for example for python apps if the application uses distutils (e.g., has a setup.py file), or a similar build/install system, you should install it into /usr/local/. This is often the default behavior.

From what I understand, /usr/local/ has a hierarchy similar to /usr/. However, directories like /usr/bin/ and /usr/lib/ are usually reserved for packages installed via apt. So a program expecting to be "installed" into /usr/ should work fine in /usr/local/.

If you just need to extract a tarball and run directly (e.g. Firefox) then put it into /opt/. A program that just needs one directory and will get all files/libraries relative to that directory can get one directory for itself in /opt/.

Another suggestion is,

Installing unstable programs like Firefox devel builds in /home/user/opt/ makes them a lot easier to remove, and there is no confusion for other users as to which version they should use. So if it is not a program for global use, install it in a subfolder of your home directory.


Never install programs in /usr/; it is likely to cause chaos, as things installed in /usr/ are meant to be distribution packages only. /usr/local/ is for locally compiled packages, and the structure works in exactly the same way! Files in /usr/local/ will be prioritized over files in /usr/.

/opt/ should be used for installation of pre-compiled (binary) packages (Thunderbird, Eclipse, Netbeans, IBM NetSphere, etc) and the like. But if they are only for a single user they should be put in your home directory.

If you want to be able to run a program installed in a "weird" location (like /home/user/opt/firefox/) without typing the whole path, you need to add it to your $PATH variable. You can do this by adding a line like this to your /home/user/.profile:

export PATH=/home/user/opt/firefox:$PATH

The folder name should be the one where the executable file you need to run is located.
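If you need the same PATH tweak from inside a Python process, here is a small sketch (prepend_to_path is my own helper; the Firefox path is just the example one from above):

```python
import os

def prepend_to_path(directory, path=None):
    """Return a PATH string with `directory` first (existing copies removed)."""
    path = os.environ.get("PATH", "") if path is None else path
    parts = [p for p in path.split(os.pathsep) if p and p != directory]
    return os.pathsep.join([directory] + parts)

# To apply it for the current process and its children:
# os.environ["PATH"] = prepend_to_path("/home/user/opt/firefox")
```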

References:
https://askubuntu.com/questions/1148/when-installing-user-applications-where-do-best-practices-suggest-they-be-loc



What is Lawful Interception

Lawful Interception (LI) is one of the regulatory requirements operators must satisfy as a legal obligation toward the Law Enforcement Agencies (LEA) and Government Authorities in most countries where they are operating their businesses. Within 3GPP standards, this is currently defined as: Laws of individual nations and regional institutions (e.g. European Union), and sometimes licensing and operating conditions, define a need to intercept telecommunications traffic and related information in modern telecommunications systems. It has to be noted that lawful interception shall always be done in accordance with the applicable national or regional laws and technical regulations (as per 3GPP TS 33.106, "Lawful Interception Requirements").

LI allows appropriate authorities to perform interception of communication traffic for specific user(s) and this includes activation (requiring a legal document such as a warrant), deactivation, interrogation, and invocation procedures. A single user (i.e. interception subject) may be involved where interception is being performed by different LEAs. In such scenarios, it must be possible to maintain strict separation of these interception measures. The Intercept Function is only accessible by authorized personnel. As LI has regional jurisdiction, national regulations may define specific requirements on how to handle the user's location and interception across boundaries.

The process of collection of information is done by means of adding specific functions into the network entities where certain trigger conditions will then cause these network elements to send data in a secure manner to a specific network entity responsible for such a role. Moreover, specific entities provide administration and delivery of intercepted data to Law Enforcement in the required format.


How is it done on a packet data network? Using gateways.

The packet data network gateway (PDN-GW) ensures connectivity to the UE to outer packet data networks, as it represents the EPC’s point of contact with the external world. A mobile could have simultaneous connectivity with several PDN-GWs for accessing multiple packet data networks. The PDN-GW is responsible for several functions such as packet filtering, charging support, policy enforcement, packet screening, IP address allocation for the UE, QoS enforcement and lawful interception. The PDN-GW acts also as the anchor for mobility between 3GPP and non-3GPP technologies such as WiMAX [13].


References:
https://www.sciencedirect.com/topics/computer-science/lawful-interception


Which version of Python is good to learn?

Now, in 2018, it's more of a no-brainer: Python 3 is the clear winner for new learners.

Bit of history

Python 2.0 was first released in 2000. Its latest version, 2.7, was released in 2010.
Python 3.0 was released in 2008. Its newest version, 3.6, was released in 2016, and version 3.7 is currently in development.
Although Python 2.7 is still widely used, Python 3 adoption is growing quickly. In 2016, 71.9% of projects used Python 2.7, but by 2017, it had fallen to 63.7%. This signals that the programming community is turning to Python 3–albeit gradually–when developing real-world applications.
Notably, on January 1, 2020, Python 2.7 will “retire” and no longer be maintained

Python2 vs Python 3

- Py2 is still in the software of certain companies, while Py3 is set to take over by 2020
- Many older libs built for Py2 are not forward compatible, while many libs being created today are not backward compatible with Py2
- In Py2, strings are stored as ASCII by default, while in Py3 they are Unicode (UTF-8 capable) by default
- In Py2, integer division is truncated to a whole number (3 / 2 == 1), while in Py3 it yields a float (3 / 2 == 1.5)
- The Py2 print statement is print "hello"; in Py3 it is the function call print("hello")
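The division and print differences above, runnable under Python 3:

```python
# Python 3 semantics for the differences listed above
assert 3 / 2 == 1.5          # true division (Python 2: 3 / 2 == 1)
assert 3 // 2 == 1           # floor division, the old Python 2 behaviour
assert type("hello") is str  # str is Unicode by default in Python 3
print("hello")               # print is a function, not a statement
```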


Why to move to Py3?

- Python is traditionally a dynamically typed language, but Py3 supports type hints, which removes conflicts when working with a new piece of code
- Py3 has a faster runtime
- Better community support

Py2 could still be useful in the scenarios below:

- If you want to become a DevOps engineer and work with configuration management tools like Fabric or Ansible, you might have to work with both Python 2 and 3 (because parts of these libraries don’t have full Python 3 support).
- If your company has legacy code written in Python 2, you’ll need to learn to work with that.
- If you have a project that depends on certain third-party software or libraries that can't be ported to Python 3, you'll have no choice but to use Python 2 for it.

References:
https://learntocodewith.me/programming/python/python-2-vs-python-3/#history-of-python2-vs-3


Configuring Port with an SSL cert

In Windows Server 2003 or Windows XP, use the HttpCfg.exe tool in "set" mode on the Secure Sockets Layer (SSL) store to bind the certificate to a port number. The tool uses the thumbprint to identify the certificate, as shown in the following example.

httpcfg set ssl -i 0.0.0.0:8012 -h 0000000000003ed9cd0c315bbb6dc1c08da5e6 

The -i switch has the syntax of IP:port and instructs the tool to set the certificate to port 8012 of the computer. Optionally, the four zeroes that precede the number can also be replaced by the actual IP address of the computer.

The -h switch specifies the thumbprint of the certificate.

On later versions of Windows, use the netsh tool instead:

netsh http add sslcert ipport=0.0.0.0:8000 certhash=0000000000003ed9cd0c315bbb6dc1c08da5e6 appid={00112233-4455-6677-8899-AABBCCDDEEFF}

The certhash parameter specifies the thumbprint of the certificate.
The ipport parameter specifies the IP address and port, and functions just like the -i switch of the Httpcfg.exe tool described.
The appid parameter is a GUID that can be used to identify the owning application



References:
https://docs.microsoft.com/en-us/dotnet/framework/wcf/feature-details/how-to-configure-a-port-with-an-ssl-certificate

Can a single SSL server certificate cover multiple ports per domain name?

Yes, a single SSL server certificate can cover multiple ports for the same domain name. As an example, the certificate for myserver.mydomain.com will work for:
https://myserver.mydomain.com; and
https://myserver.mydomain.com:8888

A specific port number should not be specified in the CN (Common Name) field. If a port number is included as part of the CN (Common Name) then our system will not accept the Certificate Signing Request (CSR) as valid.



References:
https://support.comodo.com/index.php?/Knowledgebase/Article/View/326/17/can-a-single-ssl-server-certificate-cover-multiple-ports-per-domain-name



How to uninstall Apache on CentOS

Below are the packages to be uninstalled

httpd
httpd-tools
apr
apr-util

The below command can remove these:

yum erase httpd httpd-tools apr apr-util

The reference also mentions iptables changes, but that was not required in my case; noting it here anyway.

Run the above with sudo if it requires elevation; otherwise it needs to be run as root.

References:
https://www.cyberciti.biz/faq/uninstall-apache-redhat-centos-rhel-fedora-linux-command/

Why are RTP sequence numbers randomized?

Sequence number (16 bits): The sequence number is incremented for each RTP data packet sent and is used by the receiver to detect packet loss and to accommodate out-of-order delivery. The initial value of the sequence number should be randomized to make known-plaintext attacks on the Secure Real-time Transport Protocol more difficult.

The known-plaintext attack (KPA) is an attack model for cryptanalysis where the attacker has access to both the plaintext, and its encrypted version (ciphertext). These can be used to reveal further secret information such as secret keys and code books. The term "crib" originated at Bletchley Park, the British World War II decryption operation.
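Here is a sketch of what the randomized initial value plus 16-bit wraparound looks like (the RtpSequence class is my own illustration, not from any RTP library):

```python
import random

class RtpSequence:
    """16-bit RTP sequence counter with a randomized initial value."""
    def __init__(self, rng=random):
        # random starting point makes known-plaintext attacks harder
        self.value = rng.randrange(0, 1 << 16)

    def next(self):
        seq = self.value
        self.value = (self.value + 1) % (1 << 16)  # wrap at 2**16
        return seq
```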


References:
https://en.wikipedia.org/wiki/Real-time_Transport_Protocol

Python TK and TTK widgets

The widgets in tkinter are highly and easily configurable. You have almost complete control over how they look - border widths, fonts, images, colors, etc.

ttk widgets use styles to define how they look, so it takes a bit more work if you want a non-standard button. ttk widgets are also a little under-documented. Understanding the underlying theme and layout engines (layout within the widgets themselves, not pack, grid and place) is a challenge.

Generally speaking, the themed widgets will give you an application that looks more "native", but at the expense of a loss of configurability.
My advice is to use ttk widgets if you want your GUI to look a little more modern, and the tkinter widgets if you need a bit more configurability. You can use them both in the same application.

References:
https://stackoverflow.com/questions/19561727/what-is-the-difference-between-the-widgets-of-tkinter-and-tkinter-ttk-in-python


What is Python PAGE?

PAGE is a cross-platform drag-and-drop GUI generator, bearing a resemblance to Visual Basic. It allows one to easily create GUI windows containing a selection of Tk and ttk widgets. Required are Tcl/Tk 8.6 and Python 2.7+. I am actually using Tcl/Tk 8.6 and Python 3.7. I am no longer responding to problems related to Python 2. PAGE springs from Virtual Tcl, a Tcl/Tk program, forked to generate Python modules that realize the desired GUI. Tcl is required for running PAGE but is not required for executing the generated Python code.


PAGE is not an end-all, be-all tool, but rather one that attempts to ease the burden on the Python programmer. It is aimed at the user who will put up with a less than completely general GUI capability in order to get an easily generated GUI. A helper and learning tool, it does not build an entire application but rather is aimed at building a single GUI class and the boilerplate code in Python necessary for getting the GUI on the screen.

Tk Widgets supported:

Toplevel
Button
Canvas
Checkbutton
Entry
Frame
Label
Labelframe
Listbox
Message
Popupmenu
Radiobutton
Scale
Spinbox
Text
as well as the following ttk widgets:
TButton
TCheckbutton
TCombobox
TEntry
TFrame
TLabel
TLabelframe
TNotebook
TPanedwindow
TProgressbar
TRadiobutton
TScale
TSeparator
TSizegrip



References:
http://page.sourceforge.net/

Python GUI Libraries



TKinter

Tkinter is a toolkit for building GUIs with Python.
It lets you run Python scripts with a graphical front end.

Flexx

Many Python GUI libraries are wrappers around libraries written in other languages such as C++ (for example wxWidgets and libavg).
Flexx is written in pure Python.
Because it uses Web technology, it works anywhere you have Python and a browser.

CEF Python
This framework targets Windows, MAC OS, and Linux. It is based on Google Chromium. Its focus is largely on the facilitation of embedded browser use in third-party applications

Dabo
Dabo is a cross-platform, 3-tier application development framework whose UI layer targets wxPython.


Kivy
Kivy is based on OpenGL ES 2.
It has a native multi-touch for every single platform.
This framework is event-driven.
It is based around the main loop.
It is extremely suitable for developing games.

Pyforms
Pyforms is a Python 2.7/3.x cross-environment framework used to develop GUI applications.
Code reusability is encouraged in this framework.

PyGObject
With PyGObject, you can write Python applications for the GNOME project.
You can write a Python application using GTK+ as well.

PyQt
Qt is a cross-platform framework. It is written in C++. It is a very comprehensive library. It includes many tools and APIs. It is widely used in many industries. It covers a lot of platforms.

PySide
Qt (pronounced "cute") is an application/UI framework written in C++.
PySide is a Python wrapper for Qt.
The main difference from PyQt is licensing: PyQt offers a commercial license in addition to the GPL, while PySide is LGPL-licensed.


PyGUI
PyGUI targets Unix, Macintosh, and Windows platforms.
The focus of this MVC framework is to fit into the Python ecosystem with as much ease as possible.

libavg
It is a third-party library.
It is written in C++ and scripted from Python.
It has the following properties:
Display elements in the form of Python variables
Event handling system
Timers
Support for logging

PyGTK | PyGObject

PyGTK is a Python wrapper for GTK+, the toolkit commonly used on Linux. Its successor, PyGObject, provides the bindings for GTK+ 3 and later.


wxPython
wxPython is a Python binding for "wxWidgets", a cross-platform GUI toolkit written in C++.


References:
https://medium.com/issuehunt/13-python-gui-libraries-a6196dfb694


What are applications of Python?

Web and Internet Development

Frameworks such as Django and Pyramid.
Micro-frameworks such as Flask and Bottle.
Advanced content management systems such as Plone and django CMS.

Python's standard library supports many Internet protocols:

HTML and XML
JSON
E-mail processing.
Support for FTP, IMAP, and other Internet protocols.
Easy-to-use socket interface.

And the Package Index has yet more libraries:

Requests, a powerful HTTP client library.
BeautifulSoup, an HTML parser that can handle all sorts of oddball HTML.
Feedparser for parsing RSS/Atom feeds.
Paramiko, implementing the SSH2 protocol.
Twisted Python, a framework for asynchronous network programming.

Scientific and Numeric

SciPy is a collection of packages for mathematics, science, and engineering.
Pandas is a data analysis and modeling library.
IPython is a powerful interactive shell that features easy editing and recording of a work session, and supports visualizations and parallel computing.
The Software Carpentry Course teaches basic skills for scientific computing, running bootcamps and providing open-access teaching materials.

Learning environments
Since Python is an interpreted language, all one needs to start programming is a terminal window. However, for your students this would not be the friendliest environment; instead, we recommend something like IDLE (which stands for Integrated DeveLopment Environment), which is included with the Python installation on any platform that supports Tcl, including Windows.

As for yourself, if you prefer programming directly from a terminal window, a better choice than the default interpreter might be IPython.

In addition to IDLE, there are a number of third party tools which you can find out by referring to the Python Editors Wiki and the Python Integrated Development Environments Wiki.

Desktop GUIs
The Tk GUI library is included with most binary distributions of Python.

Some toolkits that are usable on several platforms are available separately:

wxWidgets
Kivy, for writing multitouch applications.
Qt via pyqt or pyside
Platform-specific toolkits are also available:


References:
https://www.python.org/about/apps/

What is IDLE


IDLE (short for Integrated DeveLopment Environment[1][2] or Integrated Development and Learning Environment[3]) is an integrated development environment for Python, which has been bundled with the default implementation of the language since 1.5.2b1.[4][5] It is packaged as an optional part of the Python packaging with many Linux distributions. It is completely written in Python and the Tkinter GUI toolkit (wrapper functions for Tcl/Tk).

IDLE is intended to be a simple IDE and suitable for beginners, especially in an educational environment. To that end, it is cross-platform, and avoids feature clutter.

According to the included README, its main features are:

Multi-window text editor with syntax highlighting, autocompletion, smart indent, and other features.
Python shell with syntax highlighting.
Integrated debugger with stepping, persistent breakpoints, and call stack visibility.
IDLE has been criticized for various usability issues, including losing focus, lack of copying to clipboard feature, lack of line numbering options, and general user interface design; it has been called a "disposable" IDE, because users frequently move on to a more advanced IDE as they gain experience.[6]

Author Guido van Rossum says IDLE stands for "Integrated DeveLopment Environment",[1][7] and since Van Rossum named the language Python partly to honor British comedy group Monty Python, the name IDLE was probably also chosen partly to honor Eric Idle, one of Monty Python's founding members.[8][9]


References:
https://en.wikipedia.org/wiki/IDLE


Python, PyCharm - creating and running first project



References:
https://www.jetbrains.com/help/pycharm/creating-and-running-your-first-python-project.html

Python : What is difference Between VirtualEnv and Conda



Conda environments are essentially a replacement for virtualenv. Both conda environments and virtualenv are aimed at creating an “environment” with isolated package installs. They aim to solve the problem of having multiple python projects on the same system with conflicting package requirements. If you have project A that requires tensorflow-1.8 and project B that uses tensorflow-1.9, these environments allow you to have the right version installed for each project.

The main difference between the two is that conda is a bit more full featured/”magic”. Conda has dedicated syntax for creating environments and installing packages, and can also manage installing different versions of python too. For virtualenv, you just activate the environment and then use all the normal commands.

For example, for conda the command for a new environment named “myenv” using python 3.4, scipy version 0.15.0, and packages astroid and babel of any version:

conda create -n myenv python=3.4 scipy=0.15.0 astroid babel



The equivalent with virtualenv and pip:

virtualenv --python=/usr/bin/python3.4 myenv
source myenv/bin/activate
pip install scipy==0.15.0
pip install astroid
pip install babel



References:
https://www.quora.com/What-is-the-difference-between-python-virtualenv-and-a-conda-environment


What is Node WebRTC

node-webrtc is a Node.js Native Addon that provides bindings to WebRTC M74. This project aims for spec-compliance and is tested using the W3C's web-platform-tests project. A number of nonstandard APIs for testing are also included.


Installing from NPM downloads a prebuilt binary for your operating system × architecture. Set the TARGET_ARCH environment variable to "arm" or "arm64" to download for armv7l or arm64, respectively. Linux and macOS users can also set the DEBUG environment variable to download debug builds.


References:
https://github.com/node-webrtc/node-webrtc


Node WebRTC - Running Examples


The project's README describes it as follows:

This project presents a few example applications using node-webrtc.

    audio-video-loopback: relays incoming audio and video using RTCRtpTransceivers.
    ping-pong: simple RTCDataChannel ping/pong example.
    pitch-detector: pitch detector implemented using RTCAudioSink and RTCDataChannel.
    sine-wave: generates a sine wave using RTCAudioSource; frequency control implemented using RTCDataChannel.
    sine-wave-stereo: generates a stereo sine wave using RTCAudioSource; panning control implemented using RTCDataChannel.
    video-compositing: uses RTCVideoSink, node-canvas, and RTCVideoSource to draw spinning text on top of an incoming video.


Each example is described below.

audio-video-loopback
This example simply relays incoming audio and video using RTCRtpTransceivers.


ping-pong
This example sends a “ping” from the client over an RTCDataChannel. Upon receipt, node-webrtc responds with a “pong”. Open the Console to see the pings and pongs…

pitch-detector
This example uses node-webrtc’s RTCAudioSink to implement simple pitch detection server-side. The client generates a sine wave, and the server communicates the pitch it detects using RTCDataChannel. Use the number input to change the frequency of the client-generated sine wave

datachannel-buffer-limits
This example sends a given amount of data from the client over an RTCDataChannel. Upon receipt, node-webrtc responds by sending the data back to the client. Data is chunked into pieces; you can adjust both the chunk size and the total size to test the outbound buffer limits of the RTCDataChannel.

sine-wave
This example uses node-webrtc’s RTCAudioSource to generate a sine wave server-side. Use the number input to change the frequency of the server-generated sine wave. Frequency changes are sent to the server using RTCDataChannel. Finally, pitch is detected client-side and displayed alongside the received waveform.

sine-wave-stereo
This example uses node-webrtc’s RTCAudioSource to generate a stereo sine wave server-side. Use the number input to change the panning of the server-generated sine wave. Panning changes are sent to the server using RTCDataChannel.

video-compositing
This example uses node-webrtc’s RTCVideoSource and RTCVideoSink along with node-canvas to superimpose a spinning, colorful animation on top of the incoming video.

The next step would be to install this on a web server and give it a try. Before that, let's see what all this really does behind the scenes.

References:
https://github.com/node-webrtc/node-webrtc-examples

Python Flow Control If/else

Conditional statements in Python perform different computations or actions depending on whether a specific Boolean condition evaluates to true or false. Conditional logic is handled by if statements in Python.

#
#Example file for working with conditional statement
#
def main():
    x, y = 2, 8

    if x < y:
        st = "x is less than y"
    print(st)

if __name__ == "__main__":
    main()


In the above, if the condition is not met, then st is never assigned, and the print call raises a NameError at run time.

Below is for elif

#
#Example file for working with conditional statement
#
def main():
    x, y = 8, 8

    if x < y:
        st = "x is less than y"
    elif x == y:
        st = "x is same as y"
    else:
        st = "x is greater than y"
    print(st)

if __name__ == "__main__":
    main()


A short form, similar to a ternary operator, looks like this:

def main():
    x, y = 10, 8
    st = "x is less than y" if x < y else "x is greater than or equal to y"
    print(st)

if __name__ == "__main__":
    main()


Python doesn't have a switch statement; a dictionary mapping is commonly used to implement switch-like behavior in Python.

def SwitchExample(argument):
    switcher = {
        0: " This is Case Zero ",
        1: " This is Case One ",
        2: " This is Case Two ",
    }
    return switcher.get(argument, "nothing")


if __name__ == "__main__":
    argument = 1
    print (SwitchExample(argument))



References
https://www.guru99.com/if-loop-python-conditional-structures.html

Python: Function Calls

A function in Python is defined by the def keyword, followed by the function name and parentheses ( () ).

There are set of rules in Python to define a function.

Any args or input parameters should be placed within these parentheses
The first statement of a function body can be an optional docstring, the documentation string of the function
The function header ends with a colon (:), and the code within every function must be indented
The statement return (expression) exits a function, optionally passing back a value to the caller. A return statement with no args is the same as return None.

Python relies on indentation to define code blocks: since Python functions don't have any explicit begin or end markers, such as curly braces, to indicate where the function starts and stops, the indentation carries that structure. For example, if we write a print call directly below def func1(): without indenting it, Python reports "IndentationError: expected an indented block".

A single level of indentation is enough to make your code work, but as a best practice it is advisable to use four spaces per indentation level, as recommended by PEP 8.

It is also necessary to maintain the same indentation for the rest of the block. For example, if a second statement such as "still in func1" does not line up with the first print statement, Python reports an indentation error: "unindent does not match any outer indentation level."
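To make that concrete, a small sketch (func1 is an illustrative name); both statements share the same indent, so both belong to the function body:

```python
def func1():
    print("in func1")
    print("still in func1")   # same indent level, so still part of func1's body

func1()
```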


A function passes a value back to its caller with the usual return statement; a function without an explicit return returns None.
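A quick sketch of both cases (the function names are illustrative):

```python
def add(x, y):
    return x + y        # passes the sum back to the caller

def greet():
    print("hello")      # no return statement, so the caller receives None

print(add(2, 3))        # 5
print(greet())          # prints hello, then None
```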

An argument can also have a default value:

def multiply(x, y=2):
    print(x * y)

multiply(2, 8)   # prints 16, both arguments supplied
multiply(4)      # prints 8, y falls back to its default of 2

Usefully, we can also change the order of the arguments in a function call.
You can change the order in which arguments are passed in Python by naming them as keyword arguments, for example passing x=4 and y=2 explicitly.
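The screenshot from the original tutorial is not reproduced here; a sketch of the same idea using keyword arguments:

```python
def multiply(x, y):
    return x * y

print(multiply(2, 8))       # positional: x=2, y=8 -> 16
print(multiply(y=2, x=4))   # keyword arguments, order reversed -> 8
```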


Below is how a variable-argument function can be declared:

def multiply(*args):
    print(args)

multiply(1, 2, 3, 4)

References:
https://www.guru99.com/functions-in-python.html

What are Goals of SRTP

The security goals for SRTP are to ensure:

   - the confidentiality of the RTP and RTCP payloads, and

   - the integrity of the entire RTP and RTCP packets, together with
      protection against replayed packets.

   These security services are optional and independent from each other,
   except that SRTCP integrity protection is mandatory (malicious or
   erroneous alteration of RTCP messages could otherwise disrupt the
   processing of the RTP stream).

   Other, functional, goals for the protocol are:

   - a framework that permits upgrading with new cryptographic
      transforms,

   -  low bandwidth cost, i.e., a framework preserving RTP header
      compression efficiency,

   and, asserted by the pre-defined transforms:

   -  a low computational cost,

   -  a small footprint (i.e., small code size and data memory for
      keying information and replay lists),

   -  limited packet expansion to support the bandwidth economy goal,

   -  independence from the underlying transport, network, and physical
      layers used by RTP, in particular high tolerance to packet loss
      and re-ordering.

   These properties ensure that SRTP is a suitable protection scheme for
   RTP/RTCP in both wired and wireless scenarios.


References:
https://tools.ietf.org/html/rfc3711

DTLS and SRTP


DTLS is utilized to establish the keys that are then used for securing the RTP stream. Once the keys are established, they are used to encrypt the RTP stream to make it SRTP (nothing special about the encryption; it is standard SRTP per RFC 3711), which is then sent on the same ports as the DTLS traffic. If you read RFC 5764, you can get more specifics about what a DTLS channel is, demultiplexing the packets, etc.

So, DTLS is key MANAGEMENT for the SRTP exchange. See rfc5764 section 4.1 for a little example.

In summary: if by SRTP over a DTLS connection you mean once keys have been exchanged and encrypting the media with those keys, there is not much difference. The main difference is that with DTLS-SRTP, the DTLS negotiation occurs on the same ports as the media itself and thus packet demultiplexing must be taken into account over those ports.

References:
https://stackoverflow.com/questions/31421909/difference-between-dtls-srtp-and-srtp-packets-send-over-dtls-connections
https://tools.ietf.org/html/rfc5764#section-4.1

Python Operators

Below is the summary

There are various methods for arithmetic calculation in Python as you can use the eval function, declare variable & calculate, or call functions
Comparison operators, often referred to as relational operators, are used to compare the values on either side of them and determine the relation between them
Python assignment operators are simply to assign the value to variable
Python also allows you to use a compound assignment operator, in a complicated arithmetic calculation, where you can assign the result of one operand to the other
For AND operator – It returns TRUE if both the operands (right side and left side) are true
For OR operator- It returns TRUE if either of the operand (right side or left side) is true
For NOT operator- returns TRUE if operand is false
There are two membership operators that are used in Python. (in, not in).
It gives the result based on the variable present in specified sequence or string
The two identity operators used in Python are (is, is not)
They return true if two variables point to the same object, and false otherwise
Operator precedence is useful when you have to set the priority for which calculation is done first in a complex expression.
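A short sketch illustrating several of these operators:

```python
x, y = 5, 10

# Comparison / relational operators
print(x < y)              # True

# Logical operators: and, or, not
print(x < y and y > 0)    # True
print(x > y or y > 0)     # True
print(not x < y)          # False

# Membership operators: in, not in
print("u" in "Guru")      # True
print("z" not in "Guru")  # True

# Identity operators: is, is not
a = [1, 2]
b = a                     # b names the same object as a
c = [1, 2]                # c is an equal but distinct object
print(a is b)             # True
print(a is c)             # False
```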

References:
https://www.guru99.com/python-operators-complete-tutorial.html

What is SRTP ?

SRTP (Secure Real-Time Transport Protocol or Secure RTP) is an extension to RTP (Real-Time Transport Protocol) that incorporates enhanced security features. Like RTP, it is intended particularly for VoIP (Voice over IP) communications.

SRTP was conceived and developed by communications experts from Cisco and Ericsson and was formally published in March 2004 by the Internet Engineering Task Force (IETF) as Request for Comments (RFC) 3711. SRTP uses encryption and authentication to minimize the risk of denial of service (DoS) attacks. SRTP can achieve high throughput in diverse communications environments that include both hard-wired and wireless devices. Provisions are included that allow for future improvements and extensions.


References:
https://whatis.techtarget.com/definition/SRTP-Secure-Real-Time-Transport-Protocol-or-Secure-RTP

Python Dictionary

Dictionaries are another example of a data structure. A dictionary is used to map or associate the things you want to store with the keys you need to retrieve them. A dictionary in Python is just like a dictionary in the real world. A Python dictionary is made up of two elements: keys and values.

Keys will be a single element
Values can be a list or list within a list, numbers, etc.

Below is how to declare a dictionary in Python (further entries elided):

Dict = {'Tim': 18, ...}

Properties of Dictionary Keys

There are two important points while using dictionary keys

More than one entry per key is not allowed (no duplicate keys)
The values in the dictionary can be of any type, while the keys must be immutable, such as numbers, tuples, or strings
Dictionary keys are case sensitive: the same key name with different casing is treated as a distinct key in Python dictionaries

Dict = {'Tim': 18,'Charlie':12,'Tiffany':22,'Robert':25}
print(Dict['Tiffany'])
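A short sketch of the key rules above (the names are illustrative):

```python
# Duplicate keys: the last value assigned to a key wins
d = {'Tim': 18}
d['Tim'] = 19
print(d)                     # {'Tim': 19}

# Keys must be immutable: a tuple works, a list raises TypeError
d[('Tim', 'Smith')] = 20
try:
    d[['Tim', 'Smith']] = 20
except TypeError:
    print("a list cannot be a dictionary key")

# Keys are case sensitive
d2 = {'tim': 1, 'Tim': 2}
print(len(d2))               # 2
```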

Copying dictionary
You can also copy an entire dictionary to a new dictionary. For example, here we copy the "Boys" and "Girls" dictionaries into the new dictionaries studentX and studentY.

Dict = {'Tim': 18,'Charlie':12,'Tiffany':22,'Robert':25}
Boys = {'Tim': 18,'Charlie':12,'Robert':25}
Girls = {'Tiffany':22}
studentX=Boys.copy()
studentY=Girls.copy()
print(studentX)
print(studentY)


Updating Dictionary
You can also update a dictionary by adding a new entry or a key-value pair to an existing entry or by deleting an existing entry. Here in the example we will add another name "Sarah" to our existing dictionary.

Dict = {'Tim': 18,'Charlie':12,'Tiffany':22,'Robert':25}
Dict.update({"Sarah":9})
print(Dict)

Delete Keys from the dictionary
Python dictionary gives you the liberty to delete any element from the dictionary list. Suppose you don't want the name Charlie in the list, so you can delete the key element by following code.

Dict = {'Tim': 18,'Charlie':12,'Tiffany':22,'Robert':25}
del Dict ['Charlie']
print(Dict)

Getting items from dict

Dict = {'Tim': 18,'Charlie':12,'Tiffany':22,'Robert':25}
print("Students Name: %s" % list(Dict.items()))

We call the items() method on our Dict.
When the code is executed, it returns the key-value pairs from the dictionary, converted to a list

Check if a given key already exists in a dictionary

For a given dictionary, you can also check whether a child dictionary's keys exist in a main dictionary. Here we have two sub-dictionaries, "Boys" and "Girls"; now we want to check whether the keys of Boys exist in our main Dict. For that, we use a for loop with an if/else test.

Dict = {'Tim': 18,'Charlie':12,'Tiffany':22,'Robert':25}
Boys = {'Tim': 18,'Charlie':12,'Robert':25}
Girls = {'Tiffany':22}
for key in list(Dict.keys()):
    if key in list(Boys.keys()):
        print(True)
    else:     
        print(False)

Sorting the Dictionary
Dict = {'Tim': 18,'Charlie':12,'Tiffany':22,'Robert':25}
Boys = {'Tim': 18,'Charlie':12,'Robert':25}
Girls = {'Tiffany':22}
Students = list(Dict.keys())
Students.sort()
for S in Students:
      print(":".join((S,str(Dict[S]))))

We declared the variable Students to hold the keys of our dictionary "Dict."
Then we call Students.sort(), which sorts the list of keys
To print each element of the dictionary in order, we run a for loop over the sorted keys, using the variable S
Now, when we execute the code, the for loop visits each key from the dictionary in sorted order and prints the key and value

Below are the operations allowed on a dict:

copy() Copy the entire dictionary to a new dictionary dict.copy()
update() Update a dictionary by adding a new entry or key-value pair, or by modifying an existing entry Dict.update([other])
items() Returns the (key, value) tuple pairs in the dictionary dictionary.items()
sorted() Returns a sorted list of the dictionary's keys sorted(dict)
len() Gives the number of pairs in the dictionary len(dict)
cmp() Compares values and keys of two dictionaries (Python 2 only; removed in Python 3) cmp(dict1, dict2)
str() Renders a dictionary as a printable string str(dict)

References:
https://www.guru99.com/python-dictionary-beginners-tutorial.html

Tuples in Python:

Python has a tuple assignment feature which enables you to assign more than one variable at a time. Here, we have assigned tuple 1 with a person's information like name, surname, birth year, etc., and tuple 2 with numeric values (1, 2, 3, ..., 7).


tup1 = ('Robert', 'Carlos', '1965', 'Terminator 1995', 'Actor', 'Florida')
tup2 = (1, 2, 3, 4, 5, 6, 7)
print(tup1[0])
print(tup2[1:4])


Packing and Unpacking

x = ("Guru99", 20, "Education")    # tuple packing
(company, emp, profile) = x    # tuple unpacking
print(company)
print(emp)
print(profile)


Comparing tuples

A comparison operator in Python can work with tuples.
The comparison starts with the first element of each tuple. If the elements are equal, it proceeds to the second element, and so on.

a=(5,6)
b=(1,4)
if (a>b):print("a is bigger")
else: print("b is bigger")

Using tuples as keys in dictionaries


Since tuples are hashable and lists are not, we must use a tuple as the key if we need a composite key in a dictionary.

We would come across a composite key if, for example, we needed a telephone directory that maps first-name/last-name pairs to telephone numbers. Assuming that we have declared the variables last, first, and number, we could write a dictionary assignment statement as shown below:

directory[last, first] = number

Inside the brackets, the expression is a tuple. We could use tuple assignment in a for loop to navigate this dictionary:

for last, first in directory:
    print(first, last, directory[last, first])

Deleting Tuples

Tuples are immutable, so you cannot delete or remove individual items from a tuple. Deleting a tuple entirely, however, is possible by using the keyword del.
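A quick sketch of both behaviors:

```python
tup = (1, 2, 3)

# Individual items cannot be removed from a tuple
try:
    del tup[0]
except TypeError as err:
    print("cannot delete an item:", err)

# But the whole tuple object can be deleted with del
del tup
try:
    print(tup)
except NameError:
    print("tup no longer exists")
```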

Slicing of Tuple

To fetch specific sets of sub-elements from a tuple or list, we use this unique feature called slicing. Slicing is applicable not only to tuples but also to strings and lists.

x = ("a", "b","c", "d", "e")
print(x[2:4])

The output of this code will be ('c', 'd').

There are few built in methods for tuple and they are:

all(), any(), enumerate(), max(), min(), sorted(), len(), tuple()

Advantages of Tuple over List

Iterating through tuple is faster than with list, since tuples are immutable.
Tuples that consist of immutable elements can be used as key for dictionary, which is not possible with list
If you have data that is immutable, implementing it as tuple will guarantee that it remains write-protected



References:
https://www.guru99.com/python-tuples-tutorial-comparing-deleting-slicing-keys-unpacking.html

Python : Strings data structure


Python does not have a separate character type; single characters are treated as strings of length one, and can also be considered substrings.

var1 = "Guru99!"
var2 = "Software Testing"
print ("var1[0]:",var1[0])
print ("var2[1:5]:",var2[1:5])

Now there are various string operators

[] => Index: it gives the letter at the given index. a[1] will give "u" from the word Guru (0=G, 1=u, 2=r, 3=u)

[ : ] => Range slice: it gives the characters from the given range. x[1:3] will give "ur" from the word Guru; the slice starts at index 1 (u) and stops just before index 3, so the G at index 0 is not included.

in => Membership: returns true if a letter exists in the given string

x = "Guru"
print("u" in x)

"u" is present in the word Guru, so this prints True

not in => Membership: returns true if a letter does not exist in the given string

x = "Guru"
print("l" not in x)

"l" is not present in the word Guru, so this prints True

r/R => Raw string: suppresses the actual meaning of escape characters. print(r'\n') prints \n, and print(R'\n') also prints \n

% => String formatting. %r inserts the canonical string representation of the object (i.e., repr(o)), %s inserts its presentation string representation (i.e., str(o)), and %d formats a number for display
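A sketch of these format specifiers:

```python
name = "Guru"
print("%s" % name)      # Guru       (str() form)
print("%r" % name)      # 'Guru'     (repr() form, with quotes)
print("%d items" % 99)  # 99 items
```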

+ => Concatenates two strings

x = "Guru"
y = "99"
print(x + y)

* => Repeat

x = "Guru"
print(x * 2)


Replace method

oldstring = 'I like Guru99'
newstring = oldstring.replace('like', 'love')
print(newstring)

Change upper and lower case string

string="python at guru99"
print(string.upper())

Capitalize the string also can be done below

string="python at guru99"
print(string.capitalize())

Lower casing can be done like below

string="PYTHON AT GURU99"
print(string.lower())

Concatenating by Joining
print(":".join("Python"))

Reversing a string is also possible, using reversed() with join:

string="12345"
print(''.join(reversed(string)))

Splitting the string

word="guru99 career guru99"
print(word.split(' '))

word="guru99 career guru99"
print(word.split('r'))


Strings are immutable !!

x = "Guru99"
x.replace("Guru99","Python")
print(x)

This still prints Guru99: replace() returns a new string, which was discarded here rather than assigned back to x.


References:
https://www.guru99.com/learning-python-strings-replace-join-split-reverse.html

Python : Variables


A Python variable is a reserved memory location to store values. In other words, a variable in a python program gives data to the computer for processing.

Every value in Python has a datatype. Different data types in Python include Numbers, List, Tuple, Strings, Dictionary, etc. Variables can be given almost any name, even simple ones like a, aa, abc, etc.


Example

a = 100
print(a)


Redeclaring a variable - Crazy!!

# Declare a variable and initialize it
f = 0
print(f)
# re-declaring the variable works
f = 'guru99'
print(f)

Concatenating String

a = "Guru"
b = 99
print(a + b)

The above will give a TypeError because b is an int; it must be converted with str(b) before being concatenated.
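The corrected version converts b to a string first:

```python
a = "Guru"
b = 99
print(a + str(b))   # Guru99
```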

Global vs local scope variables

# Declare a variable and initialize it
f = 101
print(f)
# Global vs. local variables in functions
def someFunction():
# global f
    f = 'I am learning Python'
    print(f)
someFunction()
print(f)

Variable "f" is global in scope and is assigned the value 101, which is printed in the output
Variable f is declared again inside the function and assumes local scope. It is assigned the value "I am learning Python.", which is printed as output. This variable is different from the global variable "f" defined earlier
Once the function call is over, the local variable f is destroyed. When we print the value of "f" again after the call, it displays the value of the global variable, f = 101

Now, if we use the global keyword, the variable refers to the global scope even when it is assigned within a function:

f = 101
print(f)
# Global vs. local variables in functions
def someFunction():
  global f
  print(f)
  f = "changing global variable"
someFunction()
print(f)

Variable "f" is global in scope and is assigned the value 101, which is printed in the output
Variable f is declared using the keyword global. This is NOT a local variable but the same global variable declared earlier. Hence, when we print its value inside the function, the output is 101
We changed the value of "f" inside the function. Once the function call is over, the changed value of the variable persists, so the final print displays "changing global variable"

Deleting a variable

f = 11
print(f)
del f
print(f)

This raises a NameError at the fourth line, saying the variable is not defined.


References:
https://www.guru99.com/variables-in-python.html

App center SDK integration

The App Center iOS SDK lets your app use App Center Analytics and App Center Crashes.

To integrate via CocoaPods, below are the steps

Add the following dependencies to your podfile to include App Center Analytics and App Center Crashes into your app. This will pull in the following frameworks: AppCenter, AppCenterAnalytics, and AppCenterCrashes. Alternatively, you can specify which services you want to use in your app. Each service has its own subspec and they all rely on AppCenter. It will get pulled in automatically.

# Use the following line to use App Center Analytics and Crashes.
 pod 'AppCenter'

 # Use the following lines if you want to specify which service you want to use.
 pod 'AppCenter/Analytics'
 pod 'AppCenter/Crashes'
 pod 'AppCenter/Distribute'

Run pod install to install your newly defined pod and open the project's .xcworkspace.

Now to start it, below are the steps


import AppCenter
import AppCenterAnalytics
import AppCenterCrashes

MSAppCenter.start("{Your App Secret}", withServices: [MSAnalytics.self, MSCrashes.self])
// Or, to start only the Analytics service:
// MSAppCenter.start("{Your App Secret}", withServices: [MSAnalytics.self])


References:
https://docs.microsoft.com/en-us/appcenter/sdk/getting-started/ios 

Tuesday, October 15, 2019

Some minor details of Asterisk RTP Port configuration

Asterisk config rtp.conf
rtp.conf
Configuration of Asterisk Real-time Transport Protocol (RTP) media channels. RTP carries the media for SIP calls.

Details
on your router you might want to arrange both traffic shaping (QoS) and port forwarding (in case of NAT) for the RTP range that you chose
for each RTP port, a companion RTCP port is also opened; therefore a call (for example, audio plus video) can consume up to 4 ports
the first port of the range should be even, so 10001 won't be used (use 10000 or 10002 instead); the last port must be odd, and if you specify e.g. 10017 as the last in the range, Asterisk will actually use 10018, so be aware!
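Putting those rules together, a minimal rtp.conf sketch (the port numbers are illustrative):

```ini
[general]
; first port of the range: should be even
rtpstart=10000
; last port of the range: should be odd
rtpend=10201
```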
maybe ports aren’t released directly by Asterisk after the call has completed?
does Asterisk allocate RTP ports for each member in a group dial (DIAL(SIP/device1&SIP/device2) before the actual call is established?
check with “netstat -anup” or “netstat -anu” for open ports
experience shows that often Asterisk seems to consume more RTP ports (or RTP port numbers) than one would expect, so it is most probably not a good idea to reduce the RTP port range to exactly 4 times the maximum number of concurrent calls…

References:
https://www.voip-info.org/asterisk-config-rtpconf/

What happens when try to run the node JS C++ module on another env without recompile

Installed the gyp packages like the below

npm install node-gyp --save-dev
npm install node-addon-api

And when trying to run the code, it gave an error like the one below. This was screwy!

Error: /lib64/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by /home/admin/test/server/cppmodule/testaddon.node)
    at Object.Module._extensions..node (internal/modules/cjs/loader.js:992:18)
    at Module.load (internal/modules/cjs/loader.js:798:32)
    at Function.Module._load (internal/modules/cjs/loader.js:711:12)
    at Module.require (internal/modules/cjs/loader.js:838:19)
    at require (internal/modules/cjs/helpers.js:74:18)

Searching online for this error turned up the question below, which suggests pointing LD_LIBRARY_PATH at a newer libstdc++:

https://stackoverflow.com/questions/20357033/usr-lib-x86-64-linux-gnu-libstdc-so-6-version-cxxabi-1-3-8-not-found

LD_LIBRARY_PATH=/usr/local/lib64/:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH

But this did not resolve the issue either; moreover, the /usr/local/lib64 directory was empty.
Given all these issues, it seemed simpler to compile the module on the target machine itself.
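One way to diagnose this kind of mismatch (a sketch; the library path varies by distribution) is to list the CXXABI versions the runtime libstdc++ actually exports and compare them against the one the compiled addon requires (CXXABI_1.3.8 in the error above):

```shell
# Locate the runtime libstdc++ (path differs across distributions).
LIB=$(find /usr/lib /usr/lib64 /lib64 -name 'libstdc++.so.6' 2>/dev/null | head -1)

# List the CXXABI version tags embedded in the library. If CXXABI_1.3.8
# is missing from the output, the addon was built against a newer
# toolchain than this machine provides.
grep -ao 'CXXABI_[0-9.]*[0-9]' "$LIB" | sort -u
```

If the required tag is absent, the options are to upgrade libstdc++ on the target or, as concluded above, rebuild the addon on the target machine.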



References:
https://medium.com/@atulanand94/beginners-guide-to-writing-nodejs-addons-using-c-and-n-api-node-addon-api-9b3b718a9a7f
https://stackoverflow.com/questions/20357033/usr-lib-x86-64-linux-gnu-libstdc-so-6-version-cxxabi-1-3-8-not-found

How to install MongoDB on Linux (AWS)

The tutorial linked in the references helped a lot here. The steps are fairly straightforward:

1. mkdir mongodb
2. cd mongodb/
3. curl -O https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-3.4.7.tgz
4. tar xvf mongodb-linux-x86_64-3.4.7.tgz
5. mv mongodb-linux-x86_64-3.4.7 mongodb
6. cd mongodb
7. echo $PATH
8. export PATH=$PATH:/home/journal/mongodb/mongodb/bin
9. mkdir data
10. cd bin
11. ./mongod --dbpath /home/journal/mongodb/mongodb/data &

That's it, the DB is up.

Now we can even check the connection using the steps below

root@dev [~]# cd /home/journal/mongodb/mongodb/bin/
root@dev [/home/journal/mongodb/mongodb/bin]# ./mongo
MongoDB shell version: 3.4.7
connecting to: test
> show dbs
admin  (empty)
local  0.078GB
> use journaldev
switched to db journaldev
> db.names.save({"id":123,"name":"Pankaj"})
WriteResult({ "nInserted" : 1 })
> db.names.find()
{ "_id" : ObjectId("53df918adbef24e88560fa5b"), "id" : 123, "name" : "Pankaj" }
> db.datas.save({})
WriteResult({ "nInserted" : 1 })
> show collections
datas
names
system.indexes
> show dbs
admin       (empty)
journaldev  0.078GB
local       0.078GB
> exit
bye
root@dev [/home/journal/mongodb/mongodb/bin]#

References
https://www.journaldev.com/3849/install-mongodb-linux




How to install MongoDB Enterprise Server

Just browse to the page below and download the package, choosing the options for your OS and version.

As of this writing the version is 4.2.0

MongoDB Enterprise comes with the following:

In-memory Storage Engine - delivers high throughput and predictable low latency
Encrypted Storage Engine - encrypts your data at rest
Advanced Security - secures your data with LDAP and Kerberos access controls and comprehensive auditing


References:
https://www.mongodb.com/download-center/enterprise

How to Install GCC on CentOS / RHEL 7

Below are the packages in the CentOS "Development Tools" group:

autoconf
automake
binutils
bison
flex
gcc (c compiler)
gcc-c++ (c++ compiler)
gettext
libtool
make
patch
pkgconfig
redhat-rpm-config
rpm-build
rpm-sign

These packages are installed as a group. First, check whether the "Development Tools" group is available, which can be done with the following command:

yum group list
This lists Development Tools as an available group. To install it, log in as root and execute the following command:

yum group install "Development Tools"

That's it; this installs the whole toolchain and you're ready to go.

Now enter cc in the terminal; with no arguments it should print a message like:

cc: fatal error: no input files
compilation terminated.

Great, done!

References:
https://www.cyberciti.biz/faq/centos-rhel-7-redhat-linux-install-gcc-compiler-development-tools/

What is CCPA?



The California Consumer Privacy Act of 2018 (CCPA) is a privacy law that was passed on June 28, 2018, and will take effect on January 1, 2020. This new law will have a significant impact on consumers and certain businesses.

California has consistently passed laws which aim to protect its residents’ privacy, such as the California Online Privacy Protection Act (CalOPPA) and the “Shine the Light” law, and the CCPA is no exception.


References:
https://www.termsfeed.com/blog/ccpa/

Useful df commands in Linux

Running plain df prints fairly cryptic output like below, with sizes in 1K blocks:

df

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/cciss/c0d0p2     78361192  23185840  51130588  32% /
/dev/cciss/c0d0p5     24797380  22273432   1243972  95% /home
/dev/cciss/c0d0p3     29753588  25503792   2713984  91% /data
/dev/cciss/c0d0p1       295561     21531    258770   8% /boot
tmpfs                   257476         0    257476   0% /dev/shm


Now, df -h prints the same information in a more human-readable form:

df -h

Filesystem            Size  Used Avail Use% Mounted on
/dev/cciss/c0d0p2      75G   23G   49G  32% /
/dev/cciss/c0d0p5      24G   22G  1.2G  95% /home
/dev/cciss/c0d0p3      29G   25G  2.6G  91% /data
/dev/cciss/c0d0p1     289M   22M  253M   8% /boot
tmpfs                 252M     0  252M   0% /dev/shm



The following shows the filesystem that /home is mounted on, including its type (-T):

df -hT /home

Filesystem        Type    Size  Used Avail Use% Mounted on
/dev/cciss/c0d0p5    ext3     24G   22G  1.2G  95% /home


The -k and -m switches print sizes in 1K blocks and megabytes respectively:

 df -k

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/cciss/c0d0p2     78361192  23187212  51129216  32% /
/dev/cciss/c0d0p5     24797380  22273432   1243972  95% /home
/dev/cciss/c0d0p3     29753588  25503792   2713984  91% /data
/dev/cciss/c0d0p1       295561     21531    258770   8% /boot
tmpfs                   257476         0    257476   0% /dev/shm
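Two related switches worth knowing (GNU df): --total appends a grand-total row, and -i reports inode usage, since an exhausted inode table also produces "No space left on device" even when df -h still shows free blocks:

```shell
df -h --total   # human-readable sizes plus a summary "total" row
df -i /         # inode usage for the root filesystem
```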


References:
https://www.tecmint.com/how-to-check-disk-space-in-linux/

Monday, October 14, 2019

What is difference between Latency and Round Trip Time?



Network latency is how long it takes for something sent from a source host to reach a destination host. There are many components to latency, and the latency can actually be different A to B and B to A.

The round trip time is how long it takes for a request sent from a source to a destination, and for the response to get back to the original source. Basically, the latency in each direction, plus the processing time.


References:
https://networkengineering.stackexchange.com/questions/52232/whats-the-difference-between-latency-and-round-trip-time

What is Jitter

Jitter is the amount of variation in latency/response time, in milliseconds. Reliable connections consistently report back the same latency over and over again. Lots of variation (or 'jitter') is an indication of problems.

Jitter is a symptom of other problems. It's an indicator that there might be something else wrong. Often, this 'something else' is bandwidth saturation (sometimes called congestion) - or not enough bandwidth to handle the traffic load.


To measure Jitter, we take the difference between samples, then divide by the number of samples (minus 1).

Here's an example. We have collected 5 samples with the following latencies: 136, 184, 115, 148, 125 (in that order). The average latency is 142 - (add them, divide by 5). The 'Jitter' is calculated by taking the difference between samples.

136 to 184, diff = 48
184 to 115, diff = 69
115 to 148, diff = 33
148 to 125, diff = 23

(Notice how we have only 4 differences for 5 samples). The total difference is 173 - so the jitter is 173 / 4, or 43.25.

We use this same mechanism no matter how many samples you have - it works on 5, 50 or 5000.

Some apps consider the jitter value bad when it exceeds 15% of the average latency: if the average latency on a hop is 150 and jitter is > 22.5, it will be flagged as bad.
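The arithmetic above can be reproduced with a small awk one-liner, using the sample values from the example (the example rounds the 141.6 average to 142):

```shell
# Mean latency = sum / n; jitter = mean of absolute successive differences.
echo "136 184 115 148 125" | awk '{
  for (i = 1; i <= NF; i++) {
    sum += $i
    if (i > 1) { d = $i - $(i-1); if (d < 0) d = -d; diff += d }
  }
  printf "avg=%.1f jitter=%.2f\n", sum / NF, diff / (NF - 1)
}'
# prints: avg=141.6 jitter=43.25
```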



References:
https://www.pingman.com/kb/article/what-is-jitter-57.html


Automated Audio Quality Tests

What are the benefits of audio quality testing?

Decide on codecs and technologies used for your VoIP solution,
Fine-tune the codec settings to serve your application needs,
Make sure that voice service performs well, even at the worst conditions (bad network, old devices),
Continuously check for any regressions that may appear,
See how you compare with other competitors when it comes to quality metrics of voice call.

The team's approach is as follows:

You feed the original and degraded audio samples into the tool and set the audio sample range to narrowband (sample rate up to 8 kHz, used by PSTN calls and older services) or wideband (sample rate >8 kHz, used by modern HD audio codecs). The tool gives a MOS score for the audio, between 1 (unacceptable) and 5 (excellent). It will also return the delay if it can be calculated – the delay is included between the samples (the samples are not synced).


Other network conditions recommended for testing voice quality are:

LTE network – as mobile network is guaranteed, use P2P connections for calls against Wi-Fi network.
50 ms jitter and 100 ms delay – while excessive for normal real-life conditions, jitter handling is required for any voice application regardless of the jitter amount. Delay is added to the network to be able to imitate the jitter.
5% and 10% packet loss – the ability to recover from lost data is an important part of any modern codec. 5% packet loss should result in negligible quality loss for a call, while 10% loss is normally much more noticeable and a good landmark for checking codec resistance to a bad network. It’s possible to increase the values for deeper investigation.
50 Kbps bandwidth limitation – from our measurements, the data consumption for wideband VoIP call is around 8 kilobytes per second which translates to 40 Kbps bandwidth. 50 kbps bandwidth is only barely in the limits of unaffected voice quality and is a challenge for voice application using unnecessary background data. Possible to reduce to see the threshold at which service still functions.



References
https://www.testdevlab.com/blog/2017/12/how-we-test-audio-quality-in-voip-applications/

Jitter from VoIP scenario


Jitter is a variation in packet transit delay caused by queuing, contention and serialization effects on the path through the network. In general, higher levels of jitter are more likely to occur on either slow or heavily congested links. It is expected that the increasing use of “QoS” control mechanisms such as class based queuing, bandwidth reservation and of higher speed links such as 100 Mbit Ethernet, E3/T3 and SDH will reduce the incidence of jitter related problems at some stage in the future, however jitter will remain a problem for some time to come.

Root cause of Jitter

There are mainly three types of jitter:

1. Type A – constant jitter. This is a roughly constant level of packet to packet delay variation.
2. Type B – transient jitter. This is characterized by a substantial incremental delay that may be incurred by a single packet.
3. Type C – short term delay variation. This is characterized by an increase in delay that persists for some number of packets, and may be accompanied by an increase in packet to packet delay variation. Type C jitter is commonly associated with congestion and route changes.


References:
http://www.voiptroubleshooter.com/indepth/jittersources.html

VoIP quality considerations


Latency

This is the time delay in moving the voice packets from the source to the destination. In general this measure should not exceed 150ms in one direction to prevent deterioration of call quality.


Jitter
This is essentially the variability in packet delay. As far as the source endpoint is concerned, the packets have been sent in a continuous stream
But since each packet may take a different route to its destination, network congestion or improper configuration can result in significant variations in packet delay.
Jitter that exceeds 40ms will cause severe deterioration in call quality. High levels of jitter are usually a consequence of slow or congested networks.

Jitter measurement

Jitter may be measured in a number of different ways, several of which are detailed in various IETF standards for RTP such as RFC 3550 and RFC 3611. Some of these methods are Mean packet to packet delay variation, Mean absolute packet delay variation, Packet delay variation histograms and Y.1541 IPDV Parameter.
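As a sketch of the first of those, RFC 3550's interarrival jitter estimator computes, for each packet, the change D in relative transit time (arrival timestamp minus RTP timestamp) versus the previous packet, and smooths it with gain 1/16: J += (|D| - J) / 16. The timestamp values below are made up for illustration:

```shell
# Columns: rtp_timestamp arrival_timestamp (same clock units).
printf '0 10\n20 35\n40 48\n60 75\n' | awk '{
  transit = $2 - $1                  # relative transit time of this packet
  if (NR > 1) {
    d = transit - prev; if (d < 0) d = -d
    j += (d - j) / 16                # RFC 3550 smoothing, gain 1/16
  }
  prev = transit
}
END { printf "jitter=%.4f\n", j }'
# prints: jitter=1.1223
```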



Very Good RFC reading for RTP
https://tools.ietf.org/html/rfc3550#page-94
https://www.sciencedirect.com/topics/computer-science/jitter => Jitter various things

Node JS - How to perform a traceroute check


There is an npm package for this: nodejs-traceroute.

Below is a reference implementation from the package vendor:


const Traceroute = require('nodejs-traceroute');

try {
    const tracer = new Traceroute();
    tracer
        .on('pid', (pid) => {
            console.log(`pid: ${pid}`);
        })
        .on('destination', (destination) => {
            console.log(`destination: ${destination}`);
        })
        .on('hop', (hop) => {
            console.log(`hop: ${JSON.stringify(hop)}`);
        })
        .on('close', (code) => {
            console.log(`close: code ${code}`);
        });

    tracer.trace('github.com');
} catch (ex) {
    console.log(ex);
}


With this implemented, each hop of the traceroute is logged as it is discovered.

References:
https://www.npmjs.com/package/nodejs-traceroute

Sunday, October 13, 2019

OWASP APK dissector

OWASP APK Dissector is an automated tool for performing static security analysis of mobile applications. It uses existing open-source applications and tries to automate the security analysis process. Right now it works on APK files only, and there are plans to enrich its features.


* Purely Java Based
* Analyze the contents of the APK file
* Decompile and extract the contents of the APK file
* Decompile the DEX files to JAVA source files (.dex to .java) [ New feature in v2.0 ]



References:
https://www.owasp.org/index.php/OWASP_APK_DISSECTOR

Android Secure Coding (JSSEC) - Part 1: Basic knowledge of secure design and coding



Basic terminologies

Assets => things that we want to protect
Threats => attacks that assets are susceptible to
Countermeasures => measures to protect assets from threats

There are two types of assets

1. Information assets -> such as information about a user
2. Function assets -> such as functions of the phone, e.g. calling, SMS, etc.

The above mostly pertains to the smartphone user. However, the application itself also has assets:

1. Program portion
2. Data portion

Below is a pictorial representation of this



References:
https://www.jssec.org/dl/android_securecoding_en.pdf

What is the Android App Security Improvement Program



The App Security Improvement program is a service provided to Google Play app developers to improve the security of their apps. The program provides tips and recommendations for building more secure apps and identifies potential security enhancements when apps are uploaded to Google Play. To date, the program has helped developers fix over 1,000,000 apps on Google Play.

The program lists various campaigns and the remediation for each of them.

References:
https://developer.android.com/google/play/asi

What is OWASP Zed Attack Proxy Project



The OWASP Zed Attack Proxy (ZAP) is one of the world’s most popular free security tools and is actively maintained by hundreds of international volunteers. It can help you automatically find security vulnerabilities in your web applications while you are developing and testing them. It's also a great tool for experienced pentesters to use for manual security testing.

Main features are:

Intercepting proxy => ZAP can see all requests and responses
Active & passive scanners => the active scanner performs a wide range of attacks
Spider => finds pages that are hidden from the user
Report generation
Brute force (using OWASP DirBuster code)
Fuzzing => finds subtle vulnerabilities that other automated scanners normally cannot find


Other interesting features are:

Auto tagging => this feature tags messages in ZAP so that we can easily see for example which pages have hidden fields
Port scanner => Helps to see which ports are open on the machine
Parameter analysis => Examines the parameters used across the application and reports which appear in each request
Smart card support => Useful for testing using smart cards or tokens for authentication
Session Comparison => Useful when application supports multiple roles.
External application support => To pass in urls to another application etc
API + Headless mode => ZAP can be run without the UI in headless mode and can be accessed via REST API, which is useful for automated testing
Dynamic SSL certificates => ZAP supports dynamic SSL certificates; we can generate a unique root certificate authority and ask the browser to trust it in order to intercept HTTPS traffic.
Anti-CSRF token handling => automatic handling of Cross-Site Request Forgery (CSRF) protection tokens

To do a Penetration test, Below is the recommended way

- Configure Browser to Proxy via ZAP
- Explore the application manually
- Use Spider to find hidden content
- See what issues were found by a passive scanner
- Use active scanner to find vulnerabilities

 

References:
https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project

What is SwiftUI




SwiftUI is an innovative, exceptionally simple way to build user interfaces across all Apple platforms with the power of Swift. Build user interfaces for any Apple device using just one set of tools and APIs. With a declarative Swift syntax that’s easy to read and natural to write, SwiftUI works seamlessly with new Xcode design tools to keep your code and design perfectly in sync.

Automatic support for Dynamic Type, Dark Mode, localization, and accessibility means your first line of SwiftUI code is already the most powerful UI code you’ve ever written.



References:
https://developer.apple.com/xcode/swiftui/

Downloading and installing ZAP proxy



There are multiple flavours available at the URL given in the references.
Going with the Mac installer first:

Downloaded the dmg and gave permission to open it anyway, it being a third-party download.
The machine took some time to do the verification.

Once the tool started, it gave an option to enter a target URL and allowed launching it in Firefox or Chrome.
When launched this way, it displays a heads-up display overlay showing various warnings and errors.
There is also a report section which exports the report for download.

Overall, the tool looks great.

References:
https://github.com/zaproxy/zaproxy/wiki/Downloads