Friday, September 30, 2022

Why backpropagation in RNNs isn't effective

To compute the gradient with respect to the previous hidden state (the downstream gradient), the upstream gradient flows through the tanh non-linearity and gets multiplied by the weight matrix. Since this downstream gradient flows back across time steps, the same computation happens over and over again at every time step. There are a couple of problems with this:

Since we’re multiplying over and over again by the weight matrix, the gradient will be scaled up or down depending on the largest singular value of the matrix: if the largest singular value is greater than 1, we’ll face an exploding gradient problem, and if it’s less than 1, we’ll face a vanishing gradient problem.

The gradient also passes through the tanh non-linearity, which has saturating regions at the extremes. Wherever the pre-activation is large in magnitude, the local derivative of tanh is close to zero, so the gradient gets squashed towards zero as it passes through the non-linearity. As a result, the gradient cannot propagate effectively across long sequences, which leads to ineffective optimization.

The exploding gradient problem can be mitigated by "clipping" the gradient when its norm crosses a certain threshold. Clipping does not help with vanishing gradients, however, so plain RNNs still cannot be used effectively for long sequences.
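Below is a minimal NumPy sketch of this effect (purely illustrative: the weight matrix, hidden size, and pre-activations are made up). It backpropagates a gradient through T steps of tanh followed by multiplication with W, and shows a simple norm-clipping helper of the kind used to tame exploding gradients.

import numpy as np

np.random.seed(0)
T, H = 50, 8                                        # time steps, hidden size (toy values)
W = np.random.randn(H, H) * 0.6                     # recurrent weight matrix (toy values)
pre_acts = [np.random.randn(H) for _ in range(T)]   # pretend pre-tanh activations

grad = np.ones(H)                                   # upstream gradient at the last time step
norms = []
for t in reversed(range(T)):
    grad = grad * (1.0 - np.tanh(pre_acts[t]) ** 2)  # local derivative of tanh (~0 when saturated)
    grad = W.T @ grad                                # backprop through the matrix multiply
    norms.append(np.linalg.norm(grad))

print("largest singular value of W:", np.linalg.svd(W, compute_uv=False)[0])
print("gradient norm after 1 step :", norms[0])
print("gradient norm after", T, "steps:", norms[-1])

def clip_grad(g, threshold=5.0):
    # rescale the gradient if its norm exceeds the threshold ("gradient clipping")
    n = np.linalg.norm(g)
    return g * (threshold / n) if n > threshold else g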

References:

https://towardsdatascience.com/backpropagation-in-rnn-explained-bdf853b4e1c2

Thursday, September 29, 2022

journalctl logging

Journalctl is a utility for querying and displaying logs from journald, systemd’s logging service. Since journald stores log data in a binary format instead of a plaintext format, journalctl is the standard way of reading log messages processed by journald.


In the following paragraphs, we’ll show you several ways of using journalctl to retrieve, format, and analyze your logs. These methods can be used on their own or in combination with other commands to refine your search. 


When run without any parameters, the following command will show all journal entries, which can be fairly long:


$ journalctl

The entries will start with a banner similar to this which shows the time span covered by the log.


-- Logs begin at Tue 2019-06-11 08:11:07 EDT, end at Mon 2019-06-24 15:18:11 EDT. --

Journalctl splits the results into pages, similar to the less command in Linux. You can navigate using the arrow keys, the Page Up/Page Down keys, and the space bar. To quit navigation, press the Q key.


Long entries are printed to the width of the screen and truncated off at the end if they don’t fit. The cut-off portion can be viewed using the left and right arrow keys.


Boot Messages

Journald tracks each log to a specific system boot. To limit the logs shown to the current boot, use the -b switch.


$ journalctl -b

You can view messages from an earlier boot by passing in its offset from the current boot. For example, the previous boot has an offset of -1, the boot before that is -2, and so on. Here, we are retrieving messages from the previous boot:


$ journalctl -b -1

To list the boots of the system, use the following command.


$ journalctl --list-boots

It will show a tabular result like this.


-3 5a035370cc264015a5afcad6e310769f Sun 2019-06-23 09:27:30 EDT—Sun 2019-06-23 11:26:45 EDT


-2 ff65fc7baac14b429a4f41828db669d4 Sun 2019-06-23 11:59:55 EDT—Sun 2019-06-23 12:29:46 EDT


-1 2d54fbdb9fc04087930fd7543f57e922 Sun 2019-06-23 20:29:15 EDT—Sun 2019-06-23 23:01:43 EDT


0 aa2a1cf3cc2143b2a0245403739a336e Mon 2019-06-24 09:23:50 EDT—Mon 2019-06-24 15:18:11 EDT

The first field is the offset (0 being the latest boot, -1 being the boot before that, and so on), followed by a Boot ID (a long hexadecimal number), followed by the time stamps of the first and the last messages related to that boot.


Time Ranges

To see messages logged within a specific time window, we can use the --since and --until options. The following command shows journal messages logged within the last hour.


$ journalctl --since "1 hour ago"

To see messages logged in the last two days, the following command can be used.


$ journalctl --since "2 days ago"

The command below will show messages between two dates and times. All messages logged on or after the since parameter and logged on or before the until parameter will be shown.


$ journalctl --since "2015-06-26 23:15:00" --until "2015-06-26 23:20:00"

For greater accuracy, format the date and time as “YYYY-MM-DD HH:MM:SS”. You can also use any format that follows the systemd.time specification.


By Unit

To see messages logged by any systemd unit, use the -u switch. The command below will show all messages logged by the Nginx web server. You can use the --since and --until switches here to pinpoint web server errors occurring within a time window.


$ journalctl -u nginx.service

The -u switch can be used multiple times to specify more than one unit source. For example, if you want to see log entries for both nginx and mysql, the following command can be used.


$ journalctl -u nginx.service -u mysql.service

Follow or Tail

Journalctl can print log messages to the console as they are added, much like the Linux tail command. To do this, add the -f switch:


$ journalctl -f

For example, this command “follows” the mysql service log.


$ journalctl -u mysql.service -f

To stop following and return to the prompt, press Ctrl+C.


Like the tail command, the -n switch will print the specified number of most recent journal entries. In the command below, we are printing the last 50 messages logged within the last hour.


$ journalctl -n 50 --since "1 hour ago"

The -r parameter shows journal entries in reverse chronological order, so the latest messages are printed first. The command below shows the last 10 messages from the sshd daemon, listed in reverse order.


$ journalctl -u sshd.service -r -n 10


How to check serial number of Cisco Products

The link in the references gives a good overview of the relevant commands.

Essentially, the show version and show inventory commands can give these details.

PID is the model name and SN is the serial number. These are listed per interface/module in show inventory; show version normally shows the chassis PID.

references

https://blog.router-switch.com/2020/08/how-to-check-the-serial-number-of-cisco-products/

Static configuration of DHCP on the IOS-XE devices

It is quite simple; the sequence below will do.


Prior to running the commands, the static-bindings file needs to be present at the TFTP server location, with mapping information like the example below:


*time* Sep 27 2022 03:52 PM

*version* 2

!IP address    Type    Hardware address     Lease expiration

10.89.197.7 /21   1       d4ad.71c2.f990       Infinite

*end*


Now the following can be run on the device:


enable

configure terminal

ip dhcp pool cemautomation

origin file tftp://<tftp server ip>/static-bindings

end

show ip dhcp binding


references:

https://www.cisco.com/en/US/docs/ios/12_4t/ip_addr/configuration/guide/htdhcpsv.html#wp1093993

Cisco Router how to view all interfaces

Type "show interfaces" and press the "Enter" key to view all MAC addresses on the router. Press the "Space Bar" to scroll through the output one page at a time. Press the "Enter" key to scroll through the output one line at a time.


references 

https://ciscorouterreview.weebly.com/how-to-find-all-ip--mac-addresses-on-a-router.html

PumpKin TFTP client

PumpKIN is an open source, fully functional, free TFTP server and TFTP client, which implements TFTP according to RFC1350. It also implements the block size option, which allows transfer of files over 32MB, as well as the transfer size and transfer timeout options described in RFC2348 and RFC2349.

The main PumpKIN features are:

Fully standard-compliant TFTP file transfer

Unlimited simultaneous transfers both for TFTP server and client

Support for TFTP blocksize option allows transfer of large files if both TFTP server and client support it

Convenient GUI

Combines TFTP server and TFTP client

Originally developed for Windows 95, it reportedly runs on all Win32 platforms: Windows 98, Windows NT, ME, XP, now also ported to Mac OS X (so far only tested on Mountain Lion)

Can run in background, taking up 256 pixels of screen nicely packed as a 16x16 square in your notification tray area (windows only)

Open source for those willing to add missing features, fix bugs and examine code for potential flaws and fun

You’re free to torture it the way you want to as long as you preserve the original author’s credentials

It would cost you nothing unless you’re willing to monetarily express your gratitude and make a donation (yes, it means “free” or “freeware”, just go and download it)

The download size is about that of the high quality screenshot below (windows only — Mac version is bigger, due to graphics supplied for way too many resolutions).

Note that PumpKIN is not an FTP server, nor is it an FTP client; it is a TFTP server and TFTP client. TFTP is not FTP, they are different protocols. TFTP, unlike FTP, is used primarily for transferring files to and from network equipment that supports a TFTP server (e.g. firmware upgrade or backup, or configuration backup and restore, for your router, switch, hub, whatnot), not for general-purpose serving of downloadable files or retrieving files from FTP servers around the world.


references:

https://kin.klever.net/pumpkin/


Wednesday, September 28, 2022

DHCP sequence diagram , a great tutorial

DHCP is normally used to assign a computer its IP address, as well as other parameters such as the address of the local router. Your computer, the client, uses the DHCP protocol to communicate with a DHCP server on the local network. Other computers on the local network also interact with the DHCP server. In deployments, there are several variations. For example, the local agent may be a DHCP relay that relays messages between local computers and a remote DHCP server. Or the DHCP server may be replicated for reliability, so that there are two or more local DHCP servers. For our purposes, it is sufficient to think about a single DHCP server.

The complete DHCP exchange involves four types of packets: Discover, for your computer to locate the DHCP server; Offer, for the server to offer an IP address; Request, for your computer to ask for an offered address; and Ack, for the server to grant the address lease. However, when a computer is re-establishing its IP address on a network that it has previously used, it may perform a short exchange involving only two types of DHCP packets: Request, to ask for the same IP address from the same server as was used before; and Ack, for the server to grant the address lease.



references:

https://kevincurran.org/com320/labs/wireshark/lab-dhcp.pdf


DHCP various client options

Some options are used by the client to provide the server with enough information to answer more specifically. For example, an IP phone may need some additional information about the registration server, or a graphical passive terminal may require the location of the font server.


Two main options are used in this case: the vendor class identifier (option 60) and the client identifier (option 61). The client identifier is unique and helps the DHCP server manage its clients and leases; it is generally set to the MAC address of the network interface on the local network. The vendor class identifier is more interesting, as it identifies the vendor type and configuration of a DHCP client in a simple character string. The format is open and can be interpreted by the server in order to adjust the answer options and content.


Analyzing the client identifier, vendor class identifier, and requested option list in the first phase of the DHCP exchange helps profile the client and provide it with an appropriate answer.


Example

On a laptop running Windows 10, the options pushed in the initial DHCP Discover frame can look like:


option 61: Client Identifier = 00:e0:4c:36:0a:ac

option 12: The laptop name

option 60: Vendor Class Identifier, here set to MSFT 5.0 (for Microsoft Windows 10)

option 55: Parameter Request List set to:

1: Subnet Mask

3: Router

6: Domain Name Server

15: Domain Name

31: Perform Router Discovery

33: Static Route

43: Vendor Specific Information

44: NetBIOS over TCP/IP Name Server

46: NetBIOS over TCP/IP Node Type

47: NetBIOS over TCP/IP Scope

119: Domain Search

121: Classless Static Route

249: Private/Classless Static Route Microsoft

252: Private/Proxy autodiscovery

For information, this specific Parameter Request List is identified as “Operating System/Windows OS/Microsoft Windows Kernel 10.0” by the Fingerbank API.


references:

https://www.efficientip.com/glossary/dhcp-option/

DHCP Various Server options

Common Options

Here is the list of the most common DHCP options exchanged with clients:


DHCP option 1: subnet mask to be applied on the interface asking for an IP address

DHCP option 3: default router or last resort gateway for this interface

DHCP option 6: which DNS (Domain Name Server) to include in the IP configuration for name resolution

DHCP option 51: lease time for this IP address


Interesting Options

Below is the list of other interesting options that can be provided to clients to ease their initial configuration:


DHCP option 2: time offset in seconds from UTC to be applied on the current time (note: deprecated by RFC4833 – options 100 and 101)

DHCP option 4: list of time servers as stated in RFC868 (Time Protocol)

DHCP option 12: host name of the client, very useful for IoT and any device without a user

DHCP option 15: specifies the domain name that the client should use as a suffix when resolving hostnames via the Domain Name System

DHCP option 42: list of the NTP Servers by order of preference, used for time synchronization of the client

DHCP option 58 and 59: Renewal Time Value (T1) and Rebinding Time Value (T2). See the chapter “DHCP Lease Time Management” on What is DHCP?

DHCP options 69 and 70: respectively for SMTP and POP3 servers for sending and receiving email. We do see these options often on printers and scanners

DHCP option 81: Client Fully Qualified Domain Name: this option allows automatic updates of the DNS records associated with the client, mainly the A and PTR records. In the option we can specify whether the client or the server will update the records, and the FQDN associated with the client. It is defined in RFC4702

DHCP option 100: time zone POSIX string as in IEEE 1003.1

DHCP option 101: time zone as a string like in the TZ database (eg: Europe/Paris)

DHCP option 119: DNS domain search list that will be used to perform DNS requests based on short name using the suffixes provided in this list.

DHCP option 121: classless static route table composed of multiple network and subnet mask, this option replaces the original one numbered 33 (see RFC3442)


references:

https://www.efficientip.com/glossary/dhcp-option/ 

What is a DHCP Option

Supplying DHCP options is a smart way to configure network clients during the early phase of network access deployment. In addition to providing the IP address, the DHCP protocol is able to set a large set of options that are very useful for device configuration.


DHCP is an evolution of the BOOTP protocol (see RFC951) designed at first to bootstrap a diskless client. When starting such a device, BOOTP provides sufficient configuration parameters for obtaining network access, firmware and software locations for images to be downloaded from a network file repository.


X11 diskless display terminals no longer exist, but many devices can take advantage of being configured at network access time, for example IoT devices. BOOTP, which brings additional option configuration to the historical RARP protocol (see RFC903) that provides only an IP address, has itself evolved into the DHCP protocol. With pool management and device mobility, DHCP is also able to handle a wide list of options to configure many kinds of devices.


Each option has a name and a numerical identifier to be transported in the protocol frames. DHCP server configuration can provide options to all devices asking for an IP address, or options bound to a specific client identifier or MAC address family.


Any client entering the network can ask for specific DHCP options in addition to its IP address (e.g. vendor class, hostname or authentication credentials). The list of options requested is generally used to fingerprint the DHCP clients on the network. Finally, DHCP options can be inserted by a relay agent that forwards a broadcast request from the local network to a central DHCP server.


references:

https://www.efficientip.com/glossary/dhcp-option/

The OC hierarchy in Optical world

The OC hierarchy goes as follows, starting with a T3/DS3 electrical carrier and then on to an OC-1:

DS3 (Electrical) = 44.736 Mbps = 28 T1s/DS1s

STS1 (Electrical) = (1) DS3 @ 44.736 Mbps with SONET (Synchronous Optical NETwork) overhead = 51.840 Mbps

OC-1 (Optical) = (1) STS1 on Optical facilities

OC-3 = (3) OC-1s = 155.52 Mbps

OC-9 = (9) OC-1s (not commonly used) = 466.56 Mbps

OC-12 = (12) OC-1s or (4) OC-3s = 622.08 Mbps

OC-18 = (18) OC-1s (not commonly used) = 933.12 Mbps

OC-24 = (24) OC-1s (not commonly used) = 1.244 Gbps

OC-36 = (36) OC-1s (not commonly used) = 1.866 Gbps

OC-48 = (48) OC-1s or (4) OC-12s or (16) OC-3s = 2.488 Gbps

OC-192 = (192) OC-1s or (4) OC-48s or (16) OC-12s or (64) OC-3s = 9.953 Gbps
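As a quick check of the arithmetic above, every OC-n rate is simply n times the 51.840 Mbps STS-1/OC-1 building block. A tiny illustrative Python snippet:

STS1_MBPS = 51.840
for n in (1, 3, 12, 48, 192):
    print(f"OC-{n}: {n * STS1_MBPS:.2f} Mbps")
# OC-1: 51.84, OC-3: 155.52, OC-12: 622.08, OC-48: 2488.32 (2.488 Gbps), OC-192: 9953.28 (9.953 Gbps)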

The stair-stepping of the OC hierarchy is due to the fact that the next available level of multiplexing ("muxing") of lower-level circuits is usually 4: (4) OC-3s = (1) OC-12, and (4) OC-48s = (1) OC-192.

This muxing scheme is usually dictated by the equipment manufacturers and is pretty much an adopted standard in the Telecom industry - hence the lack of the less-common bandwidth aggregations like OC-9, OC-18, etc. The only exception is the OC-3, which was needed to allow the upper-level hierarchy to work. Hope this tidbit of info helps in the future!


Compiled by Scott Kindorf, Network Technician, as quoted in Sunbelt W2K News of June 6, 2001.


references

http://www.techtransform.com/id147.htm


SONET Interfaces

SONET is widely used in the USA for very high-speed transmission of voice and data signals across the numerous world-wide fiber-optic networks.


SONET uses LEDs or lasers to transmit a binary stream of light-on and light-off sequences at a constant rate. At the far end, optical sensors convert the pulses of light back to electrical representations of the binary information.


In wavelength-division multiplexing (WDM), light at several different wavelengths (or colors to a human eye) is transmitted on the same fiber segment, greatly increasing the throughput of each fiber cable.


In dense wavelength-division multiplexing (DWDM), many optical data streams at different wavelengths are combined into one fiber.


The basic building block of the SONET hierarchy in the optical domain is OC1; in the electrical domain, the basic building block is STS1. OC1 operates at 51.840 Mbps. OC3 operates at 155.520 Mbps.


A SONET stream can consist of discrete lower-rate traffic flows that have been combined using Time-Division Multiplexing (TDM) techniques. This method is useful, but a portion of the total bandwidth is consumed by the TDM overhead. When a SONET stream consists of only a single, very high-speed payload, it is referred to as operating in concatenated mode. A SONET interface operating in this mode has a “c” added to the rate descriptor. For example, a concatenated OC48 interface is referred to as OC-48c.


DHCP server and DHCP relay

Some network appliances act as either DHCP servers or DHCP relay agents. The DHCP server feature allows devices on the same network as the appliance’s LAN/WAN interface to obtain their IP configuration from the appliance. The DHCP relay feature allows the appliance to forward DHCP packets between DHCP clients and servers.


The following are the benefits of using the DHCP server and DHCP relay features:


Reduce the amount of equipment at client site.

Replace router at client site (Easy deployment of edge router services).

Simplify the client site network.

Configuration of Router without CLI commands.

Reduce manual configuration on simple client sites.


DHCP server


It assigns and manages IP addresses from specified address pools within the network to DHCP clients. The DHCP server can be configured to assign further parameters such as the IP address of the Domain Name System (DNS) server and the default router. The DHCP server accepts address assignment requests and renewals. It also accepts broadcasts from locally attached LAN segments or DHCP requests forwarded by other DHCP relay agents within the network.





DHCP relay


A DHCP relay agent is a host or router that forwards DHCP packets between clients and servers. Network administrators can use the DHCP relay service of the SD-WAN appliances to relay requests and replies between local DHCP clients and a remote DHCP server. It allows local hosts to acquire dynamic IP addresses from the remote DHCP server. The relay agent receives DHCP messages and generates a new DHCP message to send out on another interface.


How to find version of an installed npm package

Use npm list for local packages or npm list -g for globally installed packages.


You can find the version of a specific package by passing its name as an argument. For example, npm list grunt will result in:


projectName@projectVersion /path/to/project/folder

└── grunt@0.4.1


Alternatively, you can just run npm list without passing a package name as an argument to see the versions of all your packages:


├─┬ cli-color@0.1.6

│ └── es5-ext@0.7.1

├── coffee-script@1.3.3

├── less@1.3.0

├─┬ sentry@0.1.2

│ ├── file@0.2.1

│ └── underscore@1.3.3

└── uglify-js@1.2.6


You can also add the --depth=0 argument to list installed packages without their dependencies.


https://stackoverflow.com/questions/10972176/find-the-version-of-an-installed-npm-package

Monday, September 26, 2022

T1 , E1 line differences

 





In T or E carrier lines, TDM is used. The physical cabling media can be a mix of UTP, STP, coaxial cable, fibre optics, and microwave links.

T1 and T3 leased lines are very expensive, and sometimes a fractional line is used. Even so, they are not suitable for residential users, especially since many other high-speed options are available.

references
https://www.youtube.com/watch?v=1ObEIGaIcb8

Ubuntu install latest node version

This is again done using nvm (Node Version Manager); the linked guide walks through installing nvm and then installing and updating Node.js and npm with it.

references:

https://www.freecodecamp.org/news/how-to-install-node-js-on-ubuntu-and-update-npm-to-the-latest-version/

Ubuntu - get the active interface and subnet details

The two commands below can give this information:


ip route

nmcli dev show ens192


references:

https://askubuntu.com/questions/197628/how-do-i-find-my-network-ip-address-netmask-and-gateway-info

Saturday, September 24, 2022

AI/ML what is np.random.permutation

A permutation refers to an arrangement of elements, e.g. [3, 2, 1] is a permutation of [1, 2, 3] and vice-versa. The NumPy random module provides two methods for this: shuffle() and permutation().


That is, the permutation is a group element and the shuffle is the result of its action on a particular word. One can debate whether this is a good use of terminology (it seems reasonable to me), but that is the distinction being made. It reflects a quite formal mathematical point of view.

Randomly permute a sequence, or return a permuted range. If x is a multi-dimensional array, it is only shuffled along its first index.

from numpy import random

import numpy as np


arr = np.array([1, 2, 3, 4, 5])

print(random.permutation(arr))
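To make the shuffle()/permutation() distinction above concrete: shuffle() rearranges the array in place and returns None, while permutation() leaves the input untouched and returns a permuted copy; on a multi-dimensional array only the first axis (the rows) is permuted. A small sketch:

import numpy as np

arr = np.array([1, 2, 3, 4, 5])
print(np.random.permutation(arr))   # a new permuted copy
print(arr)                          # original is unchanged

np.random.shuffle(arr)              # shuffles in place, returns None
print(arr)                          # original is now reordered

mat = np.arange(9).reshape(3, 3)
print(np.random.permutation(mat))   # only the rows (first axis) are permuted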

references:

https://www.w3schools.com/python/numpy/numpy_random_permutation.asp

AI/ML what is np.linspace

linspace(start, stop, num) returns an array of num evenly spaced numbers in the interval [start, stop]. Set the optional parameter endpoint to False to exclude stop and make the interval [start, stop). Optionally set retstep to True to also get the step size.

linspace is similar to the colon operator, “ : ”, but gives direct control over the number of points and always includes the endpoints. “ lin ” in the name “ linspace ” refers to generating linearly spaced values as opposed to the sibling function logspace , which generates logarithmically spaced values.

linspace lets you define the number of points: linspace(0, 1, 20) gives 20 evenly spaced numbers from 0 to 1 (inclusive). arange(0, 10, 2) gives however many numbers are needed to go from 0 to 10 (exclusive) in steps of 2. The big difference is that one uses a step value, the other a count.
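A few quick examples of the points above (count vs step, endpoint, retstep):

import numpy as np

print(np.linspace(0, 1, 5))                  # [0.   0.25 0.5  0.75 1.  ]  - 5 points, stop included
print(np.linspace(0, 1, 5, endpoint=False))  # [0.  0.2 0.4 0.6 0.8]       - stop excluded
vals, step = np.linspace(0, 1, 5, retstep=True)
print(step)                                  # 0.25 - the spacing between consecutive samples
print(np.arange(0, 10, 2))                   # [0 2 4 6 8] - step-based, stop excluded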

references:

https://www.google.com/search?q=np.linspace&oq=np.linspace&aqs=chrome.0.69i59j0i512l9.400j0j4&sourceid=chrome&ie=UTF-8


AI/ML How to round a numpy array?

 Numpy provides two identical methods to do this. Either use

np.round(data, 2)

or

np.around(data, 2)

as they are equivalent


>>> import numpy as np

>>> a = np.array([0.015, 0.235, 0.112])

>>> np.round(a, 2)

array([0.02, 0.24, 0.11])

>>> np.around(a, 2)

array([0.02, 0.24, 0.11])

>>> np.round(a, 1)

array([0. , 0.2, 0.1])


references:

https://stackoverflow.com/questions/46994426/how-to-round-a-numpy-array


Tuesday, September 20, 2022

AI/ML what is pip gdown

gdown is a Python tool for downloading files from Google Drive. If you use curl/wget, the download fails for a large file because of the security warning from Google Drive; gdown handles this. It also supports downloading from Google Drive folders (max 50 files per folder).

Installation

pip install gdown

# to upgrade

pip install --upgrade gdown
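A minimal usage sketch (the file id below is a placeholder, not a real file):

import gdown

file_id = "YOUR_FILE_ID"  # placeholder Google Drive file id
gdown.download(f"https://drive.google.com/uc?id={file_id}", "output.zip", quiet=False)

The same can be done from the command line with gdown <url-or-id> -O output.zip.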

ExcelJs beginner guide

It is an extremely useful package which provides the following features:


Creating workbook

Creating worksheet

Handling headers and footers

Setting frozen or split views

Setting auto filters

Data manipulation on rows and columns

Adding data validation

Adding styles

Inserting images to workbook



npm install exceljs

const ExcelJS = require('exceljs');


references:

https://levelup.gitconnected.com/beginners-guide-to-exceljs-63d7834ac08a

XSLX-Template NodeJS

This module provides a means of generating "real" Excel reports (i.e. not CSV files) in NodeJS applications.


The basic principle is this: You create a template in Excel. This can be formatted as you wish, contain formulae etc. In this file, you put placeholders using a specific syntax (see below). In code, you build a map of placeholders to values and then load the template, substitute the placeholders for the relevant values, and generate a new .xlsx file that you can then serve to the user.


Placeholders are inserted in cells in a spreadsheet. It does not matter how those cells are formatted, so e.g. it is OK to insert a placeholder (which is text content) into a cell formatted as a number or currency or date, if you expect the placeholder to resolve to a number or currency or date.


Scalars

Simple placeholders take the format ${name}. Here, name is the name of a key in the placeholders map. The value of this placeholder should be a scalar, i.e. not an array or object. The placeholder may appear on its own in a cell, or as part of a text string. For example:

| Extracted on: | ${extractDate} |

Tables

Finally, you can build tables made up of multiple rows. In this case, each placeholder should be prefixed by table: and contain both the name of the placeholder variable (a list of objects) and a key (in each object in the list). For example:


| Name                 | Age                 |

| ${table:people.name} | ${table:people.age} |

https://www.npmjs.com/package/xlsx-template

Monday, September 19, 2022

AI/ML ReduceLROnPlateau Keras

Reduce learning rate when a metric has stopped improving.

Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This callback monitors a quantity and if no improvement is seen for a 'patience' number of epochs, the learning rate is reduced.


tf.keras.callbacks.ReduceLROnPlateau(

    monitor="val_loss",

    factor=0.1,

    patience=10,

    verbose=0,

    mode="auto",

    min_delta=0.0001,

    cooldown=0,

    min_lr=0,

    **kwargs

)


reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,

                              patience=5, min_lr=0.001)


model.fit(X_train, Y_train, callbacks=[reduce_lr])


monitor: quantity to be monitored.

factor: factor by which the learning rate will be reduced. new_lr = lr * factor.

patience: number of epochs with no improvement after which learning rate will be reduced.

verbose: int. 0: quiet, 1: update messages.

mode: one of {'auto', 'min', 'max'}. In 'min' mode, the learning rate will be reduced when the quantity monitored has stopped decreasing; in 'max' mode it will be reduced when the quantity monitored has stopped increasing; in 'auto' mode, the direction is automatically inferred from the name of the monitored quantity.

min_delta: threshold for measuring the new optimum, to only focus on significant changes.

cooldown: number of epochs to wait before resuming normal operation after lr has been reduced.

min_lr: lower bound on the learning rate.



References:

https://keras.io/api/callbacks/reduce_lr_on_plateau/


AI/ML Python warpaffine() functionality

 OpenCV provides two transformation functions, cv2.warpAffine and cv2.warpPerspective, with which you can have all kinds of transformations. cv2.warpAffine takes a 2x3 transformation matrix while cv2.warpPerspective takes a 3x3 transformation matrix as input.

Scaling is just resizing of the image. OpenCV comes with the function cv2.resize() for this purpose. The size of the image can be specified manually, or you can specify the scaling factor. Different interpolation methods are used: preferable interpolation methods are cv2.INTER_AREA for shrinking and cv2.INTER_CUBIC (slow) & cv2.INTER_LINEAR for zooming. By default, the interpolation method used is cv2.INTER_LINEAR for all resizing purposes.

Translation is the shifting of an object’s location. If you know the shift in the (x,y) direction, say (t_x, t_y), the transformation matrix M is [[1, 0, t_x], [0, 1, t_y]].

You can make it into a NumPy array of type np.float32 and pass it into the cv2.warpAffine() function. See the example below for a shift of (100,50):

import cv2

import numpy as np


img = cv2.imread('messi5.jpg',0)

rows,cols = img.shape


M = np.float32([[1,0,100],[0,1,50]])

dst = cv2.warpAffine(img,M,(cols,rows))


cv2.imshow('img',dst)

cv2.waitKey(0)

cv2.destroyAllWindows()




In affine transformation, all parallel lines in the original image will still be parallel in the output image. To find the transformation matrix, we need three points from input image and their corresponding locations in output image. Then cv2.getAffineTransform will create a 2x3 matrix which is to be passed to cv2.warpAffine.


img = cv2.imread('drawing.png')

rows,cols,ch = img.shape


pts1 = np.float32([[50,50],[200,50],[50,200]])

pts2 = np.float32([[10,100],[200,50],[100,250]])


M = cv2.getAffineTransform(pts1,pts2)


dst = cv2.warpAffine(img,M,(cols,rows))


plt.subplot(121),plt.imshow(img),plt.title('Input')

plt.subplot(122),plt.imshow(dst),plt.title('Output')

plt.show()


For perspective transformation, you need a 3x3 transformation matrix. Straight lines will remain straight even after the transformation. To find this transformation matrix, you need 4 points on the input image and corresponding points on the output image. Among these 4 points, 3 of them should not be collinear. Then transformation matrix can be found by the function cv2.getPerspectiveTransform. Then apply cv2.warpPerspective with this 3x3 transformation matrix.




img = cv2.imread('sudokusmall.png')

rows,cols,ch = img.shape


pts1 = np.float32([[56,65],[368,52],[28,387],[389,390]])

pts2 = np.float32([[0,0],[300,0],[0,300],[300,300]])


M = cv2.getPerspectiveTransform(pts1,pts2)


dst = cv2.warpPerspective(img,M,(300,300))


plt.subplot(121),plt.imshow(img),plt.title('Input')

plt.subplot(122),plt.imshow(dst),plt.title('Output')

plt.show()

references:

https://opencv24-python-tutorials.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.html


AI/ML - What is ABC and ABCMeta in Python

abc.ABC is basically just an extra layer over metaclass=abc.ABCMeta, i.e. abc.ABC implicitly defines the metaclass for us.

(Source: https://hg.python.org/cpython/file/3.4/Lib/abc.py#l234)

class ABC(metaclass=ABCMeta):

    """Helper class that provides a standard way to create an ABC using

    inheritance.

    """

    pass

The only difference is that in the former case you need a simple inheritance and in the latter you need to specify the metaclass.

From What's new in Python 3.4 (emphasis mine):

New class ABC has ABCMeta as its meta class. Using ABC as a base class has essentially the same effect as specifying metaclass=abc.ABCMeta, but is simpler to type and easier to read.
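A small sketch showing the two equivalent ways of declaring an abstract base class:

from abc import ABC, ABCMeta, abstractmethod

class Shape(ABC):                    # simple inheritance from the ABC helper
    @abstractmethod
    def area(self):
        ...

class Solid(metaclass=ABCMeta):      # specifying the metaclass directly
    @abstractmethod
    def volume(self):
        ...

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

print(Square(3).area())   # 9
# Shape() would raise TypeError: can't instantiate an abstract class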

Thursday, September 15, 2022

Ubuntu how to check if DHCP server service is running
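The linked thread covers command-line ways to probe a DHCP service from a client (tools such as dhcping). For a quick local check on Ubuntu, assuming the ISC DHCP server package is in use, systemctl status isc-dhcp-server shows whether the service is running.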


references: 

https://serverfault.com/questions/171744/command-line-program-to-test-dhcp-service 

Wednesday, September 14, 2022

Ubuntu how to check if package is installed

dpkg -l | grep dhcp

The above command helps to search for packages that have "dhcp" in the name.

The result will be something like below:


ii  isc-dhcp-client                            4.4.1-2.1ubuntu5.20.04.3            amd64        DHCP client for automatically obtaining an IP address

ii  isc-dhcp-common                            4.4.1-2.1ubuntu5.20.04.3            amd64        common manpages relevant to all of the isc-dhcp packages

Now to list all packages below can be used

dpkg -l | less 

references:
https://askubuntu.com/questions/423355/how-do-i-check-if-a-package-is-installed-on-my-server 

Tuesday, September 13, 2022

How DHCP works?

Dynamic Host Configuration Protocol (DHCP) is a network service for automatically assigning IP Addresses to clients on a network. It follows a server-client architecture where the client requests a DHCP Server to get an IP Address. Most routers have a DHCP server built-in but we can use our own DHCP Server too.


When the computer boots up it doesn’t have an IP Address (Assuming it doesn’t have static IP Addressing configured, which most of the machines don’t have). It sends a broadcast (on the MAC Address with all F’s) called a DHCP Discover. DHCP Servers are designed to respond to such broadcasts.


They then send unicast traffic known as the DHCP Offer back to the requesting client. This DHCP Offer typically contains the Assigned IP Address, the Default Gateway’s IP Address, and the DNS Server’s IP Address.


The client on receiving the Offer sends a DHCP Request to the DHCP Server acknowledging that it has accepted the information given to it by the server.


DHCP Servers keep a record of the assigned IP Addresses to prevent double assignment or IP Address Collisions.


Since DHCP Servers respond to broadcast, they must be present on the local network and there shouldn’t be more than 1 DHCP Server on a local network.


Allocation Methods for DHCP

Following are the two allocation methods for a DHCP Server:


Manual: In this method, the IP Address is given on the basis of the MAC Address. This ensures that a particular machine gets a fixed IP Address as its IP Address is then tied to its MAC Address. The DHCP Server sends a constant configuration to the client depending on its MAC Address in this type of allocation.

Automatic: In this method, the IP Addresses are assigned automatically by the DHCP Server on a first-come, first-served basis from a pool of addresses. It can be further divided into two categories based on the Lease Time, i.e. the time for which an IP Address is assigned to a client.

Fixed Lease Time: When a DHCP client is no longer on the network for a specified period, the configuration is expired and released back to the address pool for use by other DHCP Clients. The client has to renegotiate to keep the previous IP Address.

Infinite Lease Time: This has the effect of permanently assigning an IP Address to a client.
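As a toy illustration of the two methods (not a real DHCP implementation), manual allocation can be thought of as a MAC-to-IP reservation table, while automatic allocation hands out addresses from a pool on a first-come, first-served basis:

class ToyDhcpPool:
    def __init__(self, pool, reservations=None):
        self.free = list(pool)                  # addresses for automatic allocation
        self.reservations = reservations or {}  # MAC -> fixed IP (manual allocation)
        self.leases = {}                        # MAC -> currently assigned IP

    def allocate(self, mac):
        if mac in self.reservations:             # manual: always the same address
            ip = self.reservations[mac]
        elif mac in self.leases:                 # renewing an existing lease
            ip = self.leases[mac]
        elif self.free:                          # automatic: first free address
            ip = self.free.pop(0)
        else:
            raise RuntimeError("address pool exhausted")
        self.leases[mac] = ip
        return ip

pool = ToyDhcpPool(
    pool=[f"10.0.0.{i}" for i in range(10, 20)],
    reservations={"00:11:22:33:44:55": "10.0.0.5"},
)
print(pool.allocate("00:11:22:33:44:55"))  # reserved (manual) -> 10.0.0.5
print(pool.allocate("aa:bb:cc:dd:ee:01"))  # automatic -> 10.0.0.10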



references

https://www.linuxfordevices.com/tutorials/ubuntu/dhcp-server-on-ubuntu

AI/ML Google Colab - Prompting for file upload

from google.colab import files

uploaded = files.upload()


This shows an upload prompt in the notebook.

Once uploaded, the file by default resides under the root of the Colab file system.



AI/ML Google Colab - Mounting Google Drive onto Google Colab

To mount Google Drive onto Colab:

from google.colab import drive

drive.mount('/content/drive')


Now, after this, the file can be accessed as below.
In this case, MyDrive is the folder:

from os.path import exists
project_folder='/content/drive/MyDrive/TestFiles/Project_data.zip'

import zipfile
file_exists = exists(project_folder)
print("File Exists ", file_exists)
with zipfile.ZipFile(project_folder, 'r') as zip_ref:
    zip_ref.extractall('Project_data')

In this case:
MyDrive was the root folder name displayed in the Colab file browser after mounting.
The TestFiles folder contains the zip file to be unzipped. The unzipped files end up, by default, under the root folder of the Colab file system (not the drive).


What is a TFTP Server

 TFTP (Trivial File Transfer Protocol) is a simplified version of FTP (File Transfer Protocol). It was designed to be easy and simple. TFTP leaves out many authentication features of FTP and it runs on UDP port 69. As it is very lightweight, it is still used for different purposes.

TFTP is used in places where you don’t need much security. Instead, you need a way to easily upload files to and download files from the server. Cisco devices use the TFTP protocol to store configuration files and Cisco IOS images for backup purposes. Network boot protocols such as BOOTP, PXE etc. use TFTP to boot operating systems over the network. Thin clients also use the TFTP protocol for booting operating systems. Many electronic circuit boards and microprocessors also use TFTP to download firmware into the chip. Overall, TFTP has many uses even today.

references:

https://linuxhint.com/install_tftp_server_ubuntu/ 

git list branches with creator names

This seems to be a useful command:

git for-each-ref --format='%(committerdate) %09 %(authorname) %09 %(refname)' --sort=committerdate

references:

https://stackoverflow.com/questions/12055198/find-out-a-git-branch-creator/19135644#19135644 

IP Address - convert to decimal from dotted notation

This is a pretty useful site to convert from dotted notation to decimal notation.

For example, the decimal notation for 10.10.10.10 is

168430090 
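The conversion itself is base-256 arithmetic: 10.10.10.10 = 10*256^3 + 10*256^2 + 10*256 + 10 = 168430090. A quick Python check:

import ipaddress

print(int(ipaddress.IPv4Address("10.10.10.10")))   # 168430090

octets = [int(o) for o in "10.10.10.10".split(".")]
print(octets[0] * 256**3 + octets[1] * 256**2 + octets[2] * 256 + octets[3])   # 168430090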


references:

https://www.ipaddressguide.com/ip


Mongo - Online editors

For Mongo, to experiment with various Mongo related queries etc., the page below can be used:

https://mongoplayground.net/

The docs are available at the same link. We can also share the documents and queries kept on the page with other people.

references:

https://mongoplayground.net/

Sunday, September 11, 2022

Python Major modules required for AI/ML

pip install opencv-python  => For OpenCV

pip install keras

pip install tensorflow

pip install Pillow  => PIL is provided by the Pillow package


Some of the import statements are

from imageio import imread 
import random as rn
from keras import backend as K
import tensorflow as tf
import matplotlib.pyplot as plt
import cv2

 

Python how to print GPU info


pip install gputil

pip install tabulate


However, the code below was returning an empty list on a MacBook with the M1 Pro chip (GPUtil relies on nvidia-smi, so it only reports NVIDIA GPUs).


 import GPUtil

from tabulate import tabulate

print("="*40, "GPU Details", "="*40)

gpus = GPUtil.getGPUs()

print("GPU list ",gpus)

list_gpus = []

for gpu in gpus:

    # get the GPU id

    gpu_id = gpu.id

    # name of GPU

    gpu_name = gpu.name

    # get % percentage of GPU usage of that GPU

    gpu_load = f"{gpu.load*100}%"

    # get free memory in MB format

    gpu_free_memory = f"{gpu.memoryFree}MB"

    # get used memory

    gpu_used_memory = f"{gpu.memoryUsed}MB"

    # get total memory

    gpu_total_memory = f"{gpu.memoryTotal}MB"

    # get GPU temperature in Celsius

    gpu_temperature = f"{gpu.temperature} °C"

    gpu_uuid = gpu.uuid

    list_gpus.append((

        gpu_id, gpu_name, gpu_load, gpu_free_memory, gpu_used_memory,

        gpu_total_memory, gpu_temperature, gpu_uuid

    ))

print(tabulate(list_gpus, headers=("id", "name", "load", "free memory", "used memory", "total memory", "temperature", "uuid")))

Python how to print the System info

 import platform,socket,re,uuid,json,psutil,logging


def getSystemInfo():

    try:

        info={}

        info['platform']=platform.system()

        info['platform-release']=platform.release()

        info['platform-version']=platform.version()

        info['architecture']=platform.machine()

        info['hostname']=socket.gethostname()

        info['ip-address']=socket.gethostbyname(socket.gethostname())

        info['mac-address']=':'.join(re.findall('..', '%012x' % uuid.getnode()))

        info['processor']=platform.processor()

        info['ram']=str(round(psutil.virtual_memory().total / (1024.0 **3)))+" GB"

        return json.dumps(info)

    except Exception as e:

        logging.exception(e)


json.loads(getSystemInfo())


Docker postgres container how to export and import data from

The two commands below help to do that:

docker exec -u postgres test_postgres pg_dump -Cc | xz > test-backup-$(date -u +%Y-%m-%d).sql.xz

xz -dc test-backup-2022-09-11.sql.xz | docker exec -i -u postgres test_postgres psql 


Below is some explanation of the switch options 

-u postgres 

We want to run the command as the postgres user because the docker exec command defaults to using the root user and the root user does not have access to the database.

test_postgres

This is the name of the Docker container running PostgreSQL. If you created the container with a different name, substitute it here.

pg_dump

pg_dump is the PostgreSQL database backup utility. It converts a database to an SQL script. It can also convert to some other formats, but we aren’t going to use those right now.

-Cc

Equivalent to --create --clean.

--create tells pg_dump to include tables, views, and functions in the backup, not just the data contained in the tables.

--clean tells pg_dump to start the SQL script by dropping the data that is currently in the database. This makes it easier to restore in one step.

| xz

We run the SQL script through a compression program (in this case, xz) to reduce the amount of storage space taken by the backup and to reduce the time it takes to transfer the backup over the network. This is optional, and other commands can be used in place of xz, such as gzip and bzip2. To get even better compression, the -9 and/or -e options can be specified. -9 makes xz use much more memory, and -e makes xz use much more CPU power. However, the default compression level should be good enough in nearly every case.

> test-backup-$(date -u +%Y-%m-%d).sql.xz

The compressed SQL script is currently being written to the standard output, so we redirect it to a file with a name like test-backup-2022-09-11.sql.xz. This will be placed in the current directory when you run the command.

xz -dc test-backup-2022-09-11.sql.xz |

Because we compressed the SQL script in the previous command, we need to decompress it before we can restore the backup. -dc is equivalent to --decompress --stdout.

--decompress tells xz that we want to decompress the file, not compress it again.

--stdout tells xz that it should write the contents of the file to the standard output and not delete the .xz file. Without this, xz will write the output to a file named test-backup-2022-09-11.sql and delete the compressed version.

gzip and bzip2 both use the same meaning for -dc as xz.

-i

This tells Docker to keep the standard input open so the SQL script can be sent to psql.

psql

This is the PostgreSQL interactive SQL command line. In this case, we’re using it to run the SQL script containing the database backup.

--set ON_ERROR_STOP=on

Tells psql to stop executing the restore if an error occurs.

--single-transaction

Tells psql to run the entire restore in one transaction so that any problem that causes it to stop doesn’t leave the database in an inconsistent state.


references:
https://inedo.com/support/kb/1145/accessing-a-postgresql-database-in-a-docker-container

Docker postgres environment variables

 

POSTGRES_PASSWORD

POSTGRES_USER

PGDATA

POSTGRES_DB

POSTGRES_INITDB_ARGS


Environment variables can be overridden using the methods below, depending on your case.


Run image: If running the Docker image directly, include the environment variables in docker run with -e KEY=VALUE, as mentioned here: https://docs.docker.com/engine/reference/run/


Dockerfile: If you need to specify the environment variables in the Dockerfile, specify them as below:


ENV POSTGRES_PASSWORD=secret

ENV POSTGRES_USER=postgres



Docker how to shell into bash

The general format is:

docker exec -it (container) (command)


docker exec -t -i container_name /bin/bash

docker exec -ti container_name /bin/bash

docker exec -ti container_name sh


Use opener (https://github.com/artemkaxboy/docker-opener):

Requires adding an alias in your environment: opener wordpress

Works anywhere: docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock artemkaxboy/opener wordpress

Instead of wordpress, you can use the name, id, or image name of any container you want to connect to.



Docker copying from to host

 sudo docker cp container-id:/path/filename.txt ~/Desktop/filename.txt

Obtain the name or id of the Docker container

Issue the docker cp command and reference the container name or id

The first parameter of the docker copy command is the path to the file inside the container

The second parameter of the docker copy command is the location to save the file on the host

Edit and use the file copied from inside the container to your host machine

references:

https://www.theserverside.com/blog/Coffee-Talk-Java-News-Stories-and-Opinions/How-to-copy-files-from-a-Docker-container-to-a-host-machine