Monday, May 22, 2023

Kubernetes: how to get the CPU and memory usage

from kubernetes import client, config

# Load Kubernetes configuration from the default location,
# or provide your own kubeconfig file
config.load_kube_config()

# Create Kubernetes API client
api_client = client.ApiClient()

# Retrieve list of nodes in the cluster
v1 = client.CoreV1Api(api_client)
node_list = v1.list_node().items

# Query the metrics API once for all node metrics
# (list_cluster_custom_object takes group, version, plural)
metrics_api = client.CustomObjectsApi(api_client)
metrics = metrics_api.list_cluster_custom_object("metrics.k8s.io", "v1beta1", "nodes")

# Iterate over each node and report its resource usage
for node in node_list:
    node_name = node.metadata.name
    print(f"Node: {node_name}")

    # Find the metrics for the current node
    for metric in metrics["items"]:
        if metric["metadata"]["name"] == node_name:
            cpu_usage = metric["usage"]["cpu"]
            memory_usage = metric["usage"]["memory"]
            print(f"CPU Usage: {cpu_usage}")
            print(f"Memory Usage: {memory_usage}")

    print("--------------------")



This code snippet assumes you have the Kubernetes Python client library (kubernetes) installed. If you haven't installed it, you can use pip install kubernetes to install it.


The code first loads the Kubernetes configuration using config.load_kube_config(). This assumes that you have configured your Kubernetes credentials, or it uses the default configuration if available. Alternatively, you can provide the path to your kubeconfig file using config.load_kube_config(config_file='path/to/kubeconfig').


Then, it creates a Kubernetes API client using client.ApiClient(). This client will be used to interact with the Kubernetes API.


Next, it retrieves the list of nodes in the cluster using the list_node() method from the CoreV1Api class.


The code then queries the metrics API once, using the CustomObjectsApi class, and stores the result in the metrics variable. It then iterates over each node to retrieve that node's resource usage.


Within the loop, it searches for the metrics corresponding to the current node using the node name. It retrieves the CPU and memory usage from the metrics data.


Finally, it prints the node name, CPU usage, and memory usage for each node.


Please note that this code assumes that the metrics server is installed in your Kubernetes cluster and available at the metrics.k8s.io API group. If your cluster does not have the metrics server installed or uses a different metrics provider, you may need to adjust the code accordingly.


Also, ensure that you have the necessary permissions to access the metrics server and retrieve the required metrics.
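For a quick check without writing any code, the same information is available from the command line (again assuming metrics-server is installed):

kubectl top nodes
kubectl top pods -A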

Time Series analysis

Random Forest Classifier (RFC)

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Read log file into a pandas DataFrame
df = pd.read_csv('logfile.csv', parse_dates=['timestamp'])

# Set timestamp as the index
df.set_index('timestamp', inplace=True)

# Select the relevant dependent variables and target variable for modeling
# Replace 'var1', 'var2', 'var3' with your own variable names
# 'target' should represent whether the server went down (1) or not (0)
data = df[['var1', 'var2', 'var3', 'target']]

# Split the data into features (X) and target (y)
X = data[['var1', 'var2', 'var3']]
y = data['target']

# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create and fit the Random Forest Classifier
model = RandomForestClassifier()
model.fit(X_train, y_train)

# Perform predictions on the test set
predictions = model.predict(X_test)

# Calculate accuracy
accuracy = accuracy_score(y_test, predictions)
print(f'Accuracy: {accuracy}')

# You can also use the trained model to predict on new data
# new_data = pd.DataFrame({'var1': [val1], 'var2': [val2], 'var3': [val3]})
# new_prediction = model.predict(new_data)



Time Series Analysis: Vector Autoregression (VAR)

import pandas as pd
from statsmodels.tsa.vector_ar.var_model import VAR
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt

# Read log file into a pandas DataFrame
df = pd.read_csv('logfile.csv', parse_dates=['timestamp'])

# Set timestamp as the index
df.set_index('timestamp', inplace=True)

# Select the relevant dependent variables for modeling
data = df[['var1', 'var2', 'var3']]  # Replace 'var1', 'var2', 'var3' with your own variable names

# Split the data into train and test sets
train_size = int(0.8 * len(data))
train_data, test_data = data[:train_size], data[train_size:]

# Create and fit the VAR model
model = VAR(train_data)
model_fit = model.fit()

# Perform predictions on the test set
# forecast() expects the last lag_order observations of the training data
lag_order = model_fit.k_ar
predictions = model_fit.forecast(train_data.values[-lag_order:], steps=len(test_data))

# Calculate root mean squared error (RMSE)
rmse = mean_squared_error(test_data, predictions, squared=False)
print(f'RMSE: {rmse}')

# Plot the actual values and predicted values
plt.figure(figsize=(12, 6))
plt.plot(test_data.index, test_data['var1'], label='Actual')
plt.plot(test_data.index, predictions[:, 0], label='Predicted')
plt.title('Time Series Forecasting')
plt.xlabel('Date')
plt.ylabel('Variable 1')
plt.legend()
plt.grid(True)
plt.show()


SARIMAX 

import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
import matplotlib.pyplot as plt

# Read log file into a pandas DataFrame
df = pd.read_csv('logfile.csv', parse_dates=['timestamp'])

# Set timestamp as the index
df.set_index('timestamp', inplace=True)

# Prepare the data for modeling
# Assuming the 'count' column represents the system error count
data = df[['count']].copy()

# Split the data into train and test sets
train_size = int(0.8 * len(data))
train_data, test_data = data[:train_size], data[train_size:]

# Create and fit the SARIMAX model
model = SARIMAX(train_data, order=(1, 1, 1), seasonal_order=(1, 1, 1, 24))
model_fit = model.fit()

# Perform predictions on the test set
predictions = model_fit.predict(start=len(train_data), end=len(train_data) + len(test_data) - 1)

# Plot the actual values and predicted values
plt.figure(figsize=(12, 6))
plt.plot(data.index[train_size:], test_data, label='Actual')
plt.plot(data.index[train_size:], predictions, label='Predicted')
plt.title('System Error Prediction')
plt.xlabel('Date')
plt.ylabel('Error Count')
plt.legend()
plt.grid(True)
plt.show()


Tuesday, May 16, 2023

Dictionary with depth and width

Nice snippet for creating a nested dict with a given depth and width.


def create_nested_dict(depth, width):
    if depth == 0:
        return {}

    nested_dict = {}
    nested_dict['level'] = depth
    nested_dict['data'] = {}

    for i in range(width):
        nested_dict['data'][f'key_{i}'] = create_nested_dict(depth - 1, width)

    return nested_dict

# Set the desired depth and width of the nested dictionary
depth = 3
width = 2

# Create the nested dictionary
nested_dict = create_nested_dict(depth, width)

# Print the nested dictionary
import pprint
pprint.pprint(nested_dict)

Sunday, May 14, 2023

MS Research: What are some of the best algorithms for log file analysis

ARIMA (AutoRegressive Integrated Moving Average): ARIMA models are a class of linear models that can capture trends and seasonality in time series data. They can be used for forecasting log file metrics such as request volume, error rates, and response times.


LSTM (Long Short-Term Memory): LSTM is a type of recurrent neural network that is well-suited for modeling sequences of data with long-term dependencies. It can be used for forecasting log file metrics such as network traffic, resource utilization, and user behavior.


Prophet: Prophet is a forecasting library developed by Facebook that is designed for time series data with strong seasonal patterns. It can be used for forecasting log file metrics such as web traffic, page views, and user activity.


Holt-Winters: Holt-Winters is a triple exponential smoothing method that can be used for forecasting time series data with trends and seasonality. It can be used for forecasting log file metrics such as system performance, application usage, and user engagement.


VAR (Vector Autoregression): VAR is a multivariate time series model that can capture dependencies between multiple variables. It can be used for forecasting log file metrics such as resource allocation, system utilization, and user interactions.
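As a concrete illustration of one of these, here is a minimal Holt-Winters sketch using statsmodels (the 'count' column name and the 24-period daily seasonality are assumptions about the log data):

import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Read hourly log metrics into a DataFrame indexed by timestamp
df = pd.read_csv('logfile.csv', parse_dates=['timestamp'], index_col='timestamp')

# Split the assumed 'count' metric into train and test sets
train_size = int(0.8 * len(df))
train, test = df['count'][:train_size], df['count'][train_size:]

# Triple exponential smoothing with additive trend and daily seasonality
model = ExponentialSmoothing(train, trend='add', seasonal='add', seasonal_periods=24)
model_fit = model.fit()

# Forecast over the test horizon
forecast = model_fit.forecast(len(test))
print(forecast.head())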

references:

MS Research - Timeseries for Log analysis

This one is a good research paper:

Our goal is to develop models for the analysis of searchers' behaviors over time and investigate if time series analysis is a valid method for predicting relationships between searcher actions. Time series analysis is a method often used to understand the underlying characteristics of temporal data in order to make forecasts. In this study, we used a Web search engine transactional log and time series analysis to investigate users' actions. We conducted our analysis in two phases. In the initial phase, we employed a basic analysis and found that 10% of searchers clicked on sponsored links. However, from 22:00 to 24:00, searchers almost exclusively clicked on the organic links, with almost no clicks on sponsored links. In the second and more extensive phase, we used a one-step prediction time series analysis method along with a transfer function method. The period rarely affects navigational and transactional queries, while rates for transactional queries vary during different periods. Our results show that the average length of a searcher session is approximately 2.9 interactions and that this average is consistent across time periods. Most importantly, our findings show that searchers who submit the shortest queries (i.e., in number of terms) click on the highest ranked results. We discuss implications, including predictive value, and future research.

references:

https://faculty.ist.psu.edu/jjansen/academic/jansen_time_series_analysis.pdf

What is TensorFlow Gradient Tape

The most useful application of GradientTape is when you design a custom layer in your Keras model, or equivalently a custom training loop for your model.

If you have a custom layer, you can define exactly how the operations occur within that layer, including how the gradients are computed and how the loss is accumulated.

So GradientTape simply gives you direct access to the individual gradients in the layer.

Here is an example from Aurélien Géron's 2nd edition book on TensorFlow.

Say you have a function like the following:

def f(w1, w2):
    return 3 * w1 ** 2 + 2 * w1 * w2

Now if you want to take derivatives of this function with respect to w1 and w2:

import tensorflow as tf

w1, w2 = tf.Variable(5.), tf.Variable(3.)

with tf.GradientTape() as tape:
    z = f(w1, w2)

gradients = tape.gradient(z, [w1, w2])

The tape computes the gradients and gives you direct access to their values. You can then double them, square them, scale them, whatever you like, and apply the adjusted gradients in the backpropagation step.
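A minimal sketch of applying adjusted gradients with an optimizer (the scaling factor is arbitrary, purely to illustrate the idea):

# Continuing from the example above
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

# Example adjustment: scale each gradient before applying it
adjusted = [g * 2.0 for g in gradients]

# Apply the adjusted gradients to the variables
optimizer.apply_gradients(zip(adjusted, [w1, w2]))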

references:

Aurélien Géron, Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2nd Edition


Friday, May 12, 2023

Docker: install a specific version on Linux

sudo dnf --refresh update

sudo dnf upgrade

sudo dnf install yum-utils

sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

sudo dnf install docker-ce docker-ce-cli containerd.io docker-compose-plugin


If a specific version needs to be installed, first list the available versions:

yum list docker-ce --showduplicates | sort -r


To remove older versions:

sudo yum remove -y docker-ce docker-ce-cli


[cloud_user@info2c ~]$ yum list docker-ce --showduplicates | sort -r

docker-ce.x86_64                3:20.10.2-3.el8                 docker-ce-stable

docker-ce.x86_64                3:20.10.1-3.el8                 docker-ce-stable

docker-ce.x86_64                3:20.10.0-3.el8                 docker-ce-stable

docker-ce.x86_64                3:19.03.14-3.el8                docker-ce-stable

docker-ce.x86_64                3:19.03.13-3.el8                docker-ce-stable



Now install the specific version:

sudo yum install docker-ce-20.10.2 docker-ce-cli-20.10.2 containerd.io


references:

https://ostechnix.com/install-docker-almalinux-centos-rocky-linux/



Wednesday, May 10, 2023

Python how to get the caller file name and line number

import inspect

def info(msg):
    # inspect.stack()[1] is the caller's frame record
    frm = inspect.stack()[1]
    print('caller file:', frm.filename)
    print('caller line:', frm.lineno)
    mod = inspect.getmodule(frm[0])
    print('caller module:', mod.__name__)
    print('message:', msg)

info('hello')


references:

https://stackoverflow.com/questions/1095543/get-name-of-calling-functions-module-in-python

What do double and single underscores mean in Python

While none of this is strictly enforced by Python, the naming convention of a double underscore means "private", while a single underscore means "protected".


A double underscore is meant to protect subclasses from causing errors by using the same name. By namespacing them by class name, the defining class can be sure its variables will remain valid.


A single underscore means that subclasses are free to access and modify those same attributes, being shared with the super class.


Both forms suggest that any outside class should not be accessing these.


class A(object):
    __private = 'private'
    _protected = 'protected'

    def show(self):
        print(self.__private)
        print(self._protected)


class B(A):
    __private = 'private 2'
    _protected = 'protected 2'


a = A()
a.show()
# private
# protected

b = B()
b.show()
# private
# protected 2

This example shows that even though class B defined a new __private, it does not affect the inherited show method, which still accesses the original superclass attribute. _protected is however modified and the superclass show method reflects this, because they are the same attribute.
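You can see the name mangling directly, since the "private" attributes are stored under class-qualified names:

print(A._A__private)   # private
print(B._B__private)   # private 2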


references:

https://stackoverflow.com/questions/12117087/python-hide-methods-with

Monday, May 8, 2023

How to provide a custom Filter for Python logging

# server.py
import uuid
import logging
import flask
from flask import Flask

def get_request_id():
    if getattr(flask.g, 'request_id', None):
        return flask.g.request_id

    new_uuid = uuid.uuid4().hex[:10]
    flask.g.request_id = new_uuid

    return new_uuid

class RequestIdFilter(logging.Filter):
    # This is a logging filter that makes the request ID available for use in
    # the logging format. Note that we're checking if we're in a request
    # context, as we may want to log things before Flask is fully loaded.
    def filter(self, record):
        record.req_id = get_request_id() if flask.has_request_context() else ''
        return True


logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

# The StreamHandler responsible for displaying
# logs in the console.
sh = logging.StreamHandler()
sh.addFilter(RequestIdFilter())

# Note: the "req_id" param name must be the same as in
# RequestIdFilter.filter
log_formatter = logging.Formatter(
    fmt="%(module)s: %(asctime)s - %(levelname)s - ID: %(req_id)s - %(message).1000s"
)
sh.setFormatter(log_formatter)
logger.addHandler(sh)

app = Flask(__name__)


@app.route("/")
def hello():
    logger.info("Hello world!")
    logger.info("I am a log inside the /hello endpoint")
    return "Hello World!"


if __name__ == "__main__":
    app.run()

references:

https://stackoverflow.com/questions/69363431/add-session-id-or-some-unique-id-per-session-in-python-logs

What is Union in Python?

Union here has nothing to do with the unions found in C. It just means that you can provide either a str or a Path for the filename argument... whatever a Path is in this context.

Generally, when you see an unfamiliar function you have no idea what argument types it accepts, so the typing library (and its syntax) gives you a way to document that. If you want to say "give me either a string or a path, but nothing else", Union is the tool for the job.
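A minimal sketch (read_text is a hypothetical function name):

from pathlib import Path
from typing import Union

def read_text(filename: Union[str, Path]) -> str:
    # Accepts either a plain string path or a pathlib.Path
    return Path(filename).read_text()

print(read_text('notes.txt'))
print(read_text(Path('notes.txt')))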

references:

https://stackoverflow.com/questions/40816798/how-to-provide-argument-as-union

A good example of a Python logger wrapper

This one is a pretty sophisticated one.

references:

https://bbengfort.github.io/2016/01/logging-mixin/

What are *args and **kwargs in Python

*args (Non-Keyword Arguments)

**kwargs (Keyword Arguments)


The special syntax *args in function definitions in Python is used to pass a variable number of arguments to a function. It is used to pass a non-keyworded, variable-length argument list.


The syntax is to use the symbol * to take in a variable number of arguments; by convention, it is often used with the word args.

What *args allows you to do is take in more arguments than the number of formal arguments that you previously defined. With *args, any number of extra arguments can be tacked on to your current formal parameters (including zero extra arguments).

For example, we want to make a multiply function that takes any number of arguments and is able to multiply them all together. It can be done using *args, as shown in the sketch below.

Using the *, the variable that we associate with the * becomes iterable, meaning you can do things like iterate over it or run higher-order functions such as map and filter on it.
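A minimal sketch of that multiply function:

def multiply(*args):
    # args arrives as a tuple of all positional arguments
    result = 1
    for num in args:
        result *= num
    return result

print(multiply(2, 3, 4))  # 24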

The special syntax **kwargs in function definitions in Python is used to pass a keyworded, variable-length argument list. We use the name kwargs with the double star. The reason is that the double star allows us to pass through keyword arguments (and any number of them).


A keyword argument is where you provide a name to the variable as you pass it into the function.

One can think of kwargs as a dictionary that maps each keyword to the value that we pass alongside it. (In older versions of Python the iteration order appeared arbitrary; since Python 3.7, dicts preserve insertion order.)
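And a minimal **kwargs sketch:

def describe(**kwargs):
    # kwargs arrives as a dict of all keyword arguments
    for key, value in kwargs.items():
        print(f'{key} = {value}')

describe(host='localhost', port=8080)
# host = localhost
# port = 8080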

references:

https://www.geeksforgeeks.org/args-kwargs-python/


Wednesday, May 3, 2023

Python Root Logger and problems

Let’s look at the below code:


# 1. code inside myprojectmodule.py

import logging

logging.basicConfig(filename='module.log')


#-----------------------------


# 2. code inside main.py (imports the code from myprojectmodule.py)

import logging

import myprojectmodule  # This runs the code in myprojectmodule.py


logging.basicConfig(filename='main.log')  # No effect, because basicConfig() has already run!

Imagine you have one or more modules in your project, and these modules use the root logger. Then, when importing the module ('myprojectmodule.py'), all of that module's code runs and the root logger gets configured.


Once configured, the main file (which imported the 'myprojectmodule' module) can no longer change the root logger settings, because logging.basicConfig(), once set, cannot be changed; subsequent calls are ignored.


That means, if you want to log the messages from myprojectmodule to one file and the logs from the main module to another file, the root logger can't do that.
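A common workaround (a sketch, not from the referenced article) is to skip the root logger entirely and give each module its own named logger with its own handler:

import logging

def make_logger(name, filename):
    # Each named logger gets its own file handler,
    # independent of the root logger's configuration
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(filename)
    handler.setFormatter(logging.Formatter('%(asctime)s %(name)s %(levelname)s %(message)s'))
    logger.addHandler(handler)
    return logger

main_logger = make_logger('main', 'main.log')
module_logger = make_logger('myprojectmodule', 'module.log')

main_logger.info('goes to main.log')
module_logger.info('goes to module.log')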


references:

https://www.machinelearningplus.com/python/python-logging-guide/


kubectl restarting pods


kubectl scale deployment base-migrate --replicas=0 -n bpa-ns
deployment.apps/base-migrate scaled

kubectl scale deployment base-migrate --replicas=1 -n bpa-ns
deployment.apps/base-migrate scaled

kubectl rollout restart deployment db-migrate -n test-ns

 references:

https://loft.sh/blog/how-to-restart-pods-in-kubectl-a-tutorial-with-examples/

Monday, May 1, 2023

What is keepalived

keepalived is a framework for both load balancing and high availability that implements VRRP, a protocol you may recognize from some routers. It creates a Virtual IP (or VIP, or floating IP) that acts as a gateway to route traffic to all participating hosts. This VIP provides a high-availability setup that can fail over to another host in the event that one goes down.
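A minimal illustrative keepalived.conf sketch (the interface name, router ID, and VIP address are assumptions; adjust for your environment):

vrrp_instance VI_1 {
    state MASTER            # BACKUP on the standby node
    interface eth0          # NIC that carries the VIP
    virtual_router_id 51    # must match across all participating hosts
    priority 100            # highest priority wins the MASTER election
    advert_int 1            # VRRP advertisement interval in seconds
    virtual_ipaddress {
        192.168.1.100       # the floating VIP
    }
}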


references:

https://docs.technotim.live/posts/keepalived-ha-loadbalancer/


Kubernetes: getting to know the ingress controller

In a nutshell, an ingress controller is a reverse proxy for the Kubernetes universe: it routes traffic from the outside world to the correct service within a Kubernetes cluster, and allows you to configure an HTTP or HTTPS load balancer for said cluster.


To better understand this, let’s take a step back first and look at the Ingress itself. A Kubernetes Ingress is an API object that determines how incoming traffic from the internet should reach the internal cluster Services, which then in turn send requests to groups of Pods. The Ingress itself has no power over the system — it is actually a configuration request for the ingress controller.


The ingress controller accepts traffic from outside the Kubernetes platform and load balances it to Pods running inside the platform, this way adding a layer of abstraction to traffic routing. Ingress controllers convert configurations from Ingress resources into routing rules recognized and implemented by reverse proxies.


Ingress controllers are used to expose multiple services from within your Kubernetes cluster to the outside world, using a single endpoint — for example, a DNS name or IP address —  to access them. Specifically, ingress controllers are used to:


Expose multiple services under a single DNS name

Implement path-based routing, where different URLs map to different services

Implement host-based routing, where different hostnames map to different services

Implement basic authentication or other access control methods for your applications

Implement rate limiting for your applications

Offload SSL/TLS termination from your applications to the ingress controller
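For example, a minimal path-based routing Ingress might look like this (the host, service names, and ports are hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /api              # requests to example.com/api ...
        pathType: Prefix
        backend:
          service:
            name: api-service   # ... go to this Service
            port:
              number: 80
      - path: /web              # requests to example.com/web ...
        pathType: Prefix
        backend:
          service:
            name: web-service   # ... go to this one
            port:
              number: 80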

references:

https://traefik.io/blog/reverse-proxy-vs-ingress-controller-vs-api-gateway/