Saturday, December 31, 2022

What is P99 Latency?

It is the 99th percentile of the latency distribution. It means that 99% of the requests should be faster than the given latency; in other words, only 1% of the requests are allowed to be slower.


Imagine that you are collecting performance data for your service, and the table below is the collection of results (the latency values are fictional, chosen to illustrate the idea):


Latency    Number of requests

1s         5

2s         5

3s         10

4s         40

5s         20

6s         15

7s         4

8s         1


The P99 latency of your service is 7s. The table holds 100 requests in total, and only 1% of them (the single 8s request) take longer than that. So, if you can decrease the P99 latency of your service, you improve its tail performance.
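Assuming the 100 requests above are the full sample, the P99 can be recomputed with a short shell pipeline (this sketch uses the nearest-rank method: take the value at position NR*0.99 of the sorted sample):

```shell
# Expand each latency by its request count, sort numerically,
# then pick the value at the 99th-percentile rank.
samples=$(
  {
    for i in $(seq 5);  do echo 1; done
    for i in $(seq 5);  do echo 2; done
    for i in $(seq 10); do echo 3; done
    for i in $(seq 40); do echo 4; done
    for i in $(seq 20); do echo 5; done
    for i in $(seq 15); do echo 6; done
    for i in $(seq 4);  do echo 7; done
    echo 8
  } | sort -n
)
p99=$(echo "$samples" | awk '{a[NR]=$1} END {print a[int(NR*0.99)] "s"}')
echo "p99 = $p99"   # p99 = 7s
```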


references:

https://stackoverflow.com/questions/12808934/what-is-p99-latency

What is crond in Linux?

The crond daemon (or simply cron) is used to execute cron jobs in the background. It is the daemon that runs commands according to the schedules assigned to the cron jobs. The schedules and their corresponding commands are stored in crontab files; the system-wide one is /etc/crontab. In other words, crond is a server daemon, a long-running process that executes commands at the dates and times specified by each cron job. On SysV-style systems it is started at boot from /etc/rc.d/init.d/crond, and the cron program itself is located at /usr/sbin/crond.
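A crontab entry has five schedule fields followed by the command (the script path below is hypothetical; edit your own crontab with `crontab -e`):

```
# minute  hour  day-of-month  month  day-of-week  command
# Run a backup every day at 02:30:
30 2 * * * /usr/local/bin/backup.sh
```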

reference:

https://www.servercake.blog/crond-linux/

What does /dev/tcp do?

Bash supports read/write operations on the pseudo-device files /dev/tcp/[host]/[port].

Reading from or writing to this special file makes Bash open a TCP connection to host:port, and this feature can be used for some useful purposes, for example:

Query an NTP server

cat </dev/tcp/time.nist.gov/13

reads the time in Daytime Protocol from the NIST Internet Time Service server.

Fetch a web page

this script opens a bidirectional connection on file descriptor 3, writes an HTTP request to it, and reads the response back:

exec 3<>/dev/tcp/www.google.com/80

echo -e "GET / HTTP/1.1\r\nHost: www.google.com\r\nConnection: close\r\n\r\n" >&3

cat <&3

Check whether a port is open

(echo > /dev/tcp/$HOST/$PORT) >/dev/null 2>&1

result=$?   # 0 if the connection succeeded, non-zero otherwise
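The port check can be wrapped in a small helper function (a sketch; the 2-second timeout and the host/port in the usage example are arbitrary choices):

```shell
# Return 0 if host:port accepts a TCP connection, non-zero otherwise.
# `timeout` keeps a filtered port from hanging the script; /dev/tcp is
# a Bash feature, so bash is invoked explicitly.
port_open() {
  timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null
}

if port_open 127.0.0.1 22; then
  echo "127.0.0.1:22 is open"
else
  echo "127.0.0.1:22 is closed"
fi
```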

references:

https://andreafortuna.org/2021/03/06/some-useful-tips-about-dev-tcp/

How is Kong (an API Gateway) different from a Service Mesh?

API Gateways facilitate API communications between a client and an application, and across microservices within an application.  Operating at layer 7 (HTTP), an API gateway provides both internal and external communication services, along with value-added services such as authentication, rate limiting, transformations, logging and more. 


Service Mesh is an emerging technology focused on routing internal communications. Operating primarily at layer 4 (TCP), a service mesh provides internal communication along with health checks, circuit breakers and other services. 


Because API Gateways and Service Meshes operate at different layers in the network stack, each technology has different strengths. 


At Kong, we are focused on both API Gateway and Service Mesh solutions. We believe that developers should have a unified and trusted interface to address the full range of internal and external communications, and value added services. Today, however, API Gateways and Service Mesh appear as distinct solutions requiring different architectural and implementation choices. Very soon, that will change. 


references:

https://konghq.com/faqs

What are Kong Plugins

Plugins are one of the most important features of Kong. Many Kong API gateway features are provided by plugins. Authentication, rate-limiting, transformation, logging and more are all implemented independently as plugins. Plugins can be installed and configured via the Admin API running alongside Kong.


Almost all plugins can be customized not only to target a specific proxied service, but also to target specific Consumers.


From a technical perspective, a plugin is Lua code that’s being executed during the life-cycle of a proxied request and response. Through plugins, Kong can be extended to fit any custom need or integration challenge. For example, if you need to integrate the API’s user authentication with a third-party enterprise security system, that would be implemented in a dedicated plugin that is run on every request targeting that given API.
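As a sketch of how a plugin is configured through the Admin API (this assumes a local Kong node with the Admin API on the default port 8001, and an already-registered service; the name example-service is hypothetical):

```shell
# Enable the bundled rate-limiting plugin on a specific service,
# allowing at most 100 requests per minute.
curl -X POST http://localhost:8001/services/example-service/plugins \
  --data "name=rate-limiting" \
  --data "config.minute=100"
```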


Many plugins are listed on the Kong Plugin Hub:


https://docs.konghq.com/hub/


references:

https://konghq.com/faqs

What is Kong Datastore

Kong uses an external datastore to store its configuration such as registered APIs, Consumers and Plugins. Plugins themselves can store every bit of information they need to be persisted, for example rate-limiting data or Consumer credentials.


Kong maintains a cache of this data so that there is no need for a database roundtrip while proxying requests, which would critically impact performance. This cache is invalidated by the inter-node communication when calls to the Admin API are made. As such, it is discouraged to manipulate Kong’s datastore directly, since your nodes cache won’t be properly invalidated.


This architecture allows Kong to scale horizontally by simply adding new nodes that will connect to the same datastore and maintain their own cache.


The supported datastores are:


Apache Cassandra

PostgreSQL


Scaling of Kong Server 


Scaling the Kong Server up or down is fairly easy. Each server is stateless, meaning you can add or remove as many nodes under the load balancer as you want, as long as they all point to the same datastore.


Be aware that terminating a node might interrupt ongoing HTTP requests on that server, so make sure all in-flight HTTP requests have been processed (drain the node) before terminating it.


Scaling of Kong Datastore

Scaling the datastore should not be your main concern, mostly because as mentioned before, Kong maintains its own cache, so expect your datastore’s traffic to be relatively quiet.


However, keep in mind that it is always a good practice to ensure your infrastructure does not contain single points of failure (SPOF). As such, closely monitor your datastore, and ensure replication of your data.


If you use Cassandra, one of its main advantages is its easy-to-use replication capabilities due to its distributed nature.


references:

https://konghq.com/faqs

Advantages of using Kong

Compared to other API gateways and platforms, Kong has several important advantages. Choose Kong to ensure your API gateway platform is:


Radically Extensible

Blazingly Fast

Open Source

Platform Agnostic

Manages the full API lifecycle

Cloud Native

RESTful


What are Kong's main components?

The Kong Server and the Kong Datastore.


Below are some details on the Kong Server.


The Kong Server, built on top of NGINX, is the server that will actually process the API requests and execute the configured plugins to provide additional functionalities to the underlying APIs before proxying the request upstream.


By default, Kong listens on the following ports, which must allow external traffic:

8000: for proxying. This is where Kong listens for HTTP traffic. See proxy_listen.

8443: for proxying HTTPS traffic. See proxy_listen_ssl.

Additionally, the following ports are used internally and should be firewalled in production:

8001: provides Kong's Admin API, which you can use to operate Kong. See admin_api_listen.

8444: provides Kong's Admin API over HTTPS. See admin_api_ssl_listen.
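A quick way to poke at these ports on a local node (a sketch; assumes Kong is running on localhost with the default ports):

```shell
# Proxy port: Kong answers on 8000 even for unmatched routes
# (with an error response generated by Kong itself).
curl -i http://localhost:8000/

# Admin API port: GET / returns information about the node.
curl -i http://localhost:8001/
```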

references:

https://konghq.com/faqs