Saturday, December 31, 2022

What is the Kong Datastore?

Kong uses an external datastore to store its configuration, such as registered APIs, Consumers, and Plugins. Plugins themselves can store any information they need to persist, for example rate-limiting data or Consumer credentials.
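
To make this concrete, below is a small sketch using Python's requests library, assuming a default Admin API listening on localhost:8001; the Service, Route and Consumer names and the upstream URL are placeholders. All of these objects are persisted in the datastore:

import requests

ADMIN_API = "http://localhost:8001"  # assumed default Admin API address

# Register a Service (the upstream URL is a placeholder).
requests.post(
    f"{ADMIN_API}/services",
    json={"name": "example-service", "url": "http://httpbin.org"},
).raise_for_status()

# Attach a Route so Kong proxies /example to that Service.
requests.post(
    f"{ADMIN_API}/services/example-service/routes",
    json={"name": "example-route", "paths": ["/example"]},
).raise_for_status()

# Register a Consumer; its credentials and plugin data are persisted too.
requests.post(
    f"{ADMIN_API}/consumers",
    json={"username": "alice"},
).raise_for_status()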


Kong maintains a cache of this data so that no database round trip is needed while proxying requests, which would critically impact performance. This cache is invalidated through inter-node communication when calls are made to the Admin API. As such, manipulating Kong's datastore directly is discouraged, since your nodes' caches won't be properly invalidated.
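
For example, the sketch below changes a plugin's configuration through the Admin API, so the change is broadcast and every node invalidates its cached copy; the plugin id and Admin API address are placeholders:

import requests

ADMIN_API = "http://localhost:8001"                 # assumed Admin API address
PLUGIN_ID = "00000000-0000-0000-0000-000000000000"  # placeholder plugin id

# Right: go through the Admin API so all node caches get invalidated.
requests.patch(
    f"{ADMIN_API}/plugins/{PLUGIN_ID}",
    json={"config": {"minute": 10}},
).raise_for_status()

# Wrong: an UPDATE issued directly against Postgres/Cassandra would change the
# row, but the nodes would keep serving the stale cached configuration.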


This architecture allows Kong to scale horizontally by simply adding new nodes that will connect to the same datastore and maintain their own cache.


The supported datastores are:


Apache Cassandra

PostgreSQL


Scaling the Kong Server


Scaling the Kong Server up or down is fairly easy. Each server is stateless, meaning you can add or remove as many nodes behind the load balancer as you want, as long as they all point to the same datastore.
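
As a rough sketch (not an official deployment recipe), starting an extra node only requires pointing it at the shared datastore, for example through Kong's KONG_-prefixed environment variables; the hostnames and credentials below are placeholders:

import os
import subprocess

# Connection settings of the shared datastore (placeholder values).
shared_datastore = {
    "KONG_DATABASE": "postgres",
    "KONG_PG_HOST": "db.internal.example",
    "KONG_PG_PORT": "5432",
    "KONG_PG_USER": "kong",
    "KONG_PG_PASSWORD": "kong",
    "KONG_PG_DATABASE": "kong",
}

# Every node started with the same settings can be placed behind the load
# balancer; each one builds and maintains its own in-memory cache.
subprocess.run(["kong", "start"], env={**os.environ, **shared_datastore}, check=True)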


Be aware that terminating a node may interrupt any ongoing HTTP requests on that server, so make sure all in-flight requests have been processed (for example, by draining the node from the load balancer) before terminating it.
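
One possible approach, sketched below assuming the node's Admin API is reachable on port 8001, is to take the node out of the load balancer first and then poll the Admin API's /status endpoint until the active connection count drops to the polling request itself:

import time
import requests

ADMIN_API = "http://10.0.0.5:8001"  # placeholder address of the node being drained

while True:
    status = requests.get(f"{ADMIN_API}/status").json()
    active = status["server"]["connections_active"]
    # One active connection is this /status call itself; anything above that
    # is traffic still being proxied by the node.
    if active <= 1:
        break
    time.sleep(5)

# Safe to terminate now, e.g. with `kong quit` for a graceful shutdown.
print("node drained")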


Scaling the Kong Datastore

Scaling the datastore should not be your main concern: as mentioned above, Kong maintains its own cache, so expect your datastore's traffic to be relatively quiet.


However, keep in mind that it is always good practice to ensure your infrastructure contains no single points of failure (SPOF). As such, closely monitor your datastore and ensure your data is replicated.
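
A simple starting point for that monitoring, assuming each node's Admin API is reachable, is the /status endpoint, which also reports whether the node can reach its datastore:

import requests

# Placeholder Admin API addresses of the Kong nodes to monitor.
NODES = ["http://10.0.0.5:8001", "http://10.0.0.6:8001"]

for node in NODES:
    status = requests.get(f"{node}/status").json()
    print(f"{node}: datastore reachable = {status['database']['reachable']}")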


If you use Cassandra, one of its main advantages is its easy-to-use replication capabilities due to its distributed nature.
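
For example, here is a sketch using the Python cassandra-driver, assuming Kong's default kong keyspace; the contact point and datacenter name are placeholders. Raising the replication factor is a single CQL statement:

from cassandra.cluster import Cluster

# Placeholder contact point of the Cassandra cluster backing Kong.
cluster = Cluster(["10.0.0.10"])
session = cluster.connect()

# Keep three replicas of the Kong keyspace in datacenter 'dc1'.
session.execute(
    "ALTER KEYSPACE kong WITH replication = "
    "{'class': 'NetworkTopologyStrategy', 'dc1': 3}"
)

# Run `nodetool repair kong` on each node afterwards so existing data is
# streamed to the new replicas.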


References:

https://konghq.com/faqs
