Today, most modern applications and services expose RESTful APIs and rely on API definitions to communicate, which saves us from worrying about the language and underlying implementation of the other components. APIs make even more sense in microservice or serverless architectures, with dozens or hundreds of mutually interacting microservices or functions.
What is an API Gateway?
An API Gateway is the component responsible for unifying the publication of APIs so that they can be used by other applications or developers. It implements, as a software product, what has generically been called “API Management” in a post by our colleagues at Paradigma Digital:
According to the post’s author, an API Gateway typically consists of the following elements:
- API Interchanger: A component whose main function is to enable the connection between services and clients.
- API Manager: Allows configuring and publishing APIs in the API Gateway component.
- API Dashboard: Gathers all the necessary information that clients need on published APIs.
Features of an API Gateway
- Routing: Sending requests to different destinations depending on the context or message content.
- Transformation: Components responsible for transforming or masking data.
- Inbound and outbound traffic monitoring.
- Security policies that add authentication, authorization, and encryption to APIs.
- Usage policies: Enable defining consumption, performance, and failure policies to guarantee SLAs.
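To make the routing and transformation features more concrete, here is a minimal, purely illustrative Python sketch of what a gateway does at its core: matching an incoming path against a routing table and masking sensitive data before forwarding. All names (routes, services, headers) are hypothetical; real gateways such as Kong and Tyk implement this, and much more, at far higher performance.

```python
# Illustrative sketch of a gateway's routing and transformation steps.
# Route table and service names are hypothetical.

ROUTES = {
    "/users": "http://users-service:8080",
    "/orders": "http://orders-service:8080",
}

def route(path):
    """Routing: pick an upstream service based on the request path prefix."""
    for prefix, upstream in ROUTES.items():
        if path.startswith(prefix):
            # Strip the matched prefix before forwarding, as gateways often do.
            return upstream + path[len(prefix):]
    raise LookupError("no route for " + path)

def transform(headers):
    """Transformation: mask sensitive data before it leaves the gateway."""
    masked = dict(headers)
    if "Authorization" in masked:
        masked["Authorization"] = "***"
    return masked

print(route("/users/42"))  # http://users-service:8080/42
print(transform({"Authorization": "Bearer abc", "Accept": "application/json"}))
```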
Without getting into the benefits that APIs provide, we are going to discuss two tools for defining and managing APIs simply and efficiently: Kong and Tyk.
Kong was born in 2011 as a private API Gateway developed by Kong Inc. (formerly Mashape) on top of the Nginx HTTP server, with a clear focus: offering high performance. In 2015 it became an open-source project.
Today, it’s used by over 5000 organizations.
Kong comes in two editions:
- Community Edition: Kong’s CE version boasts a comprehensive range of functionality, including open-source plugin support, load balancing, and service discovery, but does not include a management panel. Therefore, we need to configure Kong via its REST API, or use an open-source dashboard such as Konga or Kong Dashboard.
- Enterprise Edition: Features expanded out-of-the-box functionality, such as a management dashboard, security plugins, metrics, and 24×7 support, to name a few.
You can check the complete list of differences between Kong CE and EE.
Kong can be deployed in different ways, both on on-premises infrastructure and in the cloud. Kong also offers native packages for Debian, Red Hat, and OS X, Docker images, and CloudFormation templates for AWS, to name a few.
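As a sketch of what a Docker-based deployment might look like, the following Compose file pairs Kong with a PostgreSQL datastore. Image tags, credentials, and ports are illustrative assumptions; check Kong’s documentation for your version (in particular, the database migrations must be run before the first start).

```yaml
version: "3"
services:
  kong-database:
    image: postgres:9.6           # illustrative tag; Kong also supports Cassandra
    environment:
      POSTGRES_USER: kong
      POSTGRES_DB: kong
      POSTGRES_PASSWORD: kong     # example credentials only

  kong:
    image: kong:latest
    depends_on:
      - kong-database
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-database
      KONG_PG_PASSWORD: kong
    ports:
      - "8000:8000"               # public proxy layer
      - "8001:8001"               # admin API (keep private in production)
```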
Kong’s team proposes the following reference architecture.
- Kong Server: This component acts as a proxy for all requests. It consists of a public layer through which all requests for accessing the APIs it exposes are funneled, and a private layer for managing and configuring those APIs. Also, it allows us to enable, disable and configure the installed plugins.
- Kong Datastore: An external database where all Kong configuration is stored, along with its plugins and APIs. The datastores supported by default are Cassandra and PostgreSQL. Important: Kong uses its own in-memory cache to run; however, in certain cases some plugins, such as rate-limiting, require additional components such as Redis.
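As a sketch of what configuring Kong through its private admin layer looks like, the snippet below builds the JSON bodies we would POST to the Admin REST API to register a service and a route, following Kong’s Services/Routes model. The admin address, service names, and paths are assumptions for illustration; only the payload builders run without a live Kong.

```python
import json
from urllib import request

KONG_ADMIN = "http://localhost:8001"  # assumed Admin API address

def service_payload(name, url):
    """Body for POST /services: an upstream service Kong will proxy to."""
    return {"name": name, "url": url}

def route_payload(paths):
    """Body for POST /services/{name}/routes: which requests reach the service."""
    return {"paths": paths}

def register(name, url, paths):
    """Send both requests to the Admin API (requires a running Kong node)."""
    for endpoint, body in [
        ("/services", service_payload(name, url)),
        ("/services/%s/routes" % name, route_payload(paths)),
    ]:
        req = request.Request(
            KONG_ADMIN + endpoint,
            data=json.dumps(body).encode(),
            headers={"Content-Type": "application/json"},
        )
        request.urlopen(req)

# Example payloads (names and URLs are hypothetical):
print(json.dumps(service_payload("users", "http://users-service:8080")))
print(json.dumps(route_payload(["/users"])))
```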
Kong has spawned a thriving plugin ecosystem, with both open-source and enterprise plugins. In some cases it can be the same plugin, but with limited functionality in the open-source distribution.
We can find different plugins such as LDAP authentication, CORS, Dynamic SSL, AWS Lambda, Syslog and many more.
And if we can’t find what we need, we can always build our own plugin. Being based on Nginx, Kong is equipped with the OpenResty package and allows us to create plugins using Lua.
Kong scales horizontally with ease: all it takes is adding more nodes and connecting them to the database. We need to take into account that the database is the single point of failure, so if we want to ensure high availability, we will need to replicate it.
If we want to use Tyk, we can choose between different flavors: Cloud, Hybrid (gateway in our own infrastructure), and On-Premises.
It is worth noting that with the current cloud version we can have an API Gateway capable of handling 50,000 daily requests for free, while the on-premises modality allows us to keep an instance running for free.
There are several ways of installing Tyk: standard Ubuntu and RHEL packaging, tarball or Docker container.
Tyk is made up of three components:
- Gateway: The proxy that handles all our apps traffic.
- Dashboard: The interface from which we can manage Tyk, display metrics and organize the APIs.
- Pump: The element responsible for persisting metrics data and exporting it to MongoDB (the out-of-the-box option), Elasticsearch, or InfluxDB, among others.
Tyk supports add-on-based integration with different authentication mechanisms such as LDAP, OAuth, etc. It also offers traffic quotas, versioning, imports from Swagger or API Blueprint, and even integration with service discovery systems such as Consul or etcd, as well as load balancing between different clusters or data centers.
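As a sketch, a minimal Tyk API definition (the JSON document that describes each published API) might look like the following. The field names follow Tyk’s schema, but the values and the chosen subset are assumptions for illustration, and the exact schema depends on the Tyk version.

```python
import json

# Illustrative, minimal Tyk API definition; values are hypothetical.
api_definition = {
    "name": "users-api",
    "use_keyless": False,               # require a token on each request
    "auth": {"auth_header_name": "Authorization"},
    "version_data": {
        "not_versioned": True,          # Tyk also supports versioned APIs
        "versions": {"Default": {"name": "Default"}},
    },
    "proxy": {
        "listen_path": "/users/",       # path exposed by the gateway
        "target_url": "http://users-service:8080/",
        "strip_listen_path": True,      # remove /users/ before forwarding
    },
    "active": True,
}

print(json.dumps(api_definition, indent=2))
```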
If we have a requirement that the standard plugins don’t cover, we can create our own plugin using Lua, Python or gRPC.
Scaling Tyk is as easy as creating another instance of Tyk Gateway and connecting it to the same database to preserve consistency. To provide high availability, we can use a MongoDB replica set.
We subjected Kong and Tyk to a workload of 100 concurrent users for 60 seconds, also increasing the number of backend instances to see how this would affect performance. The tests were conducted using Siege.
The hardware used for all tests was:
- CPU: i7-6820HQ @ 2.70GHz
- RAM: 16GiB
- HDD: SSD 512GB
To run both Tyk and Kong we’ve used a Docker-based distribution and its corresponding databases.
In the case of Tyk we used tokens, because performance with full authentication enabled was extremely poor: about 85 requests per second. This may have been caused by a bug in the version we tested.
As we have seen, Kong and Tyk feature a comprehensive range of out-of-the-box functionality, in many cases more than sufficient for small environments consisting of 1-10 microservices without excessive concurrency.
However, if we want to deploy a significantly larger number of microservices, or simply ensure high fault tolerance, we need to scale up the number of gateways. In the case of Kong, all it takes is adding a new node and connecting it to the database. With Tyk we can do the same, although it will require us to pay for the service. This could be a determining factor when choosing which API Gateway to use.
Tyk’s open-source standard feature set includes a control panel, metrics, and logs. With Kong, by contrast, we need to resort to open-source alternatives if we want a free control panel. The same goes for metrics and logs, where we need plugins that require additional configuration.
Tyk also offers Tyk Cloud, a service that saves users the trouble of managing the infrastructure layer, surely a welcome feature for small businesses and startups.
Regarding performance, in our tests the king was Kong: it outperformed Tyk in all tests, both with authentication disabled and enabled, processing twice as many requests in the latter case.
These are definitely two very solid tools, each offering a comprehensive set of functionality, with their own strengths and weaknesses. In any case, an API Gateway is the perfect ally for a microservice and container architecture.
We encourage you to send us your feedback at BBVA-Labs@bbva.com and, if you are a BBVAer, to join us by sending your proposals to be published.