This blog post is the first in a series aimed at better understanding the forces and drivers that shape modern API integration architectures. Feel free to connect with me and discuss your specific experiences related to DevOps teams and how to use central API gateways.
What is DevOps?
The term DevOps emerged roughly a decade ago out of the need for rapid software releases. It describes an attempt to update systems in operation frequently and reliably by enabling cross-functional collaboration between software developers and operations staff, supported by toolchains that automate the building and deployment of applications.
As the DevOps concept has matured over the years, certain characteristics have emerged as success factors:
- The software development team takes ownership of, and responsibility for, deploying software changes to production.
- Greater emphasis is placed on automation for delivering software changes, fixing bugs, and handling production incidents.
- Manual control gates (e.g. approval steps) are kept to a minimum when deploying changes.
- Production-like environments are used during development.
A DevOps team and its app are self-sustaining in the sense that the team does not rely on other parts of the organisation to push changes to production. The team has all the skills required to move forward and is empowered to make decisions within the scope of its app. The picture shows a simple API integration architecture that is fully managed by the team.
DevOps teams want nothing in their way when pushing changes to production
The centrally deployed API gateway
At the heart of most API integration architectures, you will often find a colossus: a centrally deployed API gateway managed by a dedicated team. It has been there since the dawn of time – perhaps not as a dedicated API gateway, but rather as a “do it all” integration platform implementing the enterprise service bus.
Regardless of the type of integration solution, modern DevOps teams want nothing in their way when pushing changes to production – especially not a dependency on another team that may seriously impact their ability to deliver software changes rapidly.
When API delivery becomes a team-overlapping activity
APIs are an important interoperability enabler for any organisation, whether for integrating applications within the boundaries of the organisation or for enabling innovative cross-organisation business models that fuel the new platform economy, where value is derived from a deliberate orchestration of a potentially near-endless swarm of autonomous contributors.
Often when building an API, the work first involves figuring out the information requirements – i.e. what information the API needs to expose. This is commonly followed by designing the structure and behaviour of the API – a challenge on its own, but at least a somewhat isolated activity. The output of this design activity is the API contract – typically described using an OpenAPI document. Of course, you are doing contract-first development, right?
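As a minimal sketch of such a contract, here is what an OpenAPI document for a hypothetical orders API might look like (the API name, path, and fields are illustrative, not taken from any real system):

```yaml
openapi: 3.0.3
info:
  title: Orders API        # hypothetical API used for illustration
  version: 1.0.0
paths:
  /orders/{orderId}:
    get:
      summary: Fetch a single order
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The requested order
          content:
            application/json:
              schema:
                type: object
                properties:
                  orderId:
                    type: string
                  status:
                    type: string
```

With contract-first development, this document is agreed on before implementation starts, so API consumers and the providing team can work in parallel against the same contract.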
Now that the API is described, it’s time to build it. Implementing the server side of the API contract can be done by the team without involving other teams, but to get the API exposed, the team needs assistance from the API gateway team to set up a proxy – the new entry point to the API. As a consequence, we now have a team-overlapping activity – a dependency on another team – for pushing changes to the production environment.
DevOps team: “Central API gateway – what’s in it for us?”
In this age of DevOps, a team-overlapping activity raises questions. After all, the DevOps concept came to be in order to break down the barriers between development and operations teams, enabling rapid releases of new software features and quicker learning from users. The DevOps team may question why a central API gateway should be used at all. What is the value proposition of an API gateway if it just adds complexity and slows down the process? The picture below shows the inter-team dependency between a DevOps team and the API gateway team.
Why use a centrally deployed API gateway
To get a better understanding of the value API gateways bring, let us review some of the constraints the team would face if, as an API provider, it were directly connected to API consumers.
- Location dependency – The API provider is forced to continue exposing the API at a fixed location (URL). In the future, the team may want to relocate the API provider’s implementation (e.g. move to a new cloud provider) or reshape the solution architecture; these activities become harder without the flexibility to change the server URL at which the API provider implementation is located.
- Uptime – Rolling out new software features to a production environment does not happen in an instant, and the time required may cause downtime that disrupts connected API consumers.
- Manage load – The API provider needs to manage load by scaling up, which is bound to hit a limit sooner than scaling out.
The team would probably also have to consider implementing support for security patterns and for traffic- and statistics-related API management features in a standardised way:
- Security – API consumers need to be identified (authentication), and access control (authorisation) is needed to restrict access to API plans. Additionally, the API provider sometimes does not support modern security flows such as OAuth 2.0, so it makes sense to implement these outside of the API provider.
- Backend protection – Rate limiting, spike arrest, and quotas.
- Traffic management – Failure-handling patterns such as circuit breakers or retry policies.
- Statistics – Logs and metrics must be gathered in a unified way to fuel API product management capabilities.
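To make two of these policies concrete, here is a minimal sketch of a token-bucket rate limiter (backend protection) and a circuit breaker (traffic management) as they might run in a gateway’s request path. The class names, parameters, and thresholds are illustrative, not taken from any specific gateway product:

```python
import time


class TokenBucket:
    """Backend protection: allow at most `rate` requests/second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


class CircuitBreaker:
    """Traffic management: stop calling a failing backend until a cool-down has passed."""

    def __init__(self, failure_threshold: int = 3, reset_after: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # timestamp when the circuit opened, or None

    def call(self, backend):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Fail fast instead of piling more load onto a struggling backend.
                raise RuntimeError("circuit open: backend calls suspended")
            # Cool-down elapsed: go half-open and allow one trial call.
            self.opened_at = None
            self.failures = 0
        try:
            result = backend()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result
```

The point of the sketch is not the implementation details but where the logic lives: when a gateway hosts these policies, every backend the team exposes gets them for free, instead of each microservice re-implementing them.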
There are quite a lot of features for the team to pack into their lightweight microservice architecture! The sidecar deployment pattern and service meshes can support many of these requirements, and as we will see in later blog posts in this series, service meshes are still relevant – but not as the first gatekeeper between API consumers and API providers.
The additional integration layer provides flexibility in the integration architecture because it enables control of what should happen to API requests
Going back to the value discussion about API gateways: probably the single most important reason for using an API gateway is to create an abstraction layer between API consumers and API providers. This additional integration layer provides flexibility in the integration architecture because it enables control over what happens to API requests. The abstraction layer decreases coupling between API provider and API consumer. And that matters because the less coupling there is – the less the integrating parties know about each other – the freer they are to evolve (change) on their own.
Sometimes the idea of exposing APIs through an API gateway is referred to as API virtualisation, because the actual location of the API provider’s implementation is transparent (of no concern) to API consumers. So the API gateway solves the location dependency mentioned earlier. However, like any piece in the architectural puzzle, the API gateway comes with both benefits and drawbacks, and the drawbacks need to be acknowledged and mitigations worked out to limit their impact.
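This indirection can be sketched in a few lines (all URLs below are made up for illustration): the consumer-facing path stays fixed, while the upstream location is a gateway-side detail that can change freely.

```python
# A gateway's routing table maps stable, consumer-facing paths to upstream
# locations. Consumers only ever see the gateway URL, so the upstream can
# move (e.g. to a new cloud provider) without any consumer-side change.
routes = {
    "/orders": "https://on-prem.example.internal/orders-service",
}


def resolve(path: str) -> str:
    """Return the upstream URL the gateway would forward this request to."""
    return routes[path]


# The team relocates the provider; only the gateway's routing table changes,
# while API consumers keep calling the same gateway path as before.
routes["/orders"] = "https://eu-cloud.example.internal/orders-service"
```

Real gateways implement this mapping with far richer matching and policies, but the decoupling principle is the same: the route entry, not the consumer, knows where the provider lives.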
What about SPoF and API request performance?
When we funnel traffic through a single point in the architecture, that point risks becoming a single point of failure (SPOF) – a component whose failure, if not properly managed and assured for high availability, prevents the entire system from working.
Another concern is that an API gateway adds extra network hops and processing, which increases the latency of API requests. The time required to handle a request depends heavily on what sort of processing is needed: parsing a large request takes more time than simply identifying the request and routing it towards its destination. Nevertheless, in our experience, this additional latency is seldom the root cause of a poorly performing API. It is rather processing in backend apps that turns out to have a significant impact on API response time.
Instead of chasing technical performance bottlenecks, we often find that how work is organised around central API gateways is a more decisive factor in whether an organisation benefits from, or struggles with, the adoption of a central API platform.
Piecing it all together
Time to summarise the progress we have made. The centralised nature of enterprise-wide API gateways creates a solid platform for exposing APIs – granting the API provider much-needed freedom to evolve behind the scenes. The API gateway can also protect backend applications and offload application developers from implementing API management features such as statistics and access control, or enable new security patterns for modernising access to legacy applications.
Nevertheless, that same centralised nature makes enterprise-wide API gateways vulnerable to single points of failure, increased runtime latency, and degraded process performance from relying on a central team for team-overlapping activities.
Overall, the API gateway does bring something to the party, but the way it is used may still slow down fast-paced DevOps teams too much. In the next post, we will investigate how a centrally deployed API gateway can be used more efficiently.
Also, if you’re interested, sign up for our webinar on how to implement a DevOps-friendly API architecture.