
When Adopting Containers, How Will Containerized SQL Apps Access Data?

By Ariff Kassam

In every industry, businesses create products and services, serve customers, and compete in the global market through applications. Applications are now critical to the success of most businesses, and when an application goes down the result can be lost productivity, lost opportunities, data loss, damage to brand reputation, and payouts tied to missed service level agreements. To limit the consequences of inevitable outages, many organizations are considering cloud and distributed deployments that can maintain availability despite failure events.

As they look toward the cloud, many enterprises are considering re-architecting applications to take advantage of the flexibility and functionality offered by microservices and containers. There has been considerable excitement around containers over the last few years, and for good reason. Essentially, a container is an entire runtime environment bundled into a single package: an application plus its dependencies, libraries and other binaries, and the configuration files needed to run it. Containers can improve security and provide predictable operation of individual processes on shared resources.

Containers can help development teams move faster, but they also raise a key question: when adopting containers, how will your containerized SQL applications access data? There are four common options:

  • Option 1: Leave the Database Out of the Container Environment

This is the simplest-sounding option: the database is untouched and operates exactly as it does in a traditional environment. The advantage is that you maintain the status quo for your database and you know what to expect. However, your organization won’t gain agility benefits for either the database or the application. Using a traditional database requires developers to go through traditional channels to get access to a new database or to make changes to an existing one. That isn’t developer self-service, because they still need to go through a database administrator (DBA) for installation, setup, operations, and changes to the database.

In addition, it can leave DBAs with significantly more database instances to manage if you adopt the practice of giving each microservice its own database. You’ll also need to figure out how to connect a containerized application back to a database running on separately managed infrastructure, which means dealing with firewall configurations, open ports, and connectivity issues. Finally, you’ll have to manage and maintain two separate environments.
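To make this concrete, here is a minimal sketch of what Option 1 looks like from the application side, assuming a PostgreSQL database outside the container environment and the psycopg2 driver; the host name, database name, and environment variable names are placeholders, and the firewall and network path between the container platform and the database server still have to be opened separately.

```python
import os

import psycopg2  # assumes the psycopg2-binary package is installed in the container image


def get_connection():
    """Connect to a database that lives outside the container environment.

    The host, port, and credentials are injected as environment variables
    (for example, via the orchestrator's secrets mechanism); the default
    values below are placeholders, not real endpoints.
    """
    return psycopg2.connect(
        host=os.environ.get("DB_HOST", "db.corp.example.com"),  # externally managed DB server
        port=int(os.environ.get("DB_PORT", "5432")),
        dbname=os.environ.get("DB_NAME", "orders"),
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        connect_timeout=5,  # fail fast if the firewall path is not open
    )


if __name__ == "__main__":
    conn = get_connection()
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")  # simple connectivity check across the network boundary
            print(cur.fetchone())
    finally:
        conn.close()
```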

  • Option 2: Lift-and-Shift the Database into a Container

Lift-and-shift essentially means moving the database from one environment to another; the database itself remains fundamentally the same. The advantages of this approach include the ability to retire on-premises servers and storage and to offer a database inside the container environment. Developers can start and stop the database themselves (although a DBA will still need to manage it), and processing efficiency improves because the apps and analytics accessing the data reside together.

There are disadvantages to this approach: you’ll need to match system resources for a “fat” container, and you’ll still need to scale up for performance. Because traditional databases are not optimized for containerized virtual environments, you can expect poor performance in this scenario. In addition, you’ll still need DBAs to manage the database, and you’ll find it more challenging to implement high availability/disaster recovery (HA/DR) replication in this container environment.
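As a rough sketch of the developer self-service this option allows, the snippet below uses the Docker SDK for Python to start and stop a containerized database on demand; the postgres:16 image, container name, and password are placeholders, and in practice the image would be your own lifted-and-shifted database.

```python
import docker  # assumes the Docker SDK for Python ("docker" package) is installed

client = docker.from_env()


def start_dev_database() -> None:
    """Start the containerized database on demand, without filing a ticket with a DBA.

    The image name, container name, and credentials below are placeholders.
    """
    client.containers.run(
        "postgres:16",                      # stand-in for your containerized database image
        name="dev-sql-db",
        detach=True,
        environment={"POSTGRES_PASSWORD": "dev-only-password"},
        ports={"5432/tcp": 5432},           # expose the usual SQL port to the developer
    )


def stop_dev_database() -> None:
    """Stop and remove the database container when the developer is done with it."""
    container = client.containers.get("dev-sql-db")
    container.stop()
    container.remove()
```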

  • Option 3: Use a Cache for Data Management in Container Applications

This option provides the low-latency, container-native data access you’re looking for, but a cache introduces considerable complexity. It still relies on a traditional database in the back end, so the application must manage two data stores: one inside the container environment and another outside of it, and you must determine how to keep the two consistent. On top of that, this approach carries all the limitations of Option 1.
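A minimal cache-aside sketch shows where that complexity lands, assuming a Redis cache running alongside the containers and a traditional PostgreSQL database outside the environment; the key format, TTL, and connection settings are illustrative. Notice that the application code itself now has to keep the two stores consistent.

```python
import json
import os

import psycopg2
import redis  # assumes the redis-py client is installed

CACHE_TTL_SECONDS = 60  # illustrative: stale reads are possible within this window

cache = redis.Redis(host=os.environ.get("CACHE_HOST", "cache"), port=6379)
db = psycopg2.connect(
    host=os.environ.get("DB_HOST", "db.corp.example.com"),  # traditional back-end database
    dbname=os.environ.get("DB_NAME", "orders"),
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
)


def get_customer(customer_id: int) -> dict:
    """Cache-aside read: check the container-local cache, fall back to the database."""
    key = f"customer:{customer_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    with db.cursor() as cur:
        cur.execute("SELECT id, name FROM customers WHERE id = %s", (customer_id,))
        row = cur.fetchone()
    if row is None:
        raise KeyError(f"customer {customer_id} not found")
    result = {"id": row[0], "name": row[1]}
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(result))
    return result


def update_customer_name(customer_id: int, name: str) -> None:
    """Write path: the application must update the database AND invalidate the cache."""
    with db.cursor() as cur:
        cur.execute("UPDATE customers SET name = %s WHERE id = %s", (name, customer_id))
    db.commit()
    cache.delete(f"customer:{customer_id}")  # if this step is missed, the two stores diverge
```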

  • Option 4: Use a Container-native Distributed SQL Database

Container-native distributed SQL databases are designed for dynamic and distributed environments like containers. Using this model, new database services, each in its own container, can be added on demand to increase throughput, add storage redundancy, reduce latency, or react to failures of existing processes. This can be used to automatically scale out or scale in database capacity to meet dynamic application requirements.
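As a rough illustration from the application’s point of view, the sketch below assumes a distributed SQL database that publishes a single PostgreSQL-compatible service endpoint (a hypothetical service name, sql-db); database containers can be scaled out or in behind that endpoint without any application changes, and the client simply retries transient errors while the cluster rebalances.

```python
import os
import time

import psycopg2  # assumes the chosen distributed SQL database exposes a PostgreSQL-compatible endpoint

# Hypothetical service name published by the container orchestrator; database
# containers behind it can be added or removed without the application changing.
SERVICE_HOST = os.environ.get("DB_SERVICE", "sql-db")


def query_with_retry(sql: str, params=(), attempts: int = 3):
    """Run a query against the single logical database endpoint, retrying the
    transient errors that can occur while database containers are added,
    removed, or replaced."""
    for attempt in range(1, attempts + 1):
        conn = None
        try:
            conn = psycopg2.connect(
                host=SERVICE_HOST,
                dbname=os.environ.get("DB_NAME", "orders"),
                user=os.environ["DB_USER"],
                password=os.environ["DB_PASSWORD"],
                connect_timeout=3,
            )
            with conn.cursor() as cur:
                cur.execute(sql, params)
                return cur.fetchall()
        except psycopg2.OperationalError:
            if attempt == attempts:
                raise
            time.sleep(2 ** attempt)  # simple backoff while the cluster rebalances
        finally:
            if conn is not None:
                conn.close()


if __name__ == "__main__":
    print(query_with_retry("SELECT count(*) FROM customers"))
```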

As your organization begins to adopt microservices and containers, take the time to review the database needs of your container deployment. You have many options, so consider what kind of database management system fits the application you’re deploying and the environment you’re deploying into. When deploying containerized SQL applications, make sure you understand the level of consistency, flexibility, and scalability you need, and then choose the database option that delivers on those needs.
