By Ron Bennatan.
If you are responsible for your company’s database security and compliance program, chances are you started this project when most database workloads were on-premises, and your databases lived inside your own data centers. Most likely, you are also embarking on or already in the middle of a massive shift where these databases are moving into some public cloud — usually Amazon AWS, Microsoft Azure, or Google Cloud Platform (GCP).
On the one hand, a database is a database no matter where it lives. That is truer for the application developer and the DBA than for infrastructure people, and especially for those of us who are responsible for securing the database, because the security controls and security "plumbing" built into the cloud's services make a big difference. There is a learning curve, and you need to be aware of it and factor it into your migration timeline, so you're not caught off guard.
Here are five important things to keep in mind when securing your migration of databases to the cloud. Before I jump in, one clarification: there are two main ways you can run a database in a cloud. You can stand up an instance (e.g., an EC2 instance on AWS) and install your database on the Linux or Windows operating system, just as you would install it on a VM running on-prem (i.e., you are using the cloud for IaaS). While there are some services you can take advantage of (e.g., you can use an AWS agent to forward your database logs into CloudWatch), to me, this is not really a database in the cloud.
It is no different from on-prem; you're just running it in a different data center. The second option is to get the database as a service, e.g., RDS or Redshift on AWS, Azure SQL, GCP BigQuery, or Snowflake. You are "renting" capacity for a database as a service and get far more services directly from the cloud. This DBaaS pattern is what I think of when I think of migrating database workloads to the cloud, and the list of five refers to DBaaS.
1. Public Cloud Is Both More Secure and Less Secure Than On-Premises
Huh?!? What does that even mean? Let me explain. Public clouds have been built from the ground up over the last decade or so, and there are many reasons why they are actually more secure than 90 percent of all corporate data centers. The technology they use is almost always newer than what most companies can afford, and the levels of standardization and consistency are orders of magnitude higher.
There is no "bespoke" in cloud environments. Everything is the same, and, therefore, there are fewer chances for mishaps due to one-offs. The people who built these platforms usually have deeper technical expertise and a stronger security mindset. Cloud providers also understand that any security incident would do such great damage to their business that the stakes could not be higher, and they provide resiliency levels that most companies cannot even dream of. Patching and vulnerability management are another example: the cloud provider patches systems for you. They don't even ask you about this, and you cannot ask them not to do it. That's the way it should be.
The same is true in DBaaS environments. Things are consistent, standard, and built in, so they are more secure. Patches are applied for you. Versions are recent (I recently worked with a customer who is still running Oracle 7! You will not find anything like that on public clouds!). So why do I say it's "both more secure and less secure"? From a technology perspective, the cloud is far more secure. Where the "less" comes in is the human factor. Many companies are in transition mode and moving workloads to the cloud. Very often, the people responsible for the database and application layers are not experts in cloud services, and this is where you have to be careful. Mistakes can be made, and humans are always the weakest link in information security, so the more you adhere to templates, best practices, or tools that have security controls for DBaaS built in, the less chance for human error.
2. Database Security Must Be “Baked into” Your Database Delivery Stack and Be Fully Automated
Cloud environments and DBaaS specifically are dynamic environments and ephemeral by nature. Application teams love clouds because they can spin up new databases instantly. But that means that any security initiative that relies on fixed lists of hosts, IPs, users, or tuples of attributes is doomed to fail — and pretty much all implementations of database security that were done for on-prem systems look like that.
In the cloud, things need to be "discovered," not "defined." Don't try to manage lists; manage a process where, when a database comes up, your security solution automatically finds it and applies your security controls to it. And, by the way, once you understand that this is possible in the cloud, you should also understand that it is possible on-prem, and that applying these automation principles to on-prem environments as well will not only make you more secure but will also take a huge upkeep burden off of you. Enough with the lists!
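The "discover, don't define" process above can be sketched as a reconciliation loop: compare what actually exists in the cloud with what your security tooling already covers, and onboard the difference. This is a minimal sketch with hypothetical names (`reconcile`, `apply_security_controls`); in a real AWS setup, the discovered set might come from an RDS inventory API call rather than the hard-coded example below.

```python
# Sketch of a discovery-driven security loop: no fixed list of databases
# anywhere -- the cloud inventory is the source of truth on every pass.

def reconcile(discovered: set[str], secured: set[str]) -> tuple[set[str], set[str]]:
    """Return (to_onboard, to_retire): databases that appeared since the
    last pass, and stale entries for databases that no longer exist."""
    return discovered - secured, secured - discovered

def apply_security_controls(db_id: str) -> None:
    # Placeholder: enable auditing, register with monitoring, tag owner, etc.
    print(f"onboarding {db_id}")

# One pass of the loop, with an example inventory (hypothetical IDs).
secured: set[str] = {"orders-db"}                 # what we already cover
discovered = {"orders-db", "analytics-db"}        # what the cloud reports
to_onboard, to_retire = reconcile(discovered, secured)
for db in to_onboard:
    apply_security_controls(db)
secured = (secured | to_onboard) - to_retire      # converge to reality
```

Run on a schedule (or triggered by provisioning events), this replaces every manually maintained host/IP list with a process, which is exactly the shift the section argues for.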
3. Too Much Variety in DBaaS
Even though the cloud creates standardization, there is still a lot of (maybe too much?) variety in how DBaaS looks and what it takes to secure it. First of all, the cloud brings with it an explosion of database types. Whereas ten years ago most companies had three or four database types, today the number is in the tens and growing. Not all of this is caused by the cloud; some of it is merely coincident with it (NoSQL adoption and cloud adoption, for example, are part of the same decade). NoSQL, big data, and specialized databases are used by everyone now, and DB-Engines tracks usage for over 350 database types. But many databases are only offered as a DBaaS service on clouds; examples include AWS Redshift, GCP BigQuery, Snowflake, AWS DocumentDB, AWS DynamoDB, and Azure CosmosDB.
But do all these services look the same? No. Some only exist in the cloud and thus are very tightly integrated into the cloud fabric. Others are really "a database" offered as a cloud service that looks normal inside; examples include RDS Oracle, SQL Server, MySQL, and PostgreSQL. Oracle on RDS, for example, is just Oracle, instrumented by Amazon so it can be controlled and managed through the cloud fabric. This means that each database type running in the cloud has variants in how you apply security controls, and that makes things harder for you (remember point 1: standardization is good). Being able to keep the security controls consistent, regardless of the database type, is important, not to mention if you need consistency across multiple clouds, or across multiple clouds and on-prem!
4. Consider the Plumbing Issues
If uniformity is your friend and variation is your enemy, then consider also the "pipes" you will need to manage when you do things like monitoring and auditing. In some clouds, things are quite uniform: in Azure, almost everything flows through Event Hubs, and in GCP, almost always through some combination of Stackdriver and Pub/Sub. But in AWS, it's a bit messier. Some things go through CloudWatch Logs, others through CloudTrail, and others through Database Activity Streams (DAS). Some produce files in S3, and some just live inside the database. For database security, you need to be aware of these pipes, or use a tool that manages them for you, because you cannot install agents or proxies on managed cloud databases. Sometimes you even have multiple options for doing something (i.e., you can choose which "plumbing" to use), but you need to understand the pros and cons of each option.
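To make the "pipes" problem concrete, here is an illustrative sketch of the kind of source-to-pipe map a security tool ends up maintaining for AWS. The specific entries are assumptions drawn loosely from the sources named above (CloudWatch Logs, CloudTrail, DAS, S3, in-database tables); verify the actual pipe per engine, feature, and region before relying on it.

```python
# Hypothetical map from an audit source you must cover to the AWS "pipe"
# it surfaces in. Entries are illustrative, not a definitive matrix.
AUDIT_PIPES = {
    "rds-mysql-audit-log": "CloudWatch Logs",
    "rds-management-api-activity": "CloudTrail",
    "aurora-activity-stream": "Database Activity Streams (DAS)",
    "redshift-audit-log": "S3",
    "rds-oracle-unified-audit": "inside the database",
}

def collectors_needed(sources: list[str]) -> list[str]:
    """Given the audit sources you need, list the distinct pipes your
    security tooling must know how to read."""
    return sorted({AUDIT_PIPES[s] for s in sources})
```

Even this toy version shows the point: covering just three sources can require three entirely different collection mechanisms, which is why a tool that abstracts the plumbing is so valuable.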
5. Cloud Isn’t Cheaper If You Don’t Pay Attention
And finally, all these decisions can even affect your costs. You pay for everything in the cloud (and it's very expensive when you add it all up). You should understand everything you are charged for and what options you have; sometimes costs will affect your decisions. As an example, in some of the RDS variants, you can choose between an auditing approach that uses CloudWatch Logs and one that does not. The security and compliance needs are satisfied in an almost identical manner, but the costs are different. You pay something like $0.50 per GB collected in a CloudWatch log. If your database produces 2 GB per day and you have 1,000 RDS instances, you'll be paying around $365K per year just for pushing the logs through this pipe to your database security tool. But if this tool knows how to pull the data directly from the database, or you go through S3, you can avoid this cost.
There are, of course, many other important things to know and consider when implementing database security for DBaaS. But to me, the keys are uniformity and standardization. In the same way that I think the cloud is more secure than non-cloud in general, I think that an implementation that uses a single set of controls for all databases — whether on one cloud, on many clouds, and/or on-prem — not only makes the transition to the cloud easier and faster but also elevates security by a lot.