
The Data-Centric Zero Trust Paradigm

By Leonid Sandler

Traditional cybersecurity product categories separate data protection products from network security and cloud workload protection. The reasons go back to the history of how cybersecurity technology evolved and to the major technology players’ spheres of influence.

Ultimately, of course, the biggest motivation for any cybersecurity product across any of these categories is to protect information. Resources, user experience, and other parameters are also very important. However, nobody loses their job over a coin miner intrusion, while a data breach is a completely different story.

In the past, when cybersecurity was a green field, adding practically any type of protection was an improvement, simply because it was better than nothing. Now, many of these products clash, providing double coverage in some places and leaving holes in others, yet they persist because they were there first. The combination of technology, regulations, priorities, and budgets has, over time, hardwired these products into our architectures and our minds. Who would dare to turn off the antivirus, even if it burns 40 percent of a brand-new CPU just when you need it?

Hopefully, next-generation, cloud-native technologies will change some of these legacy perspectives and drive a new paradigm; of course, 20 years from now, somebody will write a similar blog about us.

We already see significant changes in areas like networking, where good old router- and switch-based network segmentation is being replaced with SDN and identity-based cloud workload protection products. We see the introduction of powerful AI tools and growing support for various encryption capabilities for data at rest and in transit.

Still, some of the legacy stereotypes refuse to leave. We already invest significant effort in establishing and protecting unique identities for every workload in our systems, which enables us to define communication policies for them, e.g., workload A can talk to B but can’t talk to C. No matter where Kubernetes launches each workload, its identity will accurately pinpoint the exact communication policy that has to be applied. This is the most fundamental part of the Zero Trust concept. Today, it covers API access verification, but not data access.
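To make the idea concrete, here is a minimal sketch of an identity-based communication policy evaluated purely on workload identity rather than network location. The SPIFFE-style identity strings and the allow-list are illustrative assumptions, not any particular product’s API; in a real deployment the identities would come from an attestation service and the check would be enforced in a sidecar proxy or the CNI layer.

```python
# Minimal sketch: identity-based communication policy (default deny).
# The SPIFFE-style IDs and the policy table are illustrative assumptions.

# Allow-list keyed by caller identity: workload A may call B, but not C.
COMM_POLICY = {
    "spiffe://example.org/workload-a": {"spiffe://example.org/workload-b"},
    "spiffe://example.org/workload-b": set(),  # B may not initiate any calls
}

def is_call_allowed(caller_id: str, callee_id: str) -> bool:
    """Allow a call only if it is explicitly permitted for the caller's
    identity; anything not listed is denied (the Zero Trust baseline)."""
    return callee_id in COMM_POLICY.get(caller_id, set())

if __name__ == "__main__":
    a = "spiffe://example.org/workload-a"
    b = "spiffe://example.org/workload-b"
    c = "spiffe://example.org/workload-c"
    print(is_call_allowed(a, b))  # True:  A -> B is explicitly allowed
    print(is_call_allowed(a, c))  # False: A -> C is denied by default
```

Because the policy is keyed by identity alone, it applies unchanged wherever Kubernetes happens to schedule the workload.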

Why does data access have to be different? Why can’t the same identity define what data assets this workload can access? Why, when it comes to data, do we keep using keys, tokens, passwords, etc.?

A data-centric Zero Trust approach is designed to verify which software identity is allowed to access a particular data asset that is stored, transferred, or processed in the system. The most reliable method to protect data at rest is to encrypt it. Therefore, in order to implement the Zero Trust principle, we need to make sure that only workloads with the right identity can obtain the appropriate key to decrypt the asset. Once it is decrypted, the workload may process the data and allow other workloads to access it via APIs. It is therefore important to have unified methods of data access and API access authorization, which helps to visualize and control where particular data can flow within the system.
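Here is an equally minimal sketch of how the same identity could gate data access, assuming a hypothetical in-process key broker and using the Python cryptography package’s Fernet primitive purely for illustration; a real system would attest the workload at runtime and keep keys in a KMS or HSM.

```python
# Minimal sketch: identity-gated key release for an encrypted data asset.
# The broker, policy, and identities below are illustrative assumptions.
from cryptography.fernet import Fernet  # requires the 'cryptography' package

# Which workload identities may obtain the key for which data asset
# (mirrors the API-access allow-list shown earlier).
KEY_ACCESS_POLICY = {
    "customer-records": {"spiffe://example.org/billing-service"},
}

# The broker holds the data-encryption keys; workloads never persist them.
_DATA_KEYS = {"customer-records": Fernet.generate_key()}

def release_key(workload_id: str, asset: str) -> bytes:
    """Hand out the asset's decryption key only to an authorized identity."""
    if workload_id not in KEY_ACCESS_POLICY.get(asset, set()):
        raise PermissionError(f"{workload_id} may not access {asset}")
    return _DATA_KEYS[asset]

if __name__ == "__main__":
    asset = "customer-records"
    billing = "spiffe://example.org/billing-service"

    # The asset is encrypted at rest; only the broker ever sees the key.
    ciphertext = Fernet(_DATA_KEYS[asset]).encrypt(b"alice,plan=gold")

    # The billing service's identity satisfies the policy, so it can decrypt.
    print(Fernet(release_key(billing, asset)).decrypt(ciphertext))

    # Any other identity is refused the key, and the data stays opaque.
    try:
        release_key("spiffe://example.org/frontend", asset)
    except PermissionError as err:
        print(err)
```

The same identity that authorizes API calls decides whether the decryption key is ever released, which is what makes data access and API access expressible in one unified policy model.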

It’s worth noting that a strong software identity mechanism not only helps to decide who can talk to whom and who can access a particular key; it also ensures that only explicitly authorized software can process sensitive data after decryption, across the entire solution.

Of course, the overall robustness of such a solution depends on how reliably we can identify and authenticate workload software, especially at runtime, since most exploits happen while the software is running. It is also critical to reliably protect the data encryption keys throughout their entire lifecycle, and especially while they are in use, but that is a subject for another blog.

Stay home. Stay safe. Stay tuned.
