What’s Changing in Key-Value Databases


by Jelani Harper

Once, Key-Value database consumers were either extremely smart or extremely cheap. The former were technologically savvy enough to leverage the architectural strengths of this NoSQL store, while the latter had few options but to learn the considerably different terminology and technology required to make it work.

In their midst now arrives a third consumer, neither as technologically astute as the former nor as financially strapped as the latter, yet still willing to reap the benefits of combining NoSQL and relational technologies due to an ever-changing Data Management landscape most recently altered by:

  • The solidification of Big Data: Big Data is no longer a niche item, and is swiftly becoming an increasingly accepted part of the enterprise for business and operational processes.
  • The constant need for data: For many organizations, data is ingested and required all the time with a degree of continuity that makes latency and offline periods unacceptable.
  • The consumerization of IT: End users are frequently becoming the consumers of data and data-driven processes – shifting the access to and utility of data away from the backrooms of IT and DevOps and into the hands of the business.

This third consumer (which may very well outpace the first two in the near future) has had a profound effect on recent developments in the Key-Value database sphere, heralding a shift in the technology that has produced:

  • Increased SQL access
  • Support for transactional data
  • Meta table modeling
  • Greater integration capabilities
  • Advanced computations

According to Robert Greene, principal product manager/strategist for Oracle’s NoSQL database technology, such a change from a specialized technology reserved for the chosen few to one more widely embraced by mainstream data users was all but inevitable:

“For this stuff to become more widely adopted and moved into production, you have to get away from the small group of guys who were at or near the top of their class and went into startups. It’s necessary to get it to the average, everyday Joe working for a basic Fortune 2000 company on a regular project, but he has to think about how to make it work and he might get confused, so the technology is evolving to take on familiar concepts and terminology.”

The First Consumers Defined the Architecture

Despite recent developments to simplify the process of working with Key-Value data stores, the majority of their classic characteristics – some of which apply to NoSQL technologies in general – have survived and are continually sought by enterprises in a variety of industries. Key-Value databases were one of four types of NoSQL database (along with document stores, graph databases, and column-style databases) that emerged to handle the speed, scalability, and variation of Big Data.

Key-Value stores accomplish this task by assigning data (regardless of its structure) to storage nodes based on the value of a particular key. By replicating nodes, these data stores are able to scale to extreme levels while providing a number of critical benefits, including a high level of availability – across data centers and within the enterprise – as well as virtually unparalleled reliability based on rapid recovery in case of node failure.
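To make that routing idea concrete, here is a minimal, self-contained Python sketch of the pattern: a key is hashed to a primary node and the write is copied to the next nodes in a ring. The node names, replication factor, and in-process dictionaries are illustrative assumptions, not any vendor’s actual topology or API.

```python
import hashlib

# Illustrative cluster: node names and replication factor are assumptions.
NODES = ["node-a", "node-b", "node-c", "node-d"]
REPLICATION_FACTOR = 3

def nodes_for_key(key):
    """Hash the key to a primary node, then replicate to the next nodes in the ring."""
    digest = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)
    primary = digest % len(NODES)
    return [NODES[(primary + i) % len(NODES)] for i in range(REPLICATION_FACTOR)]

# Simple in-process stand-ins for the per-node stores.
stores = {name: {} for name in NODES}

def put(key, value):
    for node in nodes_for_key(key):      # write to the primary and its replicas
        stores[node][key] = value

def get(key):
    for node in nodes_for_key(key):      # any surviving replica can answer
        if key in stores[node]:
            return stores[node][key]
    return None

put("user:1001", '{"name": "Ada"}')
print(nodes_for_key("user:1001"), get("user:1001"))
```

Because any replica can serve a read, losing a single node leaves the data reachable, which is the availability and recovery property described above.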

When one factors in the speed at which Key-Value stores function and the cost advantages of NoSQL options in general, it becomes apparent that the incorporation of greater SQL functionality only enhances, and does not negate, the need for this technology. Greene noted that:

“If we tell the story about what’s changing with NoSQL and so much of the story starts to sound like a relational database, I think it’s very important to talk about what isn’t changing and what was the big deal. It’s architecture. It’s the fact that Key-Value database architecture so naturally lends itself to scaling in a way that aligns with how people build out hardware and cloud infrastructure these days, and the way people put together their data centers.”

Non-Relational SQL: The Access Paradigm

Traditionally, Key-Value stores utilized range-based filters over a particular key space that would return values for the data. This method was less precise than traditional declarative and imperative querying, which is why vendors have gradually extended data access (not data storage) support to conventional SQL-based querying (which some vendors implement as a subset of SQL tailored to their own database).
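As a rough illustration of the older access style, the Python sketch below scans an ordered key space with a range filter. The key-naming scheme and the in-memory store are assumptions made for the example, not a real product’s interface.

```python
import bisect

# Illustrative ordered key space; the "user/<id>/<field>" naming scheme is an assumption.
store = {
    "user/1001/name": "Ada",
    "user/1001/email": "ada@example.com",
    "user/1002/name": "Grace",
}
ordered_keys = sorted(store)

def range_scan(start, end):
    """Classic key-value access: return every entry whose key falls in [start, end)."""
    lo = bisect.bisect_left(ordered_keys, start)
    hi = bisect.bisect_left(ordered_keys, end)
    return {k: store[k] for k in ordered_keys[lo:hi]}

# Everything under the "user/1001/" prefix, expressed as a range filter.
# A SQL layer would state the same intent declaratively,
# e.g. SELECT name, email FROM users WHERE user_id = 1001.
print(range_scan("user/1001/", "user/1001/\xff"))
```

The range filter works, but the caller has to know how the keys are laid out; the appeal of a SQL layer is that the intent is stated independently of that layout.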

As a result, end users can not only leverage the conventional architectural boons of Key-Value stores – rapidity, availability, and scalability – but also query the data through a familiar relational language. Just as valuable, the increased SQL access allows for Data Modeling in a conventional tabular representation with which most SQL users are already familiar. Regardless of how the data is stored, the table meta model overlays the storage architecture and greatly reduces the complexity of modeling data for any number of applications. Deploying this methodology decreases time to production and results in more reliable application building.
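One plausible, purely illustrative way such a table meta model can overlay key-value storage is to encode each column of a row under a key like table/primary-key/column, as the short Python sketch below does; the encoding and names are assumptions, not any vendor’s actual layout.

```python
# Map a tabular row onto key-value pairs and back.
# Encoding scheme "<table>/<primary key>/<column>" is an illustrative assumption.

def row_to_kv(table, pk, row):
    """Flatten a row into one key-value pair per column."""
    return {f"{table}/{pk}/{column}": value for column, value in row.items()}

def kv_to_row(table, pk, store):
    """Reassemble the row from its column entries."""
    prefix = f"{table}/{pk}/"
    return {k[len(prefix):]: v for k, v in store.items() if k.startswith(prefix)}

store = {}
store.update(row_to_kv("customers", "1001", {"name": "Ada", "city": "London"}))
print(kv_to_row("customers", "1001", store))   # {'name': 'Ada', 'city': 'London'}
```

The application models its data as familiar tables, while the underlying store continues to distribute and replicate plain keys and values.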

The Emerging Big Data Management System: Integration

Another vital possibility opened up by increased access to data through SQL-type languages is integration. Accessing Key-Value store data through SQL readily enables users to aggregate it with relational data and data from traditional, proprietary sources so that they can apply the significance of one data set to another.

This capability is particularly useful considering that various forms of data warehousing and storage options – including NoSQL, relational databases, and Hadoop – have different workloads for which they are primed. However, there are frequently use cases in which data from these workloads must be combined and aggregated for the purpose of, say, analytics. As Greene observed, “Data never sits and serves one application; it always has a greater organizational utility. Having unified ways to access data regardless of its source becomes important.”
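As a rough sketch of that kind of aggregation, the example below uses Python’s built-in sqlite3 module as a stand-in relational engine: records pulled from a key-value source are loaded into a table and joined with relational data in a single declarative query. The table names and sample rows are invented for illustration.

```python
import sqlite3

# Stand-in for records extracted from a key-value store (e.g. clickstream events).
kv_events = [("1001", 5), ("1002", 2), ("1001", 3)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id TEXT PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [("1001", "Ada"), ("1002", "Grace")])

# Load the key-value-sourced records into a table so both sources speak SQL.
conn.execute("CREATE TABLE events (customer_id TEXT, clicks INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?)", kv_events)

# Aggregate across the two sources with one declarative query.
for name, clicks in conn.execute(
    """SELECT c.name, SUM(e.clicks)
       FROM customers c JOIN events e ON e.customer_id = c.id
       GROUP BY c.name"""):
    print(name, clicks)
```

The point is not the specific engine but the shape of the workflow: once the key-value data is reachable through SQL, joining it with relational data becomes a one-statement job.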

Transactions

One of the more prominent trends to impact Key-Value stores is increasing support for transactions and transactional data, which was typically limited in NoSQL options. However, a number of vendors have recently entered the marketplace announcing support for transactional data, heralding a movement in which virtually all of the major players (Oracle, Cassandra, Google) have introduced options for transactional support.

This change in Key-Value stores is another example in which the technology for these databases is helping to simplify their usability – in this case by making it substantially easier for developers to program. Greater transaction support and the deployment of table meta models for data modeling make application development considerably more viable for the enterprise, while the return to SQL access fuels a level of integration necessary for use cases when workloads require aggregation. Additionally, providing support for transactions substantially broadens the workload capabilities of Key-Value stores, increasing their utility for customers.
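A minimal, in-process sketch of what transactional grouping buys a developer is shown below: several writes either all become visible or none do. The API is purely illustrative and not any vendor’s interface; real stores typically scope such guarantees to a shard or key range.

```python
class TinyKV:
    """Toy key-value store whose writes can be grouped into one atomic unit."""

    def __init__(self):
        self.data = {}

    def transaction(self, writes):
        staged = dict(self.data)          # stage changes against a private copy
        try:
            for key, value in writes.items():
                if value is None:
                    raise ValueError(f"refusing to write null for {key}")
                staged[key] = value
        except ValueError:
            return False                  # abort: readers never see partial writes
        self.data = staged                # commit: all writes appear at once
        return True

kv = TinyKV()
kv.transaction({"order:42/status": "paid", "order:42/total": "99.50"})
kv.transaction({"order:43/status": None})   # aborts; the store is unchanged
print(kv.data)
```

Without that grouping, the application code itself would have to detect and repair half-applied updates, which is exactly the programming burden the trend described above removes.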

Cache Computations

The utility of Key-Value databases is also growing due to a greater emphasis on in-memory distributed computing. NoSQL databases can often cache data more effectively than many distributed cache systems due to the high level of reliability and consistency attributed to their chief architectural benefits – which Greene says is responsible for their replacement of grid caching for many of Oracle’s customers. The recent emphasis on the distributed capabilities of NoSQL stores will likely result in more profound computation closer to where the data actually lives, which will lead to improved analytics, data grouping, aggregation, and algorithm processing.
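The sketch below illustrates the general idea of computing close to the data: each node aggregates its own partition locally, and only small partial results travel to a coordinator that merges them. The node contents and sensor readings are invented for the example.

```python
# Illustrative scatter-gather aggregation; partitions and values are made up.
node_partitions = {
    "node-a": [("sensor-1", 21.0), ("sensor-2", 19.5)],
    "node-b": [("sensor-1", 22.5), ("sensor-3", 18.0)],
}

def local_aggregate(rows):
    """Runs on each node: per-sensor sum and count over the local partition only."""
    partial = {}
    for sensor, value in rows:
        total, count = partial.get(sensor, (0.0, 0))
        partial[sensor] = (total + value, count + 1)
    return partial

def merge(partials):
    """Runs on the coordinator: combine the small per-node summaries into averages."""
    merged = {}
    for partial in partials:
        for sensor, (total, count) in partial.items():
            t, c = merged.get(sensor, (0.0, 0))
            merged[sensor] = (t + total, c + count)
    return {sensor: total / count for sensor, (total, count) in merged.items()}

print(merge(local_aggregate(rows) for rows in node_partitions.values()))
```

Shipping the computation to the partitions instead of shipping the partitions to the computation is what makes analytics, grouping, and aggregation cheap at this scale.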

NoSQL Today

An overview of some of the more influential developments in the Key-Value database space reveals that the NoSQL movement is enduring and, more importantly, evolving. Key-Value stores are adding functionality by providing growing access to SQL (and SQL-like languages and queries), conventional table meta models for improved data modeling, support for transactional data, and advanced computational processing due to their distributed tendencies. The integration capabilities of the burgeoning support for SQL enable better opportunities for analytics and search options.

All of these developments make Key-Value stores more accommodating to customers and help to simplify some of the crucial processes for their workloads – increasing their utility while preserving the core benefits of their low-cost, extremely reliable scalability necessary to utilize and integrate Big Data into daily business and operations processes. Greene commented that:

“Today, NoSQL Key-Value stores are giving people a way to use set-based modeling techniques while baking application data relationships right into their database storage models. That’s where they get the fast performance. But typically, you can only get so many use cases to fit that form well before you want to add more use cases that don’t fit the form and you need to extend your capability. Extending these Key-Value implementations to have a set-based query capability on top of that data enables one to add these extra use cases pretty easily. That ability to extend beyond the early Key-Value limitations is opening the way to more NoSQL-backed applications.”

 
