
NoSQL Now! Observations, Part 1: Enterprise Adoption and Benchmarks

September 6, 2011

By: Robert Greene

I just returned from the tremendous NoSQL Now! Conference in San Jose. The sessions were rife with promising ideas and the floor was buzzing with discussion. A few key themes emerged over the three days.

Fitting NoSQL into the Enterprise

First, I couldn’t help but notice a surprising number of enterprises in attendance. This is certainly indicative of a larger trend: NoSQL will play an important role in enterprise business solutions. However, for enterprise adoption to work, early-generation NoSQL technology needs to evolve to meet the needs of enterprise development and deployment without sacrificing the core NoSQL value delivered by the shift to a soft schema. To that end, it was clear that transactions, standards-based APIs, and robust production tooling are increasingly recognized as the necessary features for next-generation NoSQL products. Ultimately, the solutions that deliver the core architectural shift ushered in by NoSQL, together with enterprise-class tooling and APIs, will be the likely winners in the inevitable consolidation within the space.

A Benchmarking Debate

Second, benchmarking was a hot topic. Analyst Robin Bloor summed it up nicely: “The database is no longer a commodity. Do your own proof of concept to make sure the choice is the right one for your problem.” This rings especially true given that standard database benchmark tests are in some sense out of touch with reality. Existing industry benchmarks essentially measure only the cost of moving a piece of data in and out of a relational database, something that represents only a small percentage of the overall unit of work. The true cost of a unit of work in today’s computational environment involves several components, including the programming language, object mapping, population, and storage in and out of the database. That cost is significantly affected by the type of database itself and the inevitable work needed to move raw data into its behavioral context. For example, if you tell the world that you are 40 times faster on something that represents only 10 percent of the total operational cost, then you have really improved the overall system by only about ten percent under the very best conditions. That’s simply not good enough.
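The arithmetic behind that last point is just Amdahl’s law: speeding up one component helps only in proportion to that component’s share of the total work. A minimal sketch (the function name is my own, not from the article):

```python
def overall_speedup(fraction, component_speedup):
    """Amdahl's law: overall speedup when only `fraction` of the
    total unit of work is accelerated by `component_speedup`."""
    return 1.0 / ((1.0 - fraction) + fraction / component_speedup)

# A 40x gain on a component that is only 10% of the unit of work:
print(overall_speedup(0.10, 40))  # ~1.108, i.e. roughly an 11% improvement
```

However impressive the 40x component number sounds, the other 90 percent of the work dominates, so the overall gain barely registers.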

Because of this, new benchmarks are needed to reflect the true cost of computing, touching all areas impacted by the choice of database. This leads to the conclusion that some databases are more suitable than others for certain types of application requirements. For example, if it is pure hierarchical XML content without interlinking (moving straight from a web-page form to the database), then quite likely an XML database or something similar would be the right choice. On the other hand, if you are dealing with richly linked, networked enterprise domain models, then something like an object-oriented database is a better choice. Of course, there are other factors and other application profiles, but the key is to make sure you’re using the right tool for the job.

Stay tuned for part two of my NoSQL Now! observations.

