
Data Warehouses and GPUs: Big Data at High Speed


“Three years ago it was tough to tell the market that they should put a Data Warehouse on top of something that runs on top of GPUs,” said Ami Gal, CEO and co-founder of SQream. “Now it’s clear that GPUs are storming A.I., Machine Learning, and data centers,” and GPU technology has become an accepted way to run queries on massive datasets.

GPU is an acronym for Graphics Processing Unit, a term Nvidia introduced in 1999 for chips designed to accelerate graphics and video rendering for gaming. Shortly thereafter, Gal wondered whether that raw speed could be put to use running a database. “We thought it would be cool if we could effectively run a SQL query on a GPU,” something that was considered impossible at the time. “And when you say to an Israeli engineer, ‘Oh, it’s impossible,’ you only give him motivation to work on it.”

But Gal wasn’t satisfied with just proving that it could be done. Seeing the potential for changing the playing field, he thought that if he could effectively run a SQL query on a GPU with 5,000 computing cores, then “everybody will use it.” At that time, a GPU had all 5,000 of those highly parallel computing cores on the same chip, designed for gaming, image rendering, and vector processing. His theory was, “If you can have the GPU continue to work the same way it does and make it think it’s still working on vectors,” but instead the vectors are in a Data Warehouse with the query on top, “then you can do 5000 compute situations in a certain fraction of the time.”
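The idea can be illustrated with a toy example. The sketch below uses plain NumPy on the CPU, with made-up column data, to show how a simple SQL filter-and-aggregate reduces to exactly the kind of vector operations a GPU is built to run in parallel; it is a conceptual illustration only, not SQream’s implementation.

```python
import numpy as np

# Two "columns" of a toy sales table, stored as vectors.
region = np.array([1, 2, 1, 3, 1, 2], dtype=np.int32)   # region_id column
amount = np.array([10.0, 7.5, 3.2, 9.9, 4.1, 6.0])      # amount column

# SELECT SUM(amount) FROM sales WHERE region_id = 1;
mask = (region == 1)          # the WHERE clause becomes an element-wise comparison
total = amount[mask].sum()    # the aggregate becomes a vector reduction

print(total)  # 17.3
```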

GPU architecture is completely different from CPU architecture, he said, and early attempts to run existing databases on top of GPUs couldn’t deliver the performance they wanted, because those databases were designed for CPUs. “We had to build our own database specifically for the architecture of GPUs.”

After three or four years working with a very small budget, exploring different technologies, different architecture, and different innovations, he found the right architecture to support large datasets. “This was key for us,” he said.

Why GPUs?

“Today if you want to crunch very large datasets, you need a lot of talent, you need a lot of money, and you need to work a lot,” or, you need to run on GPUs, he said. He outlined four criteria for success with GPU technology:

  • The ability to run queries on massive datasets: easily scaling from 500 GB to 40 TB
  • High performance: cutting query time from hours or days down to seconds or minutes
  • Low cost and a small hardware footprint: two machines can do the work of eight racks
  • Ease of use: no specialized training needed to be successful

“GPUs are amazingly built for parallel processing and there are many ways to use GPUs today in order to run processes.” GPUs operate at lower frequencies than CPUs, and typically have many, many more cores.
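As a rough illustration of that parallelism, the sketch below pushes the same style of columnar filter-and-sum onto a GPU using the open-source CuPy library. This is my own illustrative example, not SQream’s code: it assumes CuPy and an NVIDIA GPU are available, and the table is random toy data. The element-wise comparison and the reduction are spread across the GPU’s thousands of cores.

```python
import cupy as cp  # assumption: CuPy is installed and an NVIDIA GPU is available

n = 10_000_000                                     # ten million rows of toy data
region = cp.random.randint(0, 10, size=n)          # region_id column, generated on the GPU
amount = cp.random.random(n).astype(cp.float32)    # amount column, generated on the GPU

# SELECT SUM(amount) FROM sales WHERE region_id = 1;
mask = (region == 1)              # the comparison runs element-wise across GPU cores
total = cp.sum(amount * mask)     # the masked reduction also runs on the GPU

print(float(total))               # copy the single scalar result back to the host
```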

SQream DB


SQream Technologies was founded in 2010 and debuted SQream DB in 2014 as a GPU database for fast, scalable SQL analytics. According to the company:

“SQream DB is a full-featured GPU-accelerated Data Warehouse, capable of handling the most complex queries. Because SQream DB uses standardized SQL, common language bindings, and standard hardware, Deep Learning technologies such as TensorFlow and Theano work ‘hand in glove’ with SQream, reducing modeling and experiment time.”

SQream DB is ANSI SQL-92 and ACID compliant, and interactions are the same as with any other RDBMS. Users can query directly or through a connector such as ODBC or the native Python connector, he said. SQL commands are parsed and converted to relational algebra for further processing and optimization. By converting SQL queries into highly parallelizable relational algebra operations, SQream DB can efficiently run complex operations across massively parallel GPU cores.
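As an example of the connector route, the short sketch below runs a query over ODBC with the standard pyodbc package. The DSN name, table, and columns are placeholders invented for illustration; the actual connection string would come from your own ODBC configuration, and the native Python connector follows the same DB-API pattern.

```python
import pyodbc

# Connect through an ODBC DSN; "SQreamDB" is a placeholder DSN name
# that would be defined in your local ODBC configuration.
conn = pyodbc.connect("DSN=SQreamDB", autocommit=True)
cursor = conn.cursor()

# An ordinary ANSI SQL query; the table and column names are hypothetical.
cursor.execute("""
    SELECT region_id, SUM(amount) AS total_amount
    FROM sales
    GROUP BY region_id
    ORDER BY total_amount DESC
""")

for region_id, total_amount in cursor.fetchall():
    print(region_id, total_amount)

cursor.close()
conn.close()
```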

“SQream DB is a scalable columnar database designed to optimize the strengths of GPU,” said Gal. Scalability is achieved by “chunking,” a process of hyper-partitioning data in multiple dimensions that “is automatically and transparently done during ingest. Users query and interact with all of their data, just like a regular table, while SQream DB tables seamlessly grow to sizes that other databases can’t support.”
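To give a feel for what hyper-partitioning buys you, the sketch below chunks a single column into fixed-size pieces and records per-chunk min/max metadata, so a query can skip chunks that cannot contain matching rows. It is a simplified, hypothetical illustration of the general technique, not SQream DB’s actual storage format; the chunk size and helper names are invented.

```python
import numpy as np

CHUNK_ROWS = 1_000_000  # hypothetical chunk size

def chunk_column(values, chunk_rows=CHUNK_ROWS):
    """Split a column into chunks and keep min/max metadata per chunk."""
    chunks = []
    for start in range(0, len(values), chunk_rows):
        data = values[start:start + chunk_rows]
        chunks.append({"data": data, "min": data.min(), "max": data.max()})
    return chunks

def sum_where_equal(chunks, target):
    """SELECT SUM(x) WHERE x = target, skipping chunks the metadata rules out."""
    total = 0.0
    for chunk in chunks:
        if target < chunk["min"] or target > chunk["max"]:
            continue  # this chunk cannot contain the value: skip reading it
        data = chunk["data"]
        total += data[data == target].sum()
    return total

column = np.random.randint(0, 1000, size=10_000_000)
chunks = chunk_column(column)
print(sum_where_equal(chunks, 42))
```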


This linear scaling capability is particularly effective in the multi-terabyte range, where scaling with CPUs is not cost-effective. Compute and storage are separated, allowing each to scale independently. “SQream DB can ingest flat files like CSVs and Parquet, and via network sources like Spark or JDBC,” said Gal. It can be used by itself or with third-party integration and ETL tools, and it can run on-premise or in the Cloud, the company said. SQream DB uses AI-assisted auto-compression to determine the optimal compression for customer data, which the company calls an industry first.
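As one generic way to move a flat file into the warehouse over the same ODBC connection shown earlier, the sketch below reads a Parquet file with pandas and inserts the rows with a parameterized statement. The file path, table, and columns are placeholders, and a production bulk load would normally go through the database’s own ingestion tooling rather than row-wise inserts; this is only a minimal illustration.

```python
import pandas as pd
import pyodbc

# Read a local Parquet file; the path and schema are hypothetical.
df = pd.read_parquet("sales_2024.parquet")   # requires pyarrow or fastparquet

conn = pyodbc.connect("DSN=SQreamDB", autocommit=True)   # placeholder DSN
cursor = conn.cursor()
cursor.fast_executemany = True   # batch the parameterized inserts

cursor.executemany(
    "INSERT INTO sales (region_id, amount) VALUES (?, ?)",
    list(df[["region_id", "amount"]].itertuples(index=False, name=None)),
)

cursor.close()
conn.close()
```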

Use Case

Gal discussed a specific use case: a Telecom network operator with over 40 million subscribers that wanted faster Business Intelligence in order to improve customer satisfaction. Running an MPP Data Warehouse of 40 compute nodes across five racks, the company started by analyzing several months’ worth of call data records, customer profiles, and customer-registered product information, totaling approximately 14 TB. The existing system took just over two minutes to complete simple and conditional queries, and two and a half minutes for complex queries, he said. “SQream DB was able to complete the simple and conditional queries in ten to twelve seconds, and the complex query in just over 30 seconds,” he noted. In a ten-step report-generation process, SQream took just over eight minutes to complete a process that took up to three hours on the customer’s existing system.

The Future of GPUs

Moving forward, Gal predicts a new way of computing, with expanding capabilities for GPUs, and a convergence of data crunching, Data Science, and AI running on top of GPUs.

“I think we’re going to see much more of these coming in the next few years and new computing methodologies fostering even more. This is how we see the future in the next five to ten years.”

Pointing to current discussions about quantum computing, he said, “I think that in ten years we’re going to see Data Lakes and Data Warehouses being queried, running on top of quantum machines.”

Gal said that SQream is now partnering with Alibaba Cloud, the Cloud Computing arm of Chinese multinational conglomerate Alibaba Group. SQream will give Alibaba’s customers access to reliable, scalable, large-scale data storage of up to several petabytes, according to a SQream press release. “They were testing every technology on the planet and we’re very happy that they chose us.” In addition, Alibaba decided to invest in SQream’s Series B round, he said.

Gal found ways to use GPU technology to address Data Warehouse pain points such as query speed, hardware cost, and footprint. For SQream’s clients, an important ingredient for meeting KPIs is how much data can be loaded into the Data Warehouse per hour, he said.

“A lot of our use cases are talking about two terabytes an hour. We’ve been able to double that size to between four and six terabytes an hour, which is a massive improvement for many of our clients,” he said. Complex multi-table JOINs now run 15 times faster, and SQream’s latest release includes further performance enhancements to data ingestion.

As Gal predicted years ago, running a SQL query on top of a GPU is not impossible. “In the early days, when we started the company and had a lot of optimism, it was a good thing that we didn’t listen to all the naysayers.”
