Big Data requires Big Data Centers. When you build a place to handle Big Data, you have many computers all working together. This raises considerations regarding a common ground for all power cables, how much fuel to keep on hand for the back-up generators, the orientation of the air-conditioning ducts, the proper heights of blade cabinets, and whether you really need to run Fibre Channel over Ethernet.
These physical considerations have usually been left to others, but if you are involved in Big Data in your Big Data Center, you are hip deep in these discussions. You are also looking for a good deal in 10 Gig switches. (If you find one, let us know.)
This is significant because we are, as a culture, at a turning point in our hardware-software love affair. If I may, I would like to take you on a short stroll down memory lane and reflect on the absence of 5 Gigahertz computer chips. Actually, precious few even reached 4 Gigahertz. The problem was not that they did not work; the problem was that chips running that fast became so hot they required a huge cooling tower and multiple fans. Imagine a tiny Three-Mile-Island style cooling tower inside your PC.
A few computer gamers actually build such machines. They over-crank their CPUs outrageously and connect multiple video cards, each with one or more fans, to feed huge monitors showing HD-level graphics while listening loudly to seven-channel audio. They need loud seven-channel sound to cover the whine of more than a dozen fans spinning at top speed to cool their computer. Many of these people appear brave in their combat games, but shrink with fear when opening their monthly electricity bill. Therein lies the problem, and the direction of today's trend.
Data Center operators became aware early of the cost of power because they pay for it twice: once to heat up the CPUs and then again to cool them down. Manufacturers of computer chips and chipsets responded to this and began reducing chip heat production some years ago. Each new generation of CPU coming from the foundries extracts more computing speed using less power than the one before it. This is a good and green trend. It also means that care must be taken when purchasing equipment for your new data center. The bargain-priced blades may work well, but might eat up your savings in power costs in short order. New energy-efficient units running at lower clock speeds can actually save money while still vastly increasing your overall processing throughput. As the chips become smaller, they use less power to go just as fast. Not a bad deal.
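The "pay for power twice" argument can be put into rough numbers. The sketch below is a back-of-envelope comparison, not vendor data: the server prices, wattages, electricity rate, and the assumption that every watt of heat costs another watt to remove are all illustrative figures chosen for the example.

```python
# Back-of-envelope sketch of the "pay for power twice" argument.
# All figures below are illustrative assumptions, not vendor data.

def annual_power_cost(watts, rate_per_kwh=0.10, cooling_overhead=1.0):
    """Yearly cost of running one server: the power the server draws
    plus the power spent cooling it. A cooling_overhead of 1.0 means
    every watt of heat takes another watt to remove."""
    hours = 24 * 365
    kwh = watts * (1 + cooling_overhead) * hours / 1000
    return kwh * rate_per_kwh

# Hypothetical bargain blade: cheaper up front, hungrier at the wall.
bargain = 2000 + 3 * annual_power_cost(500)    # price + 3 years of power
# Hypothetical efficient blade: pricier up front, frugal to run.
efficient = 3000 + 3 * annual_power_cost(200)

print(f"bargain blade, 3-year total:   ${bargain:,.2f}")
print(f"efficient blade, 3-year total: ${efficient:,.2f}")
```

Under these assumed numbers the bargain blade costs more over three years than the efficient one, which is the whole point: the sticker price is only part of the bill.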
This is not a sales message for buying new computers (disclaimer: I do own stock in some high-tech manufacturing companies, but so, most likely, do you); it is a consideration for moving into the future responsibly and prudently. Building a data center that can take advantage of cooler weather to turn off the AC is also wise. Even Phoenix, where temperatures remain above 100 °F all summer, has mild winters that allow outside air to cool the equipment.
Of equal importance is the overall stability of the area in which the center is located. Building over earthquake fault lines is contraindicated, as are areas that flood during storms. Amazingly, Calgary in Canada and Phoenix in Arizona are considered some of the more stable areas, where natural disruptions are few and power supplies are redundant. (Outside-air cooling happens more in Calgary.) With the speed of electronic connectivity being what it is, cyberspace makes physical location less important. Indeed, some people choose data centers simply where the utility costs are lower.
How hot is your Big Data data center? Not as hot as it would have been six years ago and probably hotter than it will be six years in the future.