By Alan Chang.
As basic IT infrastructure shifts from on-premises private IT data centers to public/hybrid clouds with growing demands for more computing performance, the world is facing a new challenge: Cloud data centers now consume as much power as entire cities, while generating carbon emissions that are destabilizing the planet’s ecosystems.
In 2020, data centers in the United States were estimated to consume over 73 billion kilowatt-hours of electricity – more than what was needed to power Los Angeles, California (66 billion kWh) in 2019 – while generating millions of tons of carbon dioxide, sulfur dioxide, and smoke. Server power consumption is a global problem that gets worse every year: data centers located in China used over 160 billion kilowatt-hours of electricity, exceeding demand from the city of Shanghai (157 billion kWh), and their energy usage is compounding annually at a rate exceeding 10.6%.
Some data center operators claim to be offsetting their heavy reliance on fossil fuels with increasing use of green energy, but the reality is that their fossil fuel consumption also keeps rising, leading to more carbon emissions. Local cloud servers already account for 2.35% of China’s total electricity consumption, and 14.9% of the service sector’s electricity consumption – numbers that won’t drop if data centers continue to grow in number and power draw.
Open computing offers a solution to this growing problem. By sharing and standardizing everything from product designs and specifications to intellectual property rights, major players in the computing and networking industries are promoting interoperability and integration on a previously unthinkable scale.
Historically, open computing was limited to complete rack servers, but over the past decade has expanded to a wider array of servers, as well as individual components: storage, networking parts, heat dissipation technologies, infrastructure management hardware, and even power supplies – basically all aspects of the modern data center. Rather than buying complete systems off the shelf, open computing customers now purchase the modular elements they need to build solutions, avoiding unnecessary overlaps in hardware, physical space, and costs. Why buy and power 10 separate computers when you can buy one chassis that holds the same amount of storage, processing power, and networking capacity in a less wasteful total volume?
Telecom operators seeking greater efficiencies have been global pioneers for the open computing industry, combining modular servers with centralized power supplies and heat management hardware to achieve economies of scale. One provider has improved space utilization by as much as 90%, dropped power consumption by up to 20%, and increased return on investment by 33%, all while embracing a more minimalist design ethos with incredible flexibility for different applications.
Today, there’s a complete industrial ecosystem supporting open computing in data centers, enabling original equipment manufacturers (OEMs) and original design manufacturers (ODMs) to develop and commercialize products following common data center frameworks. Leading chip, component, and computer makers are all on board, as are internet peripheral makers and overall data center solution providers.
The list of open computing backers is significant and growing: the Open Compute Project (OCP), the originator of open computing, now counts Microsoft, Google, Intel, ARM, NVIDIA, HPE, Facebook, Alibaba, Baidu, Tencent, Inspur, and Quanta among its members. Sponsoring nine categories and 23 project groups, OCP promotes open computing across everything from server, storage, and networking solutions to components such as racks and power supplies. Given the number and nature of participants, it’s clear that open computing will be a major contributor to future IT infrastructure innovation and development.
That said, there’s no guarantee of success in combating growing power consumption and carbon emissions. According to a recent Open Computing White Paper from global research firm Omdia, sponsored by my company, only 7% of the servers in the world were based on open standards as of 2016 – a number that is expected to reach 36% by the end of 2021. However, current projections suggest a slow climb to just 40% in 2025.
To exceed that projection, a greater number of data centers must embrace open computing, and financial benefits might be the clearest way to drive change. Research shows that the use of open computing equipment can cut capital and operating expenses by 30%, enough to make a significant impact on any cloud-heavy organization’s bottom line.
Facebook, for instance, found that an open computing server design cut capital expenditures by 45% and operating costs by 24%, while also improving energy efficiency by 38%. Baidu was able to lower the total cost of operating rack servers by 10% with Scorpio, an open rack-server specification, while improving the power usage effectiveness (PUE) of its latest data center to 1.2 – the closer the number is to 1.0, the better. That’s “very efficient” by data center standards, superior to Baidu’s overall score of 1.3, and well below the industry average of 1.8 to 2.0.
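The PUE arithmetic behind these figures is straightforward: PUE is total facility energy divided by the energy delivered to IT equipment, so a score of 1.2 means only 20% of the facility's energy goes to overhead such as cooling and power conversion. A minimal sketch (the kWh figures below are illustrative, not from the white paper):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness = total facility energy / IT equipment energy."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1.2 GWh to deliver 1.0 GWh of IT load scores 1.2,
# i.e. 20% overhead beyond the IT equipment itself.
print(round(pue(1_200_000, 1_000_000), 2))  # 1.2

# Annual overhead energy avoided by moving from the industry-average
# PUE of 1.8 down to 1.2, holding a hypothetical IT load fixed:
it_load_kwh = 10_000_000
savings_kwh = it_load_kwh * (1.8 - 1.2)
print(f"{savings_kwh:,.0f} kWh of overhead avoided per year")
```

At a fixed IT load, every 0.1 reduction in PUE removes that fraction of the IT load from the facility's overhead bill, which is why operators track the metric so closely.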
Tech giants have a disproportionate impact on data center trends, but they aren’t the only organizations that can drive change. While open computing originated in the largest-scale data centers, it’s now moving into small- and medium-sized data centers, as well as industries that aren’t fully internet-based. Medical care, automobile manufacturing, finance, gaming, and e-commerce companies are trying to deploy IT infrastructures that comply with open computing standards. Omdia expects that non-internet companies will represent 21.9% of the market for open computing in 2025, up from 10.5% in 2020.
Some of that change is being driven by the increased demand for artificial intelligence (AI) and edge computing across many major industries – as well as open computing’s ability to spur AI innovation. Discrete, proprietary AI hardware, software, and tool solutions have limited AI’s adoption in data centers. By standardizing the AI solutions available to organizations, including AI coprocessors and acceleration modules, OCP is easing the process of designing AI infrastructure, while enabling large-scale adoption of complete AI solutions.
Similarly, OCP and the Open Data Center Committee (ODCC) have assigned project teams to advance edge computing across multiple industries, notably including telecommunications deployments of Open Edge servers in North America and Europe. Trials based on ODCC’s OTII standard are currently underway with other telecom operators, with the expectation that continued optimizations will enable capital expenditure savings of 30% and operating expenditure reductions of 53%.
After 10 years of active development, the open computing movement is already yielding financial and ecological benefits, offering a clear path towards a greener future, and my company has followed that path with cooler and more power-efficient servers. Now that large data centers and major users such as telecom providers have led the way, the next step is for a wider array of small- and medium-sized data centers to embrace open computing and hasten the process of replacing old, inefficient technology with the servers that will power a smarter, cleaner future. The open computing ecosystem will continue to grow, but in order for it to truly flourish, adoption will need to expand to more organizations and countries – sooner, rather than later.