Is a lack of standards holding immersion cooling back? • The Register


Comment Liquid and immersion cooling have undergone something of a renaissance in the datacenter in recent years as components have grown ever hotter.

This trend has only accelerated over the past few months as we've seen a flurry of innovation and development around everything from liquid-cooled servers and components to vendors that believe the only way to cool these systems long term is to submerge them in a vat of refrigerants.

Liquid and immersion cooling are by no means new technologies. They've had a storied history in the high-performance computing space, in systems like HPE's Apollo and Cray lines and Lenovo's Neptune, to name just a handful.

A major factor driving the adoption of this tech in traditional datacenters is a combination of more powerful chips and a general desire to cut operating costs by curbing energy consumption.
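The energy argument is usually framed in terms of power usage effectiveness (PUE), the ratio of total facility power to the power drawn by the IT equipment alone. A minimal sketch, using entirely hypothetical figures, of how a lower cooling overhead feeds into that ratio and into annual energy use:

```python
# Power usage effectiveness: total facility power divided by IT power.
# All figures below are hypothetical, for illustration only.

def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """Ratio of total facility power to IT power (1.0 is the ideal)."""
    return (it_kw + cooling_kw + other_kw) / it_kw

air_cooled = pue(it_kw=1000, cooling_kw=450, other_kw=100)  # -> 1.55
immersion  = pue(it_kw=1000, cooling_kw=80,  other_kw=100)  # -> 1.18

# Annual energy saved by the lower overhead, at the same 1 MW IT load.
hours_per_year = 8760
saved_mwh = (air_cooled - immersion) * 1000 * hours_per_year / 1000
print(f"air-cooled PUE {air_cooled:.2f}, immersion PUE {immersion:.2f}")
print(f"~{saved_mwh:,.0f} MWh/year saved at 1 MW of IT load")
```

The overhead numbers are invented, but the shape of the argument is the real one: cooling is the biggest non-IT line item, so shrinking it moves the whole ratio.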

One of the challenges, however, is that many of these systems employ radically different form factors from those typical of air-cooled datacenters. Some systems only require modest changes to the existing rack infrastructure, while others ditch that convention entirely in favor of massive tubs into which servers are vertically slotted.

The ways these technologies are being implemented are a mixed bag, to say the least.

This challenge was on full display this week at HPE Discover, where the IT goliath announced a collaboration with Intel and Iceotope to bring immersion-cooling tech to HPE's enterprise-focused ProLiant server line.

The systems can now be provisioned with Iceotope's Ku:l immersion and liquid-cooling technology via HPE's channel partners, with support provided by distributor Avnet Integrated. Iceotope's designs meld elements of immersion cooling and closed-loop liquid cooling so the technology can be deployed in rack environments with minimal changes to the existing infrastructure.

Iceotope's chassis-level immersion-cooling platform effectively uses the server's case as a reservoir and then pumps coolant throughout to hotspots like the CPU, GPU, or memory. The company also offers a 3U conversion kit for adapting air-cooled servers to liquid cooling.

Both designs utilize a liquid-to-liquid heat exchanger toward the back of the chassis, where deionized water is pumped in and heat is removed from the system using an external dry cooler.

This is a stark departure from the approach used by rival immersion-cooling vendors, such as LiquidStack or Submer, which favor submerging multiple systems in a tub full of coolant — commonly a two-phase refrigerant or specialized oil.

While this approach has shown promise, and has even been deployed in Microsoft’s Azure datacenters, the unique form factors may require special consideration from building operators. Weight distribution is among operators’ primary concerns, Dell’Oro analyst Lucas Beran told The Register in an earlier interview.

The lack of a standardized form factor for deploying and implementing these technologies is one of several challenges Intel hopes to address with its $700 million Oregon liquid and immersion cooling lab.

Announced in late May, the 200,000-square-foot facility, at Intel's Hillsboro campus about 20 miles west of Portland, will qualify, test, and demo the chipmaker's expansive datacenter portfolio using a variety of cooling tech. Intel is also said to be working on an open reference design for an immersion-cooling system that's being developed by Intel Taiwan.

Intel plans to bring other Taiwanese manufacturers into the fold before rolling out the reference design globally. Whether the x86 giant will be able to bring any consistency to the way immersion cooling will be deployed in datacenters going forward remains to be seen, however.

Even if Intel’s reference design never pans out, there are still other initiatives pursuing similar goals, including the Open Compute Project’s advanced cooling solutions sub project, launched in 2018.

It aims to establish an ecosystem of servers, storage, and networking gear built around common standards for direct contact, immersion, and other cooling tech.

In the meantime, the industry will carry on chilling the best way it can. ®

Arm has a champion in the shape of HPE, which has added a server powered by the British chip designer's CPU cores to its ProLiant portfolio, aimed at cloud-native workloads for service providers and enterprise customers alike.

Announced at the IT titan's Discover 2022 conference in Las Vegas, the HPE ProLiant RL300 Gen11 server is the first in a series of such systems powered by Ampere's Altra and Altra Max processors, which feature up to 80 and 128 Arm-designed Neoverse cores, respectively.

The system is set to be available during Q3 2022, so sometime in the next three months, and is essentially an enterprise-grade ProLiant server – but with an Arm processor at its core instead of the more usual Intel Xeon or AMD Epyc x86 chips.

Server maker Inspur is going all-in on liquid cooling, making cold plate cooling technology available across its portfolio and working with third parties to assemble full-lifecycle solutions.

Inspur, which is a big supplier to cloud providers, said the move is another step towards becoming carbon neutral. It will offer cold plate liquid-cooling tech for all of its products, including general-purpose servers, high-density servers, rack servers, and the systems it labels as AI servers.

Cold plate cooling technology sees a liquid coolant circulated through heatsinks attached to components such as the CPU that generate a lot of heat. The heat is typically transferred by the coolant to a heat exchanger from where it can be dissipated, or is transferred to an external coolant circuit.
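The capacity of such a loop follows directly from the coolant's flow rate and temperature rise (Q = ṁ · c_p · ΔT). A back-of-the-envelope sketch, assuming a water-based coolant and made-up loop parameters:

```python
# Heat carried by a coolant loop: Q = mass_flow * specific_heat * temp_rise.
# Water's specific heat is ~4186 J/(kg*K); the loop figures are hypothetical.

def heat_removed_watts(mass_flow_kg_s: float, delta_t_k: float,
                       cp_j_per_kg_k: float = 4186.0) -> float:
    """Thermal power (W) absorbed by the coolant for a given flow and temperature rise."""
    return mass_flow_kg_s * cp_j_per_kg_k * delta_t_k

# e.g. 0.05 kg/s of water warming by 10 K across the cold plates:
q = heat_removed_watts(mass_flow_kg_s=0.05, delta_t_k=10.0)
print(f"{q:.0f} W")  # ~2,093 W - roughly a pair of high-TDP server CPUs
```

Water's high specific heat is why even a modest trickle of coolant can match airflow that would otherwise take banks of fans, which is the core appeal of cold plates.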

Memory maker Micron has announced availability of DDR5 server DRAM components in preparation for server and workstation platforms from Intel and AMD that are due to support the faster memory standard.

Micron said its DDR5 server memory parts are now available through commercial and industrial channel partners in support of qualification for next-generation server and workstation systems based on Intel and AMD CPUs.

In other words, the memory chips are here, but the servers are not yet ready for them. Intel's Sapphire Rapids Xeon Scalable processor family will support the new memory standard, but Intel has repeatedly delayed this platform and volume production is not expected until later this year. AMD's Genoa, the first of its fourth-gen Epyc server chips, is also expected to arrive in the fourth quarter of this year with support for DDR5.

China-based server maker Inspur has joined the Arm server ecosystem, unveiling a rackmount system using Arm-based chips.

It said it has achieved Arm SystemReady SR certification, a compliance scheme run by the chip designer and based on a set of hardware and firmware standards that are designed to give buyers confidence that operating systems and applications will work on Arm-based systems.

Inspur may not be a familiar name to many, but the company is a big supplier to the hyperscale and cloud companies, and was listed by IDC as the world's third-largest server vendor by market share as recently as last year.

Extending a public-cloud-like experience to on-prem datacenters has long been a promise of HPE's GreenLake anything-as-a-service (XaaS) platform. At HPE Discover this week, the company made good on that promise with the launch of GreenLake for Private Cloud.

The platform enables customers "to have a cloud in their premises wherever the data is, whether it's at the edge, it's at a colo datacenter, or is at any other location," Vishal Lall, SVP and GM for HPE GreenLake cloud services solutions, said during a press briefing ahead of Discover.

Most private clouds up to this point have been custom-built environments strapped together with some automation, he said. "It was somewhat of an improvement over the DIY infrastructure, but it really wasn't private cloud."

Analysis Jim Chanos, the infamous short-seller who predicted Enron's downfall, has said he plans to short datacenter real-estate investment trusts (REIT).

"This is our big short right now," Chanos told the Financial Times. "The story is that, although the cloud is growing, the cloud is their enemy, not their business. Value is accrued to the cloud companies, not the bricks-and-mortar legacy datacenters."

However, Chanos's premise that these datacenter REITs are overvalued and at risk of being eaten alive by their biggest customers appears to overlook several important factors. For one, we're coming out of a pandemic-fueled supply chain crisis in which customers were willing to pay just about anything to get the gear they needed, even if it meant waiting six months to a year to get it.

Microsoft is to deploy its "grid-interactive UPS technology" at the company's datacenter in Dublin, Ireland, later this year to demonstrate how such technology may be used to help decarbonize power grids.

The Redmond software giant disclosed last month how it and power management specialist Eaton were jointly working on technology that would allow the energy storage systems used for backup power in datacenters to also help smooth out any variability in the power grid due to the unpredictability of renewable energy sources.

Now Microsoft is moving to implement this, saying that its datacenter in Dublin will be a part of the solution to this problem later this year.

Analysis Lenovo fancies its TruScale anything-as-a-service (XaaS) platform as a more flexible competitor to HPE GreenLake or Dell Apex. Unlike its rivals, Lenovo doesn't believe it needs to mimic all aspects of the cloud to be successful.

While subscription services are nothing new for Lenovo, the company only recently consolidated its offerings into a unified XaaS service called TruScale.

On the surface, TruScale ticks most of the XaaS boxes — cloud-like consumption model, subscription pricing — and it works just like you'd expect. Sign up for a certain amount of compute capacity and a short time later a rack full of pre-plumbed compute, storage, and network boxes is delivered to the place of your choosing, whether that's a private datacenter, colo, or edge location.

The datacenter is dead – at least according to FedEx, which announced plans to close its server farms and transition completely to the cloud, where it hopes to save an estimated $400 million annually.

At FedEx's investor relations day held last week, CIO Rob Carter said FedEx had long been a leader in technology, claiming the company was first to introduce tracking, handheld computers and automated package sorting. The next big movement in tech, Carter went on to say, is migrating all of its systems to the cloud.

"We've been working across this decade to simplify and streamline our technology and systems to create value all along the way by improving productivity, security and reliability," Carter said on the call.

The world's server market will grow in 2022 – but more slowly than in the past – and could dip further, according to analyst firm TrendForce.

Supply chain issues are, unsurprisingly, one reason for the predicted modest growth. Shanghai's COVID lockdowns, for example, mean China's server makers have struggled to keep factories open and to get the parts they need.

The likes of Dell and HPE were hurt by those lockdowns, but TrendForce feels they'll recover.
