Data centres are evolving from generic IT buildings into highly specialised facilities to manage AI workloads
Liquid cooling suits high-density AI racks, where power demands outpace air-cooling limits, says Vijay Sampathkumar, Senior Business Leader
Vijay Sampathkumar, Senior Business Leader

AI workloads are changing the basic design of data centres in India. “Earlier, most data centres were built to support email, business applications, and websites. These workloads mainly used CPUs and required limited power and cooling. Typical rack densities were low, and air cooling was sufficient. AI systems are very different.
They rely on powerful GPUs that consume much more electricity and generate far more heat. A single AI rack today can consume several times the power of a traditional rack. This forces data centres to redesign how power is delivered, how heat is managed, and how space is planned,” says Vijay Sampathkumar, a senior business leader, in an exclusive interaction with Bizz Buzz.
How are AI and GPU-intensive workloads redefining data-centre architecture in India?
AI workloads are changing the basic design of data centres in India. Earlier, most data centres were built to support email, business applications, and websites. These workloads mainly used CPUs and required limited power and cooling. Typical rack densities were low, and air cooling was sufficient.
AI systems are very different. They rely on powerful GPUs that consume much more electricity and generate far more heat. A single AI rack today can consume several times the power of a traditional rack. This forces data centres to redesign how power is delivered, how heat is managed, and how space is planned.
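To make that scale difference concrete, here is a rough, hypothetical comparison; the kilowatt figures below are illustrative assumptions, not numbers cited in the interview.

```
# Back-of-the-envelope comparison of traditional vs AI rack power.
# All figures are illustrative assumptions, not measured values.

TRADITIONAL_RACK_KW = 8      # typical CPU-era enterprise rack budget (assumed)
AI_RACK_KW = 60              # dense GPU training rack (assumed; varies widely)
HALL_POWER_BUDGET_KW = 2000  # hypothetical 2 MW data hall

for label, rack_kw in [("traditional", TRADITIONAL_RACK_KW), ("AI", AI_RACK_KW)]:
    racks = HALL_POWER_BUDGET_KW // rack_kw
    print(f"{label:>11} racks in a 2 MW hall: {racks} "
          f"(~{rack_kw} kW each, nearly all of it returned as heat)")
```

On these assumptions, the same 2 MW hall that once hosted around 250 conventional racks supports only a few dozen AI racks, each dumping several times the heat into a much smaller footprint.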
Power infrastructure now needs to be stronger and closer to the equipment. Cooling is no longer a background system; it has become a central part of data-centre design. Floor layouts, ceiling heights, and structural strength are being adjusted to support heavier and denser equipment.
In India, this shift is happening alongside rapid growth in AI adoption across banking, telecom, healthcare, manufacturing, and government services. Data localisation rules and the push for digital public infrastructure are also driving demand for local AI compute. As a result, data centres are evolving from generic IT buildings into highly specialised facilities built specifically for AI workloads.
As data centres move from 10–15 MW to 50–100 MW AI facilities, what are the biggest design and operational challenges you see?
Scaling from a 10–15 MW data centre to a 50–100 MW AI facility is a major leap. The first and biggest challenge is power availability. Securing such large amounts of reliable electricity requires close coordination with utilities, long approval timelines, and strong backup systems.
The second major challenge is heat management. AI equipment produces intense heat in a small area. At large scales, traditional air cooling becomes inefficient, expensive, and difficult to manage. Cooling systems must be planned carefully from the beginning, not added later.
Operations also become more complex. Large AI data centres need advanced monitoring, predictive maintenance, and well-trained teams. Any failure can impact thousands of GPUs, making reliability critical.
Another key challenge is sustainability. Large AI facilities attract attention for their energy use, carbon footprint, and water consumption. Operators must balance fast growth with environmental responsibility, regulatory compliance, and long-term operating costs.
Overall, these large facilities behave more like industrial plants than traditional IT sites, requiring a new mindset in both design and operations.
What should enterprises prioritise today when planning AI-ready infrastructure for the next 5–10 years?
When planning AI-ready infrastructure, enterprises need to think beyond immediate workloads and short-term cost optimisation.
AI computing is evolving at an unprecedented pace, with newer GPUs and accelerators demanding significantly higher power and generating far more heat than traditional IT equipment. Infrastructure decisions made today must therefore support not just current needs, but the much higher densities and energy demands expected over the next decade.
The first and most critical priority is future-proofing power and cooling capacity. Even if AI deployments are limited today, data centres should be designed to accommodate substantially higher rack densities in the future.
Retrofitting power and cooling systems later is expensive, disruptive, and often inefficient. Planning early for higher power availability and liquid-cooling readiness allows enterprises to scale smoothly as AI adoption accelerates.
Second, flexibility and modularity should be built into the infrastructure from day one. Modular power, cooling, and IT systems allow enterprises to expand AI capacity in phases, aligning capital expenditure with actual demand.
This approach reduces risk, shortens deployment timelines, and provides the agility required in a rapidly changing AI landscape. Modular designs also make it easier to adopt new technologies as they emerge.
Third, sustainability must be a core design principle, not an afterthought. Energy efficiency, renewable power integration, and reduced environmental impact will become increasingly important due to regulatory pressure, stakeholder expectations, and rising energy costs.
AI workloads can significantly increase a data centre’s carbon footprint if not managed properly. Designing energy-efficient infrastructure today helps control long-term operating costs while supporting corporate sustainability goals.
Finally, enterprises should recognise that they do not have to build everything themselves. Partnering with experienced data-centre operators, cooling specialists, and technology providers reduces execution risk and ensures access to proven solutions and best practices. These partnerships enable enterprises to stay aligned with global innovations while focusing on their core business objectives.
Why is liquid cooling becoming essential for high-density AI data centres, and where does it outperform traditional air cooling?
Traditional air cooling is approaching its physical and practical limits. As AI GPUs become more powerful, the heat they generate often exceeds what air can efficiently remove. Moving massive volumes of air requires large fans, significant floor space, and substantial energy consumption—yet still may not provide consistent thermal performance at very high densities.
Liquid cooling addresses these limitations by removing heat directly at the source. Liquids have a much higher heat-carrying capacity than air, allowing them to absorb and transfer heat far more efficiently. This makes liquid cooling particularly well-suited for high-density AI racks, where power levels can exceed what air cooling can reliably support.
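A simple heat-balance sketch illustrates the point. Using standard textbook properties for air and water, and an assumed 60 kW rack with a 10 °C coolant temperature rise (both figures are assumptions for illustration), the volume of coolant that must move through the rack differs by orders of magnitude:

```
# Coolant flow needed to remove a fixed heat load: Q = m_dot * cp * dT.
# Fluid properties are standard textbook values; the 60 kW load and
# 10 K temperature rise are illustrative assumptions.

HEAT_LOAD_KW = 60.0   # assumed per-rack heat load
DELTA_T_K = 10.0      # assumed coolant temperature rise

fluids = {
    # name: (density kg/m^3, specific heat kJ/(kg*K))
    "air":   (1.2, 1.005),
    "water": (1000.0, 4.18),
}

for name, (rho, cp) in fluids.items():
    mass_flow = HEAT_LOAD_KW / (cp * DELTA_T_K)   # kg/s
    vol_flow = mass_flow / rho                    # m^3/s
    print(f"{name:>5}: {mass_flow:5.2f} kg/s -> {vol_flow * 1000:7.1f} L/s")
```

On these assumptions, removing the same 60 kW takes roughly 5,000 litres of air per second but under two litres of water per second, which is why liquid loops can sit directly on the chips with far smaller fans, ducting, and energy overhead.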
Compared to air cooling, liquid cooling enables much higher rack densities without compromising performance or reliability. It delivers more stable temperature control, which reduces thermal stress on components and extends hardware lifespan. At the same time, it significantly lowers the energy required for cooling, improving overall efficiency.
Importantly, liquid cooling is no longer experimental. It is already widely deployed in hyperscale and AI-focused data centres globally. In India, as enterprises and cloud providers deploy larger and denser AI clusters, liquid cooling is rapidly shifting from a niche option to a practical and increasingly essential solution.
How does liquid cooling improve PUE, energy consumption, space utilisation, and sustainability metrics in real-world deployments?
Liquid cooling delivers clear, measurable benefits across efficiency and sustainability metrics. By reducing dependence on large air-handling units, chillers, and high-powered fans, it significantly lowers cooling energy consumption, directly improving Power Usage Effectiveness (PUE).
This reduction translates into lower operating costs, especially in high-density AI environments where cooling can consume a substantial share of total power. Liquid cooling also enables higher rack densities, allowing more compute capacity within the same footprint and reducing the need for larger facilities.
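To illustrate the PUE mechanics with a hedged, hypothetical example (the energy figures are assumptions, not data from any deployment): PUE is total facility energy divided by IT equipment energy, so trimming the cooling overhead pushes the ratio toward the ideal value of 1.0.

```
# PUE = total facility energy / IT equipment energy.
# Illustrative annual figures for a hypothetical 10 MW IT load.

IT_ENERGY_MWH = 10_000      # assumed IT equipment consumption per year
OTHER_OVERHEAD_MWH = 500    # lighting, distribution losses, etc. (assumed)

def pue(cooling_mwh: float) -> float:
    total = IT_ENERGY_MWH + cooling_mwh + OTHER_OVERHEAD_MWH
    return total / IT_ENERGY_MWH

air_cooled = pue(cooling_mwh=4_000)      # assumed air-cooling overhead
liquid_cooled = pue(cooling_mwh=1_500)   # assumed liquid-cooling overhead

print(f"air-cooled PUE:    {air_cooled:.2f}")     # ~1.45
print(f"liquid-cooled PUE: {liquid_cooled:.2f}")  # ~1.20
```

In this sketch, cutting annual cooling overhead from 4,000 MWh to 1,500 MWh moves PUE from about 1.45 to about 1.20 without changing the IT load at all.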
From a sustainability standpoint, it supports higher operating temperatures, enables waste-heat reuse, and reduces reliance on water-intensive cooling, an important advantage in water-stressed regions like India.

