What’s Unfolding For India’s Data Center Ambitions
Currently, with artificial intelligence (AI) workloads increasing, data centers require significantly higher power densities, GPU-heavy infrastructure, advanced cooling systems, and ultra-low-latency connectivity. Indian players are gearing up to meet these new infrastructure demands.
The story of India’s data centers is, at its core, the story of a nation racing to match its data generation with infrastructure requirements, bridging a gap that has long left it underpenetrated compared to global peers.
Data centers are essential infrastructure for 21st-century economies, enabling cloud services, communications, digital commerce, and the deployment of artificial intelligence (AI) tools and services.
Traditionally, many legacy data center facilities were designed for conventional enterprise IT loads, which typically required far lower power. Now, with AI workloads increasing, these facilities need significantly higher power densities, GPU-heavy infrastructure, advanced cooling systems, and ultra-low-latency connectivity. The key lies in a balanced approach.
What Happens to Traditional Data Centers
Selective retrofitting will play an important role, especially in facilities with strong structural design and scalable power architecture. These can be upgraded with liquid cooling, enhanced power distribution, high-speed networking fabrics, and AI-optimized racks. However, not every traditional data center will be economically viable for full AI conversion.
“Going forward, the ecosystem will require specialized segments. AI-first hyperscale campuses will handle large-scale model training and high-performance computing, while traditional facilities will support hybrid cloud, colocation, and edge deployments. In India, this balanced model is essential to support AI growth alongside data localization, digital public infrastructure, and enterprise transformation,” said Sunil Gupta, co-founder, CEO and MD, Yotta Data Services.
AI is significantly expanding the performance demands placed on data centers, but it does not replace traditional setups. The future therefore involves coexistence rather than replacement, with operators building portfolios that support both traditional IT and AI-focused workloads, tailored to customer needs.
Retrofitting will play a role, but only to a certain extent. Some existing facilities can be upgraded with better cooling, modular power systems, and higher-density zones to support AI inference or mid-range computing. However, large-scale AI setups often need purpose-built infrastructure from the start. “At CtrlS, the strategy has been to upgrade viable assets while also investing in new AI-ready campuses featuring high-density design and advanced cooling, allowing customers to transition smoothly as workloads change. The wider industry is adopting a similar hybrid approach, upgrading where possible and building new capacity for the future,” said Anil Nama, CIO, CtrlS Datacenters.
In recent years, enterprises have been shifting from captive server rooms to colocation and hybrid models. The colocation-versus-captive capacity split is projected to move from 60:40 to 70:30 over the next five years. AI adoption is accelerating this shift, as traditional setups lack the power and cooling required for AI workloads. “Liquid cooling methods provide a more energy-efficient and sustainable solution compared to traditional air cooling. Retrofit strategies include practical cooling upgrades such as CRAC, CRAH, and HACA systems to improve performance. High-density data centers require specialized infrastructure, such as enhanced power delivery and advanced cooling systems, which significantly elevates capital expenditure during construction or retrofitting,” said Sharad Agarwal, CEO, Sify Data Centers.
With Great Demand Comes Bigger Expansion
With rapid expansion in the data center space, CtrlS’ strategy focuses on building large-scale, future-ready infrastructure while expanding regional access. A major upcoming project is the data center park at Chandanvelly near Hyderabad, planned as a multi-building, AI-ready campus with high-density infrastructure and dedicated power to support future workloads as demand increases. In addition to hyperscale projects, the company is growing its distributed presence through edge facilities in emerging digital markets.
“Locations like Patna and Lucknow are already active, with others such as Ahmedabad (GIFT City), Bhubaneswar, and Guwahati under development. The goal is to develop a balanced national platform that supports both large cloud deployments and fast, low-latency enterprise applications as India’s digital economy advances,” said Anil Nama of CtrlS.
At present, Sify’s existing facilities are designed to support evolving workloads, including AI-driven requirements, and continue to operate with scalable, future-ready infrastructure. As of March 2025, it had a designed IT capacity of 188.04 MW across 14 operational data centers in six major cities, including Mumbai, Chennai, and Bengaluru.
“Going forward, our focus is on expansion rather than retrofitting. We are developing 11 new data centres in strategic locations to address India’s growing demand. The upcoming centres are being built to support AI-ready infrastructure and edge deployments, ensuring we stay aligned with the next phase of digital growth,” said Sharad Agarwal of Sify.
With India’s ambition to be a global datacenter hub, Yotta is scaling aggressively to meet both domestic and global AI demand. At the India ImpactAI Summit 2026, it signalled a major expansion of GPU capacity, building on the nearly 10,000 GPUs already deployed to date.
“We will be investing over $2 billion to deploy 20,736 liquid-cooled NVIDIA Blackwell Ultra GPUs, creating one of Asia’s largest AI computing superclusters at our D2 data center in Noida, which is expected to be operational by August 2026. Under a multi-year engagement with NVIDIA, we’re also hosting one of the largest DGX Cloud clusters in the Asia-Pacific region, reflecting strong global demand for compute anchored in India,” said Gupta, CEO, Yotta.
“Beyond this, we have outlined plans to invest an additional $4 billion to deploy over 40,000 more GPUs in phases. A portion of these GPUs will be used by the government as part of its India AI Mission, while the rest may be used by global model builders and hyperscalers who are keen to double down their presence in the country,” explained Gupta.
Additionally, Yotta has proactively invested in robust AI-ready infrastructure to support this scale of growth. Its Navi Mumbai campus is designed to scale up to 2 GW, while the Greater Noida facility can expand up to 250 MW, giving it the headroom to rapidly deploy capacity as demand rises. This ensures the ability to host over a million GPUs and seamlessly accommodate both domestic and global AI workloads.
The aim is to scale high-performance compute, deepen partnerships, and democratise access to high-performance GPUs, positioning India as a trusted, sovereign global AI infrastructure hub.
Infrastructure CAPEX and Contingency Planning
Memory and high-bandwidth components are a pressure point for the AI infrastructure industry right now. For enterprises building AI data centers, this can influence both capex and contingency planning, because delays or price volatility in critical parts can shift deployment schedules and total project cost.
Organizations might need to account for price volatility, longer hardware lead times, and evolving technology roadmaps during budgeting. This could result in higher safety margins and more conservative financial approaches in the short term. At the same time, it promotes prioritizing workloads, enhancing utilization efficiency, and planning for lifecycle management to ensure that capital is invested where it yields the most value.
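As a rough illustration of the budgeting logic described above, the safety margin can be thought of as the base build cost plus a buffer for component price swings plus the carrying cost of capital while long-lead hardware is on order. The sketch below is purely hypothetical; every figure is an invented assumption, not data from any company quoted in this article.

```python
# Hypothetical sketch of contingency-adjusted capex budgeting.
# All figures are illustrative assumptions, not data from the article.

def contingency_capex(base_cost, price_volatility, lead_time_buffer_months,
                      monthly_carry_rate):
    """Add a safety margin for component price swings and the carrying
    cost of capital tied up while hardware is on order."""
    price_margin = base_cost * price_volatility          # e.g. GPU/memory volatility
    carry_cost = base_cost * monthly_carry_rate * lead_time_buffer_months
    return base_cost + price_margin + carry_cost

# Example: $100M base build, 15% assumed price volatility on GPUs/memory,
# 6-month lead-time buffer, 0.5% monthly cost of capital.
budget = contingency_capex(100_000_000, 0.15, 6, 0.005)
print(f"${budget:,.0f}")  # $118,000,000
```

A buyer facing longer lead times or sharper volatility would simply see those parameters, and hence the safety margin, grow, which is the "more conservative financial approach" the paragraph above describes.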
“From an infrastructure provider’s perspective, proactive planning is crucial compared to reactive approaches. Planning strategies focus on multi-vendor ecosystems, forecasting future capacity needs, and designing facilities that support various hardware generations with minimal rework. This approach minimizes the risk of component disruptions and offers customers greater flexibility as technology advances. The industry recognizes that supply-chain uncertainties are a lasting challenge rather than a temporary issue, making resilience vital for both operations and financial stability,” said Nama.
Mindful of the global supply situation, Yotta has a three-part strategy in place. First, it designs and scales using validated NVIDIA reference architectures, which standardize configurations and reduce integration risk, making expansion repeatable and faster when components arrive. Second, it pairs phased capacity deployment with longer-term procurement planning, so capacity comes online in predictable tranches instead of hinging on a single large procurement cycle. Third, it invests ahead in scalable power, cooling, and rack-ready infrastructure, so compute and memory can be commissioned quickly without reworking the core facility design.
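The tranche idea can be pictured with a small scheduling sketch. The quarters and GPU counts below are invented for illustration only and are not Yotta's actual deployment plan; the point is simply that cumulative capacity grows in predictable steps rather than arriving in one large batch.

```python
# Hypothetical illustration of phased GPU capacity deployment.
# Tranche sizes and quarters are invented, not any provider's real plan.

tranches = [
    ("Q1", 4_000),    # initial tranche comes online
    ("Q2", 6_000),    # second procurement cycle lands
    ("Q3", 10_000),   # largest tranche once facility headroom is ready
]

cumulative = 0
for quarter, gpus in tranches:
    cumulative += gpus
    print(f"{quarter}: +{gpus:,} GPUs, {cumulative:,} cumulative")
```

If one tranche slips, only that step of the curve moves; a single monolithic procurement would instead put the entire deployment at risk.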
“This structured approach is also why large-scale sovereign AI infrastructure becomes strategically important. From an Indian enterprise perspective, this is precisely why sovereign AI cloud infrastructure matters. Instead of each enterprise attempting to navigate chip shortages independently, large scale providers like us aggregate demand, manage procurement risk, and absorb supply side complexity. While global shortages may create temporary pressure, our capital planning, partner alignment, and expansion strategy ensure continuity,” Gupta explained.
AI infrastructure depends on specialized GPUs and accelerators, shaping power density and design needs. “Our methodology emphasizes modular and zonal design strategies, facilitating phased expansion and effective segregation of workloads. This allows for scalable, cost-effective management of changing AI hardware supplies. Our edge data centers enable real-time processing and can quickly adapt to AI and IoT demands, supporting the evolving needs of edge computing,” added Agarwal of Sify.
Asia Pacific’s data center opportunity is significant. Addressing the equally real challenges will set the sector on sustainable long-term foundations, unlocking its potential and driving digital prosperity.