The growth of consumer-generated data and the rising demand for AI-based services have driven an increase in AI-centric data centres. The evolution of machine learning and artificial intelligence technologies has created a need for cognitive computing across various verticals, such as enterprise, industrial and consumer. Increasing adoption of cloud machine learning platforms, escalating demand for AI hardware in high-performance computing data centres, a rising focus on parallel computing in AI data centres, and the growing volume of data generated across industries are the major factors contributing to the growth of the AI infrastructure market.
According to Sachin Garg, associate vice president, semiconductor and electronics at MarketsandMarkets, “rapid developments in analytics and higher adoption of advanced technologies in the security and surveillance applications, such as mobile surveillance and smart home, have resulted in the development of high-performance, AI-optimised processors, such as GPUs and ASICs, with higher memory bandwidth and computational capability than that of traditional processors, i.e., central processing units (CPUs). The global AI infrastructure market is projected to grow from USD 14.6 billion in 2019 to USD 50.6 billion by 2025, ascending at a CAGR of 23.1 per cent.”
How GPUs help AI models
Machine learning and deep learning models are designed to understand a dataset and act on new data. Training an AI model involves feeding the learning algorithm training data to learn from. “Training is very expensive and is best accelerated with GPUs. While using even a small dataset, the time taken to go through all of the training samples can be reduced when using GPU compared to a CPU,” said Sachin. Once trained, the model is deployed on a device for inference, to classify, recognise and process new inputs. Inference analyses real-world data and produces a prediction. It is computationally less intensive than training; unlike training, it does not include a backward pass to compute the error and update weights. Inference is usually the production phase, where the deployed model makes predictions on real-world data.
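The training/inference distinction can be seen in a minimal sketch. This toy example (plain NumPy, fitting a one-parameter linear model rather than a real deep network) shows that each training step includes a backward pass that computes gradients and updates weights, whereas inference is a single forward pass with no gradient computation. The data and hyperparameters are illustrative assumptions, not from the article; production training would run on GPUs via a framework such as PyTorch or TensorFlow.

```python
import numpy as np

# Toy dataset following a known linear relationship y = 3x + 0.5.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 3.0 * X + 0.5

# --- Training: forward pass + backward pass + weight update, repeated ---
w, b, lr = 0.0, 0.0, 0.1
for epoch in range(200):
    # Forward pass: compute predictions and the error.
    pred = w * X + b
    err = pred - y
    # Backward pass (training only): gradients of the mean squared error.
    grad_w = 2 * np.mean(err * X)
    grad_b = 2 * np.mean(err)
    # Update weights using the gradients.
    w -= lr * grad_w
    b -= lr * grad_b

# --- Inference: a single forward pass on new data, no gradients, no updates ---
new_x = np.array([[0.25]])
prediction = w * new_x + b
```

Because inference skips the backward pass and the repeated weight updates, it is far cheaper per input than training, which is why trained models can be served on less powerful devices.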
The increasing demand for cloud deployment
With businesses becoming increasingly dynamic and products distributed widely, the technology that powers data centres needs to be deployed according to the convenience and requirements of end-users. Large organisations predominantly use cloud or hybrid deployments, whereas government organisations and industries such as healthcare, pharmaceuticals, aerospace and defence opt for on-premises deployment. AI solution providers are focusing on developing robust cloud-based solutions for their clients, as many organisations have migrated from on-premises systems to either private or public clouds. The dominance of cloud technology and the rising support and maintenance costs of on-premises solutions are likely to boost the adoption of hosted solutions. Moreover, the cloud provides additional flexibility for business operations and real-time data accessibility to companies. The cloud platform also offers improved predictive capability, as this deployment model enables faster alarm notification in critical situations.
Organisations can build their own infrastructure to support AI platforms and solutions, or purchase it from cloud service providers, who serve many organisations from a shared infrastructure. Cloud services are hosted in data centres that can be accessed by organisations looking for efficiency and economies of scale. “The number of data centre providers and cloud companies is likely to increase, owing to the high efficiency and economies of scale offered by cloud computing. Cloud service providers offer services to several customers from a shared infrastructure (i.e., equipment for operations, networking, data storage and hardware) and help companies to save their IT infrastructure cost,” said Anand Shanker, senior analyst, semiconductor and electronics at MarketsandMarkets.
NVIDIA, followed by Intel, leads among companies building processors for AI infrastructure. Currently, AI relies on GPU acceleration for training and inference. However, an influx of startups is developing AI processors that could be more efficient for real-time AI processing than GPU and CPU platforms. “Startups are developing high-performance processors, in terms of throughput and power efficiency, suitable for demanding AI applications in the industry, including private and cloud data centres,” said Sachin.