Cisco's new AI Readiness Index points to serious challenges in infrastructure capability, data governance, and talent availability.
While 95% of businesses are aware that AI will increase infrastructure workloads, only 17% have networks that are flexible enough to handle the complex requirements of AI. Given that disconnect, it’s too early to see widespread deployment of AI at scale, despite the hype.
That’s one of the key takeaways from Cisco’s inaugural AI Readiness Index, a survey of 8,000 global companies aimed at measuring corporate interest in and ability to utilize AI technologies.
“Just like cloud kind of changed every industry that it touched, I think that AI is going to change every industry that it touches,” said Jonathan Davidson, executive vice president and general manager of Cisco’s networking business.
Interest in AI has surged over the past 12 months with the availability of large language models from OpenAI and others. LLMs have grown from millions of data points to billions, and far more can be done with that data as the models continue to grow, Davidson said.
Industry watchers see huge potential for AI technologies – IDC, for example, says enterprise spending on generative AI services, software and infrastructure will skyrocket over the next four years, jumping from $16 billion this year to $143 billion in 2027. However, the vast majority of companies aren’t ready for it. Just 14% of organizations surveyed in Cisco’s readiness index said they are fully prepared to deploy and leverage AI-powered technologies.
Network readiness for AI
On the networking front, Cisco found that most current enterprise networks are not equipped for AI workloads. Businesses understand that AI will increase infrastructure demands, but only 17% have networks flexible enough to handle that complexity.
“23% of companies have limited or no scalability at all when it comes to meeting new AI challenges within their current IT infrastructures,” Cisco stated. “To accommodate AI’s increased power and computing demands, more than three-quarters of companies will require further data center graphics processing units (GPUs) to support current and future AI workloads. In addition, 30% say the latency and throughput of their network is not optimal or sub-optimal, and 48% agree that they need further improvements on this front to cater to future needs.”
At the heart of most AI networks will be Ethernet, since high-bandwidth Ethernet infrastructure is essential to facilitate quick data transfer between AI workloads, Cisco stated. “Implementing software controls like Priority Flow Control (PFC) and Explicit Congestion Notification (ECN) in the Ethernet network guarantees uninterrupted data delivery, especially for latency-sensitive AI workloads.”
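ECN itself is a simple mechanism: a sender flags its packets as ECN-capable, and a congested switch can mark them rather than drop them. As a small host-side illustration (this is not Cisco's implementation; the UDP socket and codepoint below are purely for demonstration), a Linux application can request ECN-capable marking via the IP TOS byte:

```python
import socket

# ECN lives in the two low-order bits of the IP TOS byte; ECT(0),
# binary 10, declares the packet "ECN-capable transport", letting a
# congested router mark it instead of dropping it.
ECT0 = 0b10

# Illustrative UDP socket; on Linux, UDP sockets keep ECN bits set via
# IP_TOS (TCP instead negotiates ECN itself and masks these bits).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT0)

print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
```

In a real AI fabric this marking is typically handled by the NICs and RDMA stack rather than application code, but the bit being set is the same one the switches' ECN logic reacts to.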
For AI readiness, the Cisco research recommends that enterprises build in automation tools for network configuration in order to optimize data transfer between AI workloads.
“Automation reduces manual intervention, improves efficiency, and allows the infrastructure to dynamically adapt to the demands of AI workloads,” the researchers said. “The combination of these will determine whether a company is I/O rich or I/O poor, and that in turn will be the differentiator between those who succeed in fully leveraging AI, and those who don’t.”
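The kind of automation the researchers describe can be as simple as rendering device configuration from a single template instead of hand-editing every switch. A minimal sketch in Python follows; the switch names, interface names, and QoS policy name are all hypothetical, not taken from Cisco's blueprint:

```python
# Render per-interface QoS configuration from one template, so a policy
# change is made once and applied everywhere. All names are illustrative.
TEMPLATE = (
    "interface {iface}\n"
    "  priority-flow-control mode on\n"
    "  service-policy type qos input AI-CLUSTER-QOS"
)

def render_configs(fabric):
    """Return a {switch_name: config_text} map covering every listed interface."""
    return {
        switch: "\n".join(TEMPLATE.format(iface=iface) for iface in ifaces)
        for switch, ifaces in fabric.items()
    }

# A hypothetical two-leaf fabric.
fabric = {
    "leaf-101": ["Ethernet1/1", "Ethernet1/2"],
    "leaf-102": ["Ethernet1/1"],
}

configs = render_configs(fabric)
print(configs["leaf-101"])
```

Production tooling adds inventory, validation, and push/rollback on top of this, but the core win is the same: one source of truth, no manual per-device edits.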
Cisco’s AI moves
Cisco has a variety of efforts underway to help with the networking and security challenges, Davidson noted.
For example, Cisco recently unveiled its Data Center Networking Blueprint for AI/ML Applications, which defines how organizations can use existing data center Ethernet networks to support AI workloads.
A core component of the data center AI blueprint is Cisco’s Nexus 9000 data center switches, which support up to 25.6Tbps of bandwidth per ASIC and “have the hardware and software capabilities available today to provide the right latency, congestion management mechanisms, and telemetry to meet the requirements of AI/ML applications,” Cisco stated.
In addition, the vendor recently announced four new Cisco Validated Designs for AI blueprints from Red Hat, Nvidia, OpenAI, and Cloudera, focused on virtualized and containerized environments as well as converged and hyperconverged infrastructure options. Cisco already had validated AI designs on its menu from AMD, Intel, and Nutanix, as well as on its FlashStack and FlexPod platforms.
Cisco is building Ansible-based automation playbooks on top of these models that customers can use with Cisco’s Intersight cloud-based management and orchestration system to automatically inject their own data into the models and build out repositories that can be used in their infrastructure, including at the edge of the network and in the data center, Cisco stated.
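Cisco hasn't published those playbooks here, but a playbook of this general shape conveys how such automation is expressed in Ansible. The inventory group, task, and commands below are assumptions for illustration, using the publicly documented cisco.nxos collection:

```yaml
# Hypothetical playbook sketch; the "nexus_leaf" host group and the QoS
# commands are illustrative, not Cisco's shipped playbooks.
- name: Apply AI-fabric QoS settings to Nexus leaf switches
  hosts: nexus_leaf
  gather_facts: false
  tasks:
    - name: Enable PFC and attach the QoS policy on an uplink
      cisco.nxos.nxos_config:
        parents: interface Ethernet1/1
        lines:
          - priority-flow-control mode on
          - service-policy type qos input AI-CLUSTER-QOS
```

Intersight can then schedule and track runs like this across sites, which is what makes the edge-to-data-center rollout Cisco describes practical.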
Cisco has also added an AI-based audio codec to its Webex collaboration platform. The codec uses generative AI to tackle the long-standing problem of spotty audio quality; Cisco says it can deliver clear audio even over poor network connections.
AI adds security challenges
On the security side, Cisco found that 97% of companies have some protection in place for the data used in AI models, and 68% are able to detect attacks on those models, leaving roughly a third unable to do so.
“Organizations are also not fully prepared to guard against the cybersecurity threats that come with AI adoption,” Cisco stated. “As higher volumes of data, including confidential and sensitive data, is processed by AI, the incentive for malicious actors to launch attacks against these systems becomes greater while the stakes for organizations get higher.”
Further, with one quarter of leaders saying their organizations have limited awareness or are unaware of security threats specific to AI workloads, more education is needed for organizations and their employees to work with AI securely, Cisco stated.
The encouraging part is that 77% of organizations are at least implementing advanced encryption or end-to-end encryption to protect the data utilized in AI models, Cisco found.
Some other interesting tidbits from Cisco’s AI readiness index include:
Machine learning has the highest rate of deployment at 35%.
Meanwhile, predictive and generative AI have the highest rates of in-progress deployment, at 41% and 40% respectively, Cisco found.
AI deployment in an organization increases power consumption.
Complex computations and data processing tasks inherent to AI models demand more energy from the underlying hardware, especially GPUs and data centers. Companies should think about deploying tools and technologies that can provide higher network bandwidth, better performance and scale, and consume less power, Cisco warned.
However, less than half (44%) of respondents say they are highly prepared, with infrastructure dedicated to optimizing power for AI deployments; the other 55% say they are only somewhat prepared or not prepared at all. The adoption of technologies that deliver more output while consuming less power will become a competitive differentiator as AI adoption increases, Cisco stated.
Skills gaps and a lack of resources remain challenging.
Close to half of respondents (47%) said their organizations are moderately well resourced, with an almost even split between those feeling very well resourced (29%) and those who are under-resourced or unsure (24%). Respondents at companies with more than 1,500 employees are slightly more likely to feel under-resourced, and media and communications, education, and natural resources are the industries with the largest gaps in this area. In addition, 37% of respondents ranked comprehension of and proficiency with AI tools and technologies as their primary skills gap.
Effective data analytics tools go hand in hand with AI applications and overall data strategy.
More than two-thirds of global respondents (67%) rated their analytics tools as capable of handling complex AI-related data sets. However, 74% of respondents said their analytics tools are not fully integrated with the data sources and AI platforms they use, Cisco stated. In fact, 31% of respondents said their tools were, at best, only somewhat integrated (27%) or not integrated at all (4%).