Explainable AI might overcome the distrust that enterprise network engineers feel toward the AI/ML management tools that could streamline network operations. IT organizations applying artificial intelligence and machine learning (AI/ML) to network management are finding that AI/ML can make mistakes, yet most believe AI-driven network management will improve their network operations. To realize those benefits, network managers must find a way to trust these AI solutions despite their foibles. Explainable AI tools could hold the key.

A survey finds network engineers are skeptical

In an Enterprise Management Associates (EMA) survey of 250 IT professionals who use AI/ML technology for network management, 96% said those solutions have produced false or mistaken insights and recommendations. Nearly 65% described these mistakes as somewhat to very rare, according to the recent EMA report “AI-Driven Networks: Leveling Up Network Management.”

Overall, 44% of respondents said they have strong trust in their AI-driven network-management tools, and another 42% slightly trust these tools. But members of network-engineering teams reported more skepticism than other groups (IT tool engineers, cloud engineers, or members of CIO suites), suggesting that the people with the deepest networking expertise were the least convinced. In fact, 20% of respondents cited cultural resistance and distrust from the network team as one of the biggest roadblocks to successful use of AI-driven networking, and respondents who work within a network-engineering team were twice as likely (40%) to cite this challenge.

Given the prevalence of errors and the lukewarm acceptance from the most experienced networking experts, how are organizations building trust in these solutions?

What is explainable AI, and how can it help?

Explainable AI is an academic concept embraced by a growing number of providers of commercial AI solutions.
It’s a subdiscipline of AI research that emphasizes the development of tools that spell out how AI/ML technology makes decisions and discovers insights. Researchers argue that explainable AI tools pave the way for human acceptance of AI technology, and they can also address concerns about ethics and compliance.

EMA’s research validated this notion. More than 50% of research participants said explainable AI tools are very important to building trust in the AI/ML technology they apply to network management, and another 41% said they were somewhat important. Majorities of participants pointed to three explainable AI tools and techniques that best help with building trust:

Visualizations of how insights were discovered (72%): Some vendors embed visual elements that guide humans through the paths AI/ML algorithms take to develop insights. These include decision trees, branching visual elements that display how the technology works with and interprets network data.

Natural language explanations (66%): These explanations can be static phrases pinned to the outputs of an AI/ML tool, or they can take the form of a chatbot or virtual assistant that provides a conversational interface. Users with varying levels of technical expertise can understand them.

Probability scores (57%): Some AI/ML solutions present insights without any context about how confident the system is in its own conclusions. A probability score takes a different tack, pairing each insight or recommendation with a score that conveys the system’s confidence in its output. This helps the user decide whether to act on the information, take a wait-and-see approach, or ignore it altogether.

Respondents who reported the most overall success with AI-driven networking solutions were more likely to see value in all three of these capabilities. There may be other ways to build trust in AI-driven networking, but explainable AI may be one of the most effective and efficient.
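The probability-score approach described above amounts to a simple thresholding rule: each AI-generated insight carries a confidence value, and the operator (or the tool itself) maps that value to an action. A minimal sketch in Python follows; the `Insight` structure, the threshold values, and the action names are illustrative assumptions, not any vendor’s actual API.

```python
from dataclasses import dataclass


@dataclass
class Insight:
    """An AI-generated network insight paired with a confidence score in [0.0, 1.0].

    Hypothetical structure for illustration; real tools expose richer metadata.
    """
    message: str
    confidence: float


def triage(insight: Insight, act_above: float = 0.9, watch_above: float = 0.6) -> str:
    """Map a confidence score to an operator action.

    Thresholds are illustrative; a real tool would let operators tune them.
    """
    if insight.confidence >= act_above:
        return "act"            # high confidence: apply the recommendation
    if insight.confidence >= watch_above:
        return "wait-and-see"   # medium confidence: monitor before acting
    return "ignore"             # low confidence: discard or flag for review


# Example: a recommendation scored at 0.72 lands in the wait-and-see bucket
print(triage(Insight("Reroute traffic around access switch sw-07", 0.72)))
```

The value of exposing the score is precisely that the human keeps this triage decision: rather than the tool silently acting on a low-confidence insight, the operator can tune the thresholds to match their own tolerance for AI mistakes.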
It offers some transparency into AI/ML systems that might otherwise be opaque. When evaluating AI-driven networking, IT buyers should ask vendors how they help operators develop trust in these systems with explainable AI.