Michael Cooney
Senior Editor

Telemetry steps into the enterprise-networking spotlight

News Analysis
18 Jan 2023 | 4 mins
Application Performance Management | Cisco Systems | Cloud Management

With the ability to analyze ever-expanding telemetry data generated by network infrastructure and applications, open-source projects and AI/ML are helping to create predictive networks.

Expect to hear a lot about telemetry this year as its use gains steam in open-source projects and vendors’ observability software.

While telemetry has been used to monitor network and application activity for years, it has historically been siloed in specific use cases. With the advent of open-source application development, along with ML- and AI-based systems, its use is expected to expand significantly.

“Traditional monitoring and/or siloed visibility does not work in a world driven by hybrid or cloud-native deployments, as application components have become smaller, more distributed, and shorter-lived,” said Carlos Pereira, Cisco Fellow and chief architect in its Strategy, Incubation & Applications group. “Now it’s more about using that telemetry data to watch over multiple operational domains so you can track experience in real time.”

Telemetry has evolved over time to gather metrics, events, logs, and traces, so that more data can be aggregated, for example, from all the CPUs on an AWS instance, Pereira said. This data can be gathered from the routers, switches, servers, services, containers, storage, and applications that make up the enterprise.
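
To make that concrete, a single metric sample in such a stream might look roughly like the sketch below; the field and attribute names here are hypothetical and chosen only for illustration.

```python
# Hypothetical shape of one telemetry metric sample, e.g. per-CPU utilization
# collected from an AWS instance; the field names are illustrative, not a standard.
cpu_sample = {
    "name": "system.cpu.utilization",
    "value": 0.72,                       # 72% busy over the collection interval
    "timestamp": "2023-01-18T14:05:00Z",
    "attributes": {
        "cloud.provider": "aws",
        "host.id": "i-0abc123def456",    # which instance the sample came from
        "cpu": "3",                      # which CPU core on that instance
    },
}

# Events, logs, and traces carry different payloads (a state change, a text
# record, a span with start and end times), but they follow the same pattern:
# a timestamp plus attributes that identify where the data originated.
```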

OpenTelemetry

Open source, in particular the OpenTelemetry project, is helping drive telemetry use. Under the auspices of the Cloud Native Computing Foundation, OpenTelemetry is being developed by contributors from AWS, Azure, Cisco, F5, Google Cloud, and VMware, among others.

The group defines OpenTelemetry as a collection of tools, APIs, and SDKs used to instrument, generate, collect, and export telemetry data such as metrics, logs, and traces, to help analyze software performance and behavior.
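
As a rough illustration of what that looks like in practice, the sketch below instruments one unit of work with the OpenTelemetry Python SDK and exports the resulting span to the console; the span name and attribute are illustrative, and a real deployment would typically export to a collector or backend instead.

```python
# Minimal sketch: instrument one operation with the OpenTelemetry Python SDK
# and print the resulting span to the console.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Register a tracer provider that batches spans and writes them to stdout.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("example.instrumentation")

# Wrap a unit of work in a span so the trace records its timing and attributes.
with tracer.start_as_current_span("handle-request") as span:
    span.set_attribute("http.route", "/checkout")   # illustrative attribute
    # ... application logic ...
```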

OpenTelemetry is behind Cisco’s cloud-native application-monitoring service, AppDynamics Cloud, and the company plans to use it in its Full Stack Observability architecture, which it says will offer applications and services that correlate telemetry across multiple domains. 

“In the year ahead, there will be a significant shift toward the open-source ability to grab information from multiple domains that were previously siloed, and then develop modern applications that rely on distributed tracing embedded in the actual experience,” wrote Liz Centoni, Cisco’s chief strategy officer and general manager, applications, in her “Tech Trends and Predictions That Will Shape 2023” blog. Distributed tracing is a telemetry technique that follows individual requests as they move through the components of a distributed application.
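
To illustrate the distributed-tracing piece, the sketch below shows one way an instrumented service can pass its trace context to the next service over HTTP via the W3C Trace Context traceparent header; the service URL and span names are hypothetical, and the SDK setup is kept to the bare minimum.

```python
# Minimal sketch of how distributed tracing follows one request across services:
# the active span's context is injected into outgoing HTTP headers (the W3C
# Trace Context "traceparent" header) so the next service can attach its own
# spans to the same trace. Names and URLs are illustrative.
from opentelemetry import trace
from opentelemetry.propagate import inject
from opentelemetry.sdk.trace import TracerProvider

trace.set_tracer_provider(TracerProvider())   # minimal SDK setup, no exporter
tracer = trace.get_tracer("example.client")

with tracer.start_as_current_span("call-inventory-service"):
    headers = {}
    inject(headers)   # adds e.g. {"traceparent": "00-<trace-id>-<span-id>-01"}
    print(headers)
    # http_client.get("https://inventory.internal/items", headers=headers)
```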

“OpenTelemetry will become the de facto standard behind how IT teams consume data to enable observability over the IT stack from the network and infrastructure to applications and the internet,” Centoni stated.

IETF network telemetry framework

OpenTelemetry isn’t the only project looking to make telemetry data more useful to the masses. The Internet Engineering Task Force has published RFC 9232, an informational document that defines a network telemetry framework organizations can use to gain network insight and to facilitate efficient, automated network management.

“Network visibility is the ability of management tools to see the state and behavior of a network, which is essential for successful network operation,” the IETF RFC states.   

“Network telemetry revolves around network data that can help provide insights about the current state of the network, including network devices, forwarding, control, and management planes; can be generated and obtained through a variety of techniques, including but not limited to network instrumentation and measurements; and can be processed for purposes ranging from service assurance to network security using a wide variety of data analytical techniques,” the RFC states.
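
As a loose sketch of the framework’s idea, rather than anything defined in the RFC itself, the example below gathers hypothetical readings from the management, control, and forwarding planes into a single stream for downstream analysis; the poll_* functions, fields, and device names are placeholders.

```python
# Sketch: telemetry is collected from several planes of the network and
# funneled into one analysis pipeline. All functions and values are
# hypothetical placeholders, not part of RFC 9232.

def poll_management_plane(device):
    """e.g. device health and interface counters exposed via SNMP/YANG."""
    return {"plane": "management", "device": device, "cpu_pct": 41}

def poll_control_plane(device):
    """e.g. BGP session state or routing convergence events."""
    return {"plane": "control", "device": device, "bgp_sessions_up": 8}

def poll_forwarding_plane(device):
    """e.g. per-flow statistics, drops, and latency measurements."""
    return {"plane": "forwarding", "device": device, "drops_per_sec": 0}

def collect(devices):
    # One unified stream, regardless of which plane the data came from,
    # ready for service assurance or security analytics downstream.
    for device in devices:
        for poll in (poll_management_plane, poll_control_plane, poll_forwarding_plane):
            yield poll(device)

for record in collect(["edge-router-1", "core-switch-2"]):
    print(record)
```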

The growing use of analytics is also driving increased telemetry use.

“In the year ahead, the network will become more experience-centric with increasing capabilities to predict end-user experience issues and provide problem-solving options. Companies will increasingly access predictive technologies in integrated, easy-to-use SaaS offers. This represents an important step toward a future where connectivity will be powered by self-healing networks that can learn, predict, and plan,” Centoni stated.  “Predictive networks will be powered by the same predictive analytics that are gathered from myriad telemetry sources.”

In Cisco’s case, its predictive analytics engine predicts network issues to prevent problems before they happen. For Juniper Networks, its Mist AI gathers telemetry data from wired and wireless infrastructure to manage those environments and to predict problems. Juniper’s Apstra software uses telemetry in a real-time repository of configuration and validation information to ensure the network is doing what IT teams want it to do.

Telemetry will play an essential role in the development of AI-based applications that manage networks and applications. “It would be difficult, if not mathematically impossible, for organizations to manually correlate telemetry from network, security, storage, and applications data sets,” Cisco’s Pereira said. “Ultimately, we want a single data space for that information where we can then apply machine-learning models and AI engines on top of that to create services such as rapid anomaly detection or root cause analysis across the enterprise.”
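
As a toy illustration of the kind of service Pereira describes, and not Cisco’s implementation, the sketch below flags anomalous samples in a telemetry series by comparing each value against a rolling baseline; the data, window size, and threshold are made up.

```python
# Toy anomaly detection over a telemetry series: flag samples that deviate
# sharply from the recent baseline. Data, window, and threshold are illustrative.
from statistics import mean, stdev

def find_anomalies(samples, window=10, threshold=3.0):
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # Flag the sample if it sits more than `threshold` standard
        # deviations away from the rolling baseline mean.
        if sigma and abs(samples[i] - mu) / sigma > threshold:
            anomalies.append((i, samples[i]))
    return anomalies

# e.g. interface latency in milliseconds, with one sudden spike
latency_ms = [2.1, 2.0, 2.2, 2.1, 2.3, 2.0, 2.2, 2.1, 2.2, 2.0, 2.1, 9.8, 2.2]
print(find_anomalies(latency_ms))   # -> [(11, 9.8)]
```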
