AI and machine-learning techniques are imperative in a zero-trust environment that depends on analysis of the behavior of every device, person, or system using the network.

In a zero-trust environment, trust is not static. Behavior has to be visible for trust to persist.

One of the most important differences between old thinking on networking and the zero-trust mindset is the inversion of thinking on trust. Pre-ZT, the assumption was this: Once you get on the network, you are allowed to use it any way you want until something extraordinary happens that forces IT to shut you down and remove your access. You are assumed broadly trustworthy; positively confirming that status is very rare, and having it revoked is rarer still.

Post-ZT, the assumption is flipped: Use of the network is entirely contingent on good behavior, and you are strictly limited in what you can communicate with, and how. You can do only what the organization allows in advance, and any significant misbehavior will automatically result in your being pushed off the network.

The “automatically” part is important. A ZT architecture includes, as an integral component, a closed loop between ongoing behavior on the network and ongoing permission to use it (as manifest in the trust map that drives the environment’s policy engine). That is, ZT by definition requires feedback, automated and preferably real-time, from observable network behavior to enforced network permissions.

Spotting ‘significant misbehavior’ requires deep visibility

So, a robust zero-trust implementation requires seeing data on how every entity on the network is using (or trying to use) the network. This translates to logging information from network infrastructure at every level, from the core switches all the way “out” to the edge switches in the branch networks and all the way “in” to the virtual switches in the data center. Of course, it’s not just switches but also routers, application delivery controllers and load balancers, firewalls and VPNs, and of course SD-WAN nodes. All should be reporting on entity behaviors to some central system.

Beyond that, in any host-based aspects of the architecture (such as a software-defined perimeter deployment), the agents running on network entities (PCs, virtual servers, containers, SDP gateways, whatever) will also be supplying event streams to some central database for analysis. Ultimately, myriad streams of behavioral data must be brought together, filtered, massaged, and correlated as needed to feed the core decision: Has that node (or the user or system on it) gone rogue?

Data volumes and event diversity will drive use of AI for analysis

Just looking at that list of data streams is exhausting (notwithstanding that it is not exhaustive). In a network of any size, it has been more than a decade since any of those data streams was something an unaided human could keep track of on even a daily basis, never mind in near real time. And the first several generations of aid brought to bear, exemplified by legacy SIEM applications, are proving inadequate to the scale and scope of this kind of analysis in a modern environment. The continuing evolution of the threat universe to include more multi-channel, slow-then-fast attack models, coupled with the explosion in the number of applications, devices, VMs, and containers, makes old-style SIEMs steadily less able to make the normal-versus-anomalous evaluation at the heart of what ZT needs.
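To make the closed loop described above concrete, the minimal sketch below shows how per-entity anomaly scores, produced by whatever analytics layer is in place, could automatically drive revocation in a policy engine’s trust map. The BehaviorEvent and PolicyEngine classes, the revoke_access operation, and the 0.9 threshold are hypothetical names and values chosen purely for illustration, not features of any particular product.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical event record: one observation of an entity's network behavior.
@dataclass
class BehaviorEvent:
    entity_id: str        # device, user, or workload identity (not just an IP address)
    anomaly_score: float  # 0.0 = clearly normal, 1.0 = clearly anomalous

@dataclass
class PolicyEngine:
    """Illustrative stand-in for the ZT policy engine and its trust map."""
    trust_map: Dict[str, bool] = field(default_factory=dict)  # entity -> allowed on the network?

    def revoke_access(self, entity_id: str) -> None:
        # A real deployment would push new policy to enforcement points
        # (switches, firewalls, SDP gateways); here we just flip the flag.
        self.trust_map[entity_id] = False
        print(f"access revoked for {entity_id}")

REVOKE_THRESHOLD = 0.9  # illustrative cut-off for "significant misbehavior"

def feedback_loop(events: List[BehaviorEvent], engine: PolicyEngine) -> None:
    """Closed loop: observed behavior continuously drives permission to use the network."""
    for event in events:
        engine.trust_map.setdefault(event.entity_id, True)
        if event.anomaly_score >= REVOKE_THRESHOLD:
            engine.revoke_access(event.entity_id)

if __name__ == "__main__":
    engine = PolicyEngine()
    stream = [
        BehaviorEvent("laptop-042", 0.12),
        BehaviorEvent("mri-scanner-07", 0.95),  # crosses the threshold
    ]
    feedback_loop(stream, engine)
    print(engine.trust_map)  # {'laptop-042': True, 'mri-scanner-07': False}
```

In practice, the anomaly scores feeding such a loop would come from the machine-learning-driven analytics discussed next, not from a hard-coded threshold.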
Zero-trust environments’ need for ongoing behavioral threat analytics (BTA) can only be met through the application of AI techniques, usually machine learning. BTA systems have to be able to track network entities without relying solely on IP addresses and TCP or UDP port numbers, and to have some sense of the different classes of entities on the network – e.g., human, software, hardware – to guide their assessment of normal and their thresholds for anomaly. For example, a BTA system should be able to flag as anomalous anything that would require a human or a physical device such as a laptop or an MRI machine to be in two places at once, or in two physically distant places in very short succession (a check sketched below).

So, at the core of every ZT environment lies the need for deep visibility into the behavior of every device, person, or system using the network. Without that visibility, ZT environments cannot achieve the dynamic, conditional trust maps that underlie their promise to radically reduce risk.
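The two-places-at-once check mentioned above is one of the simplest examples of a class-aware BTA rule. A minimal sketch of it follows; the Sighting record, the class labels, and the 900 km/h travel-speed threshold are assumptions made for this example only, not drawn from any particular BTA product.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

# Hypothetical sighting record: where and when an entity was observed on the network.
@dataclass
class Sighting:
    entity_id: str
    entity_class: str   # e.g. "human", "hardware", "software"
    lat: float
    lon: float
    timestamp: float    # seconds since epoch

def distance_km(a: Sighting, b: Sighting) -> float:
    """Great-circle distance between two sightings (haversine formula)."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

MAX_PLAUSIBLE_KMH = 900  # roughly airliner speed; anything faster is "impossible travel"

def is_impossible_travel(prev: Sighting, curr: Sighting) -> bool:
    """Flag a human or physical device seen too far apart in too little time."""
    if curr.entity_class == "software":
        return False  # a workload can legitimately appear in two data centers at once
    hours = max((curr.timestamp - prev.timestamp) / 3600, 1e-6)
    return distance_km(prev, curr) / hours > MAX_PLAUSIBLE_KMH

if __name__ == "__main__":
    first = Sighting("mri-scanner-07", "hardware", 40.71, -74.01, 0)      # New York
    second = Sighting("mri-scanner-07", "hardware", 34.05, -118.24, 600)  # Los Angeles, 10 minutes later
    print(is_impossible_travel(first, second))  # True -> feed into the trust map
```

A real BTA system would learn baselines and thresholds per entity class rather than hard-coding them, and its verdicts would feed the same trust map and policy engine as every other behavioral signal.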