Network visibility has become as much a budget problem as a technical one. IT budgets stay flat or shrink, but organizations still need comprehensive network and security monitoring. The result is a patchwork of monitoring tools that leaves critical blind spots, pushing teams into constant firefighting instead of getting ahead of problems.
To understand how vendors are tackling these visibility gaps, The Tolly Group recently spoke with Chris Labac, Vice President and General Manager, Network Performance and Threat Solutions, VIAVI, along with Yury German, Security Product Line Manager. The conversation revealed how NetOps and SecOps convergence is evolving to handle modern hybrid environments and blurring the lines between network and security teams.
The Budget Reality of Network Visibility
Visibility challenges are not new, but they have intensified. "I don't know that there's necessarily any more or less blind spots in the network. I think it's always been budgetary dependent to some degree," Labac explains. Small organizations with tight budgets have always used flow-based tools and infrastructure performance management. Larger enterprises could afford packet-level monitoring.
Today's market offers alternatives. Application performance monitoring tools like Dynatrace and observability platforms like Datadog gained popularity by leveraging existing infrastructure. Digital experience monitoring focuses on endpoints and synthetic testing. These tools do what they are designed to do, but they miss things that traditional network performance monitoring catches.
"They typically don't have as granular a view of what the network provides," Labac notes. When organizations focus on applications instead of network infrastructure, they miss details that matter when troubleshooting complex performance problems or tracking down security incidents.
Logs Versus Packets: The Security Perspective
From a security angle, the gap between log-based and packet-based visibility gets even bigger. "Numerous companies settle for the mantra 'it's good enough': 'We collect logs, consider them sufficient, and rely on them to solve our problems,'" German explains. For smaller organizations, logs may suffice for basic security monitoring.
But logs have problems that packets do not. "It's all about the source of the data you have, and what data you're missing versus packets, which give you everything," German notes. Worse yet, attackers can change logs. "Reflecting on my experience, the top rule for staying undetected is to change or delete the logs. Adversaries often try to tamper with logs to keep their actions hidden."
So, the real question becomes: if logs are incomplete, delayed, and potentially manipulated—and full packet capture isn't always practical at scale—what level of network telemetry actually provides enough context to investigate, validate, and reconstruct security events with confidence?
This is not a hypothetical concern. Recent security vulnerabilities in firewalls and other security infrastructure have shown how quickly traditional telemetry sources can become unreliable when the tools generating logs are compromised. Logs and flow data remain essential for scalable visibility, historical context, and early threat detection—but their effectiveness depends on the completeness and integrity of the data. As organizations move toward NetSecOps convergence, where network and security teams rely on shared telemetry and workflows, these gaps become harder to ignore.
The gap becomes clear during incident response. In a converged NetSecOps model, teams align quickly around a common understanding of what occurred. Logs and flow can help scope an incident and build timelines, but investigations ultimately require confidence in what actually happened. When teams need to validate alerts, reconstruct attacker activity, or support compliance and reporting, higher-fidelity evidence is critical. Packet data provides immutable ground truth that attackers cannot easily alter, enabling definitive root-cause analysis, accurate impact assessment, and coordinated response across NetOps and SecOps.
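To make the "ground truth" point concrete, here is a toy sketch (purely illustrative, not a real forensics tool or VIAVI code) of how retained packet metadata can be replayed into an incident timeline even when host logs have been wiped. All field names and values are hypothetical:

```python
# Toy sketch: reconstruct an incident timeline from stored packet
# metadata. Even if an attacker deleted host logs, the capture store
# still holds every connection. Fields here are illustrative only.

packets = [
    {"ts": 1700000120, "src": "10.0.0.5", "dst": "203.0.113.7", "port": 443},
    {"ts": 1700000060, "src": "203.0.113.7", "dst": "10.0.0.5", "port": 22},
    {"ts": 1700000300, "src": "10.0.0.5", "dst": "10.0.0.20", "port": 445},
]

def timeline(packets, suspect_ip):
    """Return all traffic touching the suspect host, in time order."""
    hits = [p for p in packets if suspect_ip in (p["src"], p["dst"])]
    return sorted(hits, key=lambda p: p["ts"])

events = timeline(packets, "10.0.0.5")
first = events[0]  # earliest contact: the inbound SSH connection
```

The point of the sketch is the ordering: because the capture store is an independent record, the earliest event (the inbound connection on port 22) surfaces even if nothing about it survives in the host's own logs.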
The Observer Platform Architecture
VIAVI's approach centers on the Observer platform, which has kept the same basic architecture since 1994. Instead of treating data sources as separate silos, the design treats them as inputs to one unified system.
"We've architected it to where our packet capture appliances at whatever speeds they support, whatever data retention levels they support, to us are really a data source," Labac explains. The Observer GigaStor product line does pure packet capture, with storage ranging from 24 terabytes up to 5 petabytes. The flow solution goes beyond basic IPFIX: it works as a telemetry probe, pulling log files from firewalls, load balancers, and other sources, along with SNMP, user identity, session logs, device info, and cloud metadata, into a single high-fidelity record that adds context.
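The enrichment pattern described above can be sketched in a few lines. This is not Observer's actual schema or code; the record fields and lookup sources are hypothetical, but they show the general idea of merging a flow record with contextual metadata keyed on an IP address:

```python
# Hypothetical sketch of telemetry enrichment: merging a basic flow
# record with contextual metadata (user identity, device info, cloud
# tags) keyed on source IP. Field names are illustrative, not the
# Observer schema.

def enrich_flow(flow, identity_db, device_db, cloud_db):
    """Return a single high-fidelity record combining flow + context."""
    src = flow["src_ip"]
    enriched = dict(flow)  # keep the original flow fields
    enriched["user"] = identity_db.get(src, "unknown")
    enriched["device"] = device_db.get(src, {})
    enriched["cloud"] = cloud_db.get(src, {})
    return enriched

flow = {"src_ip": "10.0.0.5", "dst_ip": "8.8.8.8",
        "dst_port": 443, "bytes": 15320}
identity_db = {"10.0.0.5": "alice"}
device_db = {"10.0.0.5": {"os": "Windows 11", "host": "alice-laptop"}}
cloud_db = {}  # on-prem host: no cloud metadata available

record = enrich_flow(flow, identity_db, device_db, cloud_db)
# record now carries the flow plus user, device, and cloud context
```

The value of the single enriched record is that a later query ("what did alice's laptop send to 8.8.8.8?") needs no joins across separate log, flow, and identity stores.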
This all feeds into Observer Apex, the UI layer where enriched metadata and integrated threat intelligence come together into unified views of network performance, application behavior, and user experience. Because of how the platform is built, VIAVI can add capabilities through software updates instead of making customers replace hardware.
Organizations that have invested in packet capture infrastructure can extend its value through software rather than facing forklift upgrades. The Observer platform integrates with existing infrastructure to fill visibility gaps.
Breaking Down the Operations Silos
Network and security operations are converging, at least in the tools they rely on. "The convergence of network and security operations, in terms of organizational structure, is still a way off," Labac acknowledges. "But when I talk about NetSecOps convergence, I'm really referring to the toolsets teams use to solve their problems."
In practice, this means NetOps and SecOps increasingly depend on shared telemetry to investigate performance issues and security incidents side by side. Rather than operating in silos with separate data sources, converged toolsets allow both teams to work from the same evidence, whether that's logs, flow, packets, or metadata.
The organizational structure may lag, but the operational reality is forcing change. Companies are figuring out that keeping network and security tools separate slows investigations and wastes time. "When we talk to customers, we ask how that model actually works when they're trying to solve a network or security issue," Labac notes. "They tell us they're constantly going back and forth between teams just to pinpoint where the problem really is."
This friction costs organizations time and money. Troubleshooting sessions devolve into finger-pointing between departments while incident response gets delayed. The organizational silos may persist, but the tools are beginning to recognize that the underlying data needs to serve both teams simultaneously.
This pushes demand for platforms that work for both teams. Network operators need security context. Security teams need network forensics. When data resides in one place, problems get solved faster.
VIAVI's answer to this challenge is Observer Threat Forensics, launched to bridge the gap between endpoint detection and network-layer evidence. The solution correlates threat intelligence powered by CrowdStrike® with packet and flow data in real time, automatically enriching alerts with Indicators of Compromise (IOCs) and adversary intelligence for enhanced context. Each alert links directly to forensic evidence, eliminating the manual correlation work that typically delays investigations. The platform also enables retrospective analysis back to initial compromise, giving teams visibility into threats that occurred prior to detection or alerting.
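As a rough illustration of this alert-enrichment pattern (not VIAVI's or CrowdStrike's actual API; all names and fields are invented), the following sketch matches an alert against an IOC feed and attaches the flow records that serve as supporting evidence:

```python
# Hypothetical sketch of IOC correlation: an alert is matched against a
# threat-intel feed, and any flow records involving the indicator are
# attached as forensic evidence. Names and fields are illustrative.

def correlate(alert, ioc_feed, flow_store):
    """Enrich an alert with threat intel and linked network evidence."""
    ip = alert["remote_ip"]
    intel = ioc_feed.get(ip)  # known-bad indicator, or None
    evidence = [f for f in flow_store
                if ip in (f["src_ip"], f["dst_ip"])]
    return {**alert,
            "ioc": intel,          # adversary context, if any
            "evidence": evidence}  # linked network records

ioc_feed = {"203.0.113.7": {"actor": "EXAMPLE-BEAR", "type": "C2"}}
flow_store = [
    {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.7", "bytes": 900},
    {"src_ip": "10.0.0.9", "dst_ip": "198.51.100.2", "bytes": 40},
]
alert = {"id": 1, "remote_ip": "203.0.113.7"}

enriched = correlate(alert, ioc_feed, flow_store)
# enriched["ioc"] carries adversary context; enriched["evidence"]
# holds the matching flow record, so no manual lookup is needed
```

The sketch captures the workflow benefit the article describes: the analyst receives one object containing the alert, the adversary context, and the network evidence, instead of pivoting between separate consoles to assemble them by hand.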
The Path Forward for Network and Security Visibility
VIAVI's platform approach puts packet capture, flow data, and telemetry under one management layer, giving organizations comprehensive visibility without needing separate tools for everything.
The broader shift happening in network visibility reflects changing operational realities. The lines between network operations, security operations, and application performance management are blurring not because of organizational mandates, but because the problems these teams face increasingly require data from all three domains. Tools that acknowledge this reality and provide unified data access have an advantage over point solutions that require extensive integration work.
For organizations evaluating network observability options, the question is no longer whether to invest in comprehensive visibility, but rather how to implement solutions that serve both operational and security needs without fragmenting workflows.
Key Takeaways
Budget constraints drive organizations toward "good enough" monitoring that creates visibility gaps
Log data provides a useful starting point, but enriched flow combined with packet capture delivers more complete context and forensic evidence for security investigations
Unified platforms that combine packet capture, flow data, and metadata reduce operational silos
Integration between network visibility and endpoint security platforms fills coverage gaps for both teams
Learn More
For detailed information about VIAVI's Observer platform and network observability solutions, visit: https://www.viavisolutions.com/en-us/enterprise/solutions/network-performance-monitoring or connect with Chris Labac on LinkedIn.
