When Packets Can't Wait: Comparing Protocols for Delay-Sensitive Data

In Diagnosing Video Stuttering Over TCP, we built a diagnostic framework—identifying zero-window events (receiver overwhelmed) versus retransmits (network problems). In The TCP Jitter Cliff, we discovered that throughput collapses unpredictably when jitter exceeds ~20% of RTT, and the chaos zone makes diagnosis treacherous.

The conclusion from both posts is clear: TCP is inappropriate for delay-sensitive streaming. Its guaranteed-delivery model creates unbounded latency during loss recovery. When a packet is lost, TCP will wait—potentially forever—rather than skip ahead. For a live video frame or audio sample, arriving late is the same as not arriving at all.

But “don’t use TCP” isn’t a complete answer. What should you use? The protocol landscape for delay-sensitive data is vast—spanning media streaming, industrial automation, robotics, financial messaging, and IoT. Each protocol answers the fundamental question differently.

Protocol Reference: Transport for Data with Deadlines

This is a quick-reference list of protocols for applications where data has a deadline, and late delivery is failure—whether that deadline is 10μs (servo loop), 20ms (audio buffer), or 300ms (video call).

For taxonomy, analysis, and context, see When Packets Can’t Wait. This page is just the inventory—a living reference that grows as protocols become relevant to JitterTrap development.

The Jitter Cliff: When TCP Goes Chaotic

In Part 1, we used “video over TCP” as a stress test for TCP’s behavior—examining how zero-window events, retransmits, and the masking effect reveal what’s happening inside a struggling connection.

But during those experiments, I discovered that TCP throughput degraded rapidly as jitter worsened. While I knew that packet loss would destroy TCP throughput, I hadn’t expected the jitter-induced cliff.

Past a certain jitter threshold (roughly 20% of RTT), throughput collapses so severely that measurements become unreliable: repeated test runs under identical conditions can vary by over 100%. This “chaos zone” makes diagnosis treacherous: the same network conditions can produce wildly different results depending on when you measure.

This post explores TCP’s behavior under jitter and loss, comparing CUBIC and BBR. It’s common knowledge that TCP is inappropriate for delay-sensitive streaming data, and this post will try to demonstrate how and why.

Diagnosing Video Stuttering Over TCP: A JitterTrap Investigation

Your security camera feed stutters. Your video call over the corporate VPN freezes. The question isn’t whether something is wrong—that’s obvious. The question is what is wrong, because the fix depends entirely on the diagnosis.

Is the problem the sender, the network, or the receiver? These require fundamentally different interventions. Telling someone to “upgrade their internet connection” when the real issue is their overloaded NVR is worse than useless—it wastes time and money while the actual problem persists.

Sender-side problems—where the source isn’t transmitting at the expected rate—are straightforward to detect: compare actual throughput to expected throughput. The harder question is distinguishing network problems from receiver problems when data is being sent. TCP’s built-in feedback mechanisms give us the answer.
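
As a rough sketch of that idea (my illustration, not the post’s tooling), you can count Wireshark’s zero-window and retransmission analysis flags in a packet capture; the filename here is a placeholder:

```python
# Rough sketch, not the post's tooling: count receiver-pressure signals
# (zero-window) versus network-loss signals (retransmissions) in a capture.
# Assumes tshark is installed; "capture.pcap" is a stand-in filename.
import subprocess

def count_matching(pcap: str, display_filter: str) -> int:
    """Count packets in `pcap` matching a Wireshark display filter."""
    result = subprocess.run(
        ["tshark", "-r", pcap, "-Y", display_filter],
        capture_output=True, text=True, check=True,
    )
    return sum(1 for line in result.stdout.splitlines() if line.strip())

pcap = "capture.pcap"
zero_windows = count_matching(pcap, "tcp.analysis.zero_window")
retransmits = count_matching(pcap, "tcp.analysis.retransmission")

# Many zero-window events point at the receiver; many retransmissions
# point at the network. (The posts add caveats, e.g. the masking effect.)
print(f"zero-window events: {zero_windows}")
print(f"retransmissions:    {retransmits}")
```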

UDP is the natural transport for real-time video—it tolerates loss gracefully and avoids head-of-line blocking. But video often ends up traveling over TCP whether we like it or not. VPN tunnels may encapsulate everything in TCP. Security cameras fall back to RTSP interleaved mode (RTP-over-TCP) when UDP is blocked. Some equipment simply doesn’t offer a choice.

The research question driving this investigation: Can we identify reliable TCP metrics that distinguish network problems from receiver problems?

Through controlled experiments, I found the answer is yes—with important caveats. This post builds a complete diagnostic framework covering all three problem types, with the experiments focused on the harder network-vs-receiver distinction. Part 2 will explore what happens when TCP goes chaotic.

Linux Network Configuration: A Decade Later

In 2014 I wrote about the state of Linux network configuration, lamenting the proliferation of netlink libraries and how most projects hadn’t progressed past shell scripting and iproute2. I concluded that “there is a need for a good netlink library for one of the popular scripting languages.”

A decade later, that library exists. More importantly, the ecosystem has matured enough that every major language has a credible netlink option, and production systems are using them.

To compare them, I’ll use the same example throughout: create a bridge, a network namespace, and a veth pair connecting them.
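
To give a flavour of that running example, here is a minimal sketch in Python using pyroute2, one such netlink library; the interface and namespace names are my placeholders, and it needs root (CAP_NET_ADMIN):

```python
# Minimal sketch (assumptions: pyroute2 installed, run as root).
# Creates a bridge, a network namespace, and a veth pair linking them.
from pyroute2 import IPRoute, NetNS, netns

with IPRoute() as ipr:
    # Bridge in the root namespace
    ipr.link('add', ifname='br0', kind='bridge')
    br = ipr.link_lookup(ifname='br0')[0]

    # veth pair: veth0 stays in the root namespace, veth1 moves out
    ipr.link('add', ifname='veth0', kind='veth', peer='veth1')
    v0 = ipr.link_lookup(ifname='veth0')[0]
    v1 = ipr.link_lookup(ifname='veth1')[0]

    # Attach veth0 to the bridge and bring both ends up
    ipr.link('set', index=v0, master=br)
    ipr.link('set', index=br, state='up')
    ipr.link('set', index=v0, state='up')

    # Create the namespace and move the peer into it
    netns.create('demo')
    ipr.link('set', index=v1, net_ns_fd='demo')

# Configure the peer from inside the namespace
with NetNS('demo') as ns:
    v1 = ns.link_lookup(ifname='veth1')[0]
    ns.addr('add', index=v1, address='10.0.0.2', prefixlen=24)
    ns.link('set', index=v1, state='up')
```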

The Curious Case of the Disappearing Multicast Packet

Many consumer and industrial devices—from home security cameras to the HDMI-over-IP extenders I investigated in a previous post—are designed as simple appliances. They often have hard-coded, unroutable IP addresses (e.g., 192.168.1.100) and expect to live on a simple, isolated network. This becomes a major problem when you need to get their multicast video or data streams from that isolated segment to users on a main LAN. My goal was to solve this with a standard Linux server, creating a simple, high-performance multicast router without a dedicated, expensive hardware box.

Can a standard Linux server act as a simple, kernel-native multicast router for devices with hard-coded, unroutable IP addresses? This article chronicles an investigation into combining nftables SNAT with direct control of the Multicast Forwarding Cache (MFC).

The DSP Lens on Real-World Events, Part 1: A New Dimension for Your Data

The Universal Challenge: From Events to Insight

Every field that deals with streams of events over time shares a common challenge. A factory manager tracking items on a conveyor belt, a data scientist analyzing user clicks, and a network engineer monitoring packets are all trying to turn a series of discrete, often chaotic, events into meaningful insight. The tools may differ, but the fundamental problem is the same: how do we perceive the true nature of a system through the lens of the events it generates?

My own journey into this problem began with a network analysis tool, JitterTrap. I was seeing things that shouldn’t exist: a prominent, slow-moving wave in my network throughput graph that my knowledge of the underlying system said couldn’t be real. This “ghost” in the machine forced me to look for answers in an unexpected place: the field of Digital Signal Processing (DSP).

This three-part series shares the lessons from that journey. Part 1 introduces the core DSP mental model as a powerful, universal framework. Part 2 lays out a theoretical framework for analysis, and Part 3 applies that framework to perform a rigorous, quantitative analysis of a real-world measurement problem.

The DSP Lens on Real-World Events, Part 2: A Framework for Analysis

Introduction: The Rosetta Stone

In Part 1, we introduced the “ghost in the machine”—an aliased signal created by the very act of measuring a stream of discrete events. We established that any time-bucketed analysis is a form of filtering. But to truly understand our measurements and build a trustworthy instrument, we need a more formal framework.

This article is that framework—a “Rosetta Stone” to connect the different concepts at play. Before we can analyze our system quantitatively in Part 3, we must first understand the precise relationship between our sample rate, the frequencies we want to observe, and the characteristics of the filter we are unknowingly using.
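
As a toy illustration of that point (mine, not the series’), bucketing an event rate that wobbles at 7 Hz into 10 buckets per second reports a 3 Hz wave, exactly the kind of ghost Part 1 describes:

```python
# Toy illustration (not from the series): time-bucketing acts as a boxcar
# filter followed by sampling at the bucket rate, so anything above the
# bucket-rate Nyquist limit folds back as an alias. All numbers are made up.
import numpy as np

true_freq = 7.0       # Hz: real periodicity in the event rate
bucket_rate = 10.0    # buckets per second, i.e. our effective sample rate
dt = 1e-4             # "continuous" time resolution for the simulation

t = np.arange(0.0, 2.0, dt)
rate = 100 + 20 * np.sin(2 * np.pi * true_freq * t)   # events per second

# Integrate-and-dump: average the rate over each bucket interval
per_bucket = round(1.0 / (bucket_rate * dt))
usable = len(rate) // per_bucket * per_bucket
bucketed = rate[:usable].reshape(-1, per_bucket).mean(axis=1)

# The peak we measure is the alias, not the true frequency
spectrum = np.abs(np.fft.rfft(bucketed - bucketed.mean()))
freqs = np.fft.rfftfreq(len(bucketed), d=1.0 / bucket_rate)
print(f"true frequency: {true_freq} Hz, Nyquist: {bucket_rate / 2} Hz")
print(f"measured peak:  {freqs[spectrum.argmax()]:.1f} Hz")   # ~3 Hz alias
```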
