Cables Have Been Designed Wrong for 40+ Years

For decades, the standard approach to designing coaxial and twisted-pair cables has had a fundamental flaw: engineers optimized for impedance at a single frequency rather than across the full signal bandwidth. This article examines the physics behind the problem and what a correct design methodology looks like.

Every time you plug in an Ethernet cable, an HDMI cable, or a USB cable, you are using technology whose design principles were largely established in the mid-20th century. Those principles, it turns out, contain a systematic error that engineers have been propagating for over forty years — not out of laziness, but because the error is subtle and the consequences are easy to overlook.

The Standard Model of Cable Design

The dominant paradigm in cable design is to engineer for a target characteristic impedance — typically 50 Ω for RF coaxial cables or 100 Ω for differential pairs used in Ethernet. Textbooks, standards bodies, and manufacturers all work from this single-number specification. The problem begins when you ask: impedance at what frequency?

In practice, a cable specified as "50 Ω" meets that impedance at one test frequency (often 1 MHz or 10 MHz). But signal integrity is determined by how the impedance behaves across the entire frequency spectrum of the transmitted signal. A modern 10 Gb/s signal contains meaningful energy from near DC up through roughly 10 GHz. A cable that is 50 Ω at 10 MHz but drifts to 47 Ω at 5 GHz and 53 Ω at 9 GHz will cause reflections, inter-symbol interference, and effective bandwidth reduction — all of which degrade signal quality in ways that are hard to diagnose.
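The reflections this drift causes can be quantified with the standard voltage reflection coefficient Γ = (Z − Z₀)/(Z + Z₀) and the corresponding return loss. A minimal sketch using the illustrative impedance values above (47 Ω and 53 Ω in a nominally 50 Ω system):

```python
import math

def reflection_coefficient(z_load: float, z_ref: float) -> float:
    """Voltage reflection coefficient at an impedance discontinuity."""
    return (z_load - z_ref) / (z_load + z_ref)

def return_loss_db(z_load: float, z_ref: float) -> float:
    """Return loss in dB (larger is better: less energy reflected)."""
    return -20.0 * math.log10(abs(reflection_coefficient(z_load, z_ref)))

# Illustrative drift values from the text: a nominal 50-ohm cable
# that measures 47 ohms at one frequency and 53 ohms at another.
for z in (47.0, 53.0):
    gamma = abs(reflection_coefficient(z, 50.0))
    print(f"{z:.0f} ohms: |Gamma| = {gamma:.3f}, "
          f"return loss = {return_loss_db(z, 50.0):.1f} dB")
```

A few percent of impedance drift produces a reflection of roughly 3% of the incident voltage at each discontinuity; over a long link with many such mismatches, these reflections accumulate into the inter-symbol interference described above.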

Where the Error Lives

The core of the problem is in how dielectric materials are modeled. Classic cable design treats the dielectric (the insulating material between conductors) as having a fixed, frequency-independent permittivity ε. Real dielectrics — polyethylene, PTFE, PVC, foamed materials — all exhibit dielectric dispersion: their permittivity changes with frequency. This is not a small effect. For common cable insulation materials, ε can shift by several percent between 1 MHz and 10 GHz.

Since characteristic impedance Z₀ = √(L/C) for a low-loss line, and capacitance C is directly proportional to ε, a frequency-dependent ε means a frequency-dependent Z₀. A cable designed by fixing ε at one frequency is, by definition, mismatched at all other frequencies. This is the fundamental design error: treating a frequency-dependent parameter as a constant.
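To see the size of the effect: with L roughly frequency-independent and C ∝ ε, Z₀ scales as 1/√ε, so Z₀(f) = Z₀,ref · √(ε_ref / ε(f)). A sketch with a hypothetical dispersion curve (the permittivity values are illustrative, chosen to match the "several percent" shift described above, not measured data):

```python
import math

def z0_at(eps_f: float, z0_ref: float, eps_ref: float) -> float:
    """Characteristic impedance where the dielectric permittivity is
    eps_f, given the design point (z0_ref, eps_ref).  Assumes C is
    proportional to eps and L is frequency-independent, so
    Z0 = sqrt(L/C) scales as 1/sqrt(eps)."""
    return z0_ref * math.sqrt(eps_ref / eps_f)

# Hypothetical dispersion curve for a polymer dielectric.
dispersion = {"1 MHz": 2.30, "100 MHz": 2.28, "1 GHz": 2.27, "10 GHz": 2.25}

z0_design = 50.0                    # designed to 50 ohms at the 1 MHz test point
eps_design = dispersion["1 MHz"]

for freq, eps in dispersion.items():
    z0 = z0_at(eps, z0_design, eps_design)
    print(f"{freq:>7}: eps = {eps:.2f}, Z0 = {z0:.2f} ohms")
```

Even this modest assumed dispersion (about 2% in ε) moves a 50 Ω design point by roughly half an ohm at the top of the band — and real foamed or lossy dielectrics can drift considerably more.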

Why This Went Unnoticed for So Long

Several factors masked the problem during the decades when it became entrenched:

  • Bandwidth was modest. When cable standards were first codified, signal bandwidths were measured in megahertz, not gigahertz. Over a narrow frequency range, dielectric dispersion is small enough to ignore.
  • Test equipment was limited. Measuring impedance vs. frequency across a multi-GHz span requires a vector network analyzer (VNA). These instruments were expensive and uncommon until relatively recently.
  • Standards perpetuated the approach. Once a standard specifies impedance at a single frequency, manufacturers optimize for compliance with that standard, not for broadband flatness.
  • "Good enough" worked. For many applications, the degraded signal integrity caused by impedance variation is absorbed by error-correction coding, signal conditioning chips, or simply by running at lower-than-theoretical speeds. The cable is blamed for poor performance, but the root cause — impedance dispersion — is rarely investigated.

The Correct Design Methodology

A properly designed cable should maintain flat impedance across the full operating bandwidth. This requires:

  1. Characterize dielectric dispersion first. Before specifying geometry, measure how ε varies with frequency for the intended dielectric material. This measurement should cover the full bandwidth of the target application.
  2. Design geometry to compensate. If ε decreases at higher frequencies (as is typical for most polymer dielectrics), the geometry can be adjusted — for example, by tapering conductor spacing or using a foamed dielectric with controlled density profile — to hold C, and therefore Z₀, constant across frequency.
  3. Validate with broadband VNA measurement. The finished cable should be characterized not at a single frequency but as a continuous impedance-vs-frequency plot. Acceptance criteria should be a maximum impedance deviation across the full band, not a single-point pass/fail.

Practical Consequences

The difference between a conventionally designed cable and a broadband-flat-impedance cable becomes apparent in any high-speed digital or wideband RF application. In data center interconnects running at 25G, 50G, or 100G per lane, impedance variation is a primary source of return loss, which directly limits link reach. In oscilloscopes and test equipment, cable impedance flatness determines measurement accuracy at high frequencies. In 5G base station RF chains, impedance mismatch causes reflected power and nonlinear effects in amplifiers.

Some premium cable manufacturers — particularly those serving the test-and-measurement and aerospace markets — have understood this for years and do produce broadband-flat cables. But these are niche, expensive products. The mainstream cable market, driven by cost and compliance with legacy standards, continues to produce cables designed by the 40-year-old methodology.

What Needs to Change

The fix is not exotic. It requires updating design standards to specify impedance as a function of frequency, not a single value. It requires adding dielectric dispersion measurement to the standard cable characterization workflow. And it requires VNA-based broadband acceptance testing to replace the current single-frequency approach.

None of this is technically difficult. The materials science is well understood, the test equipment is widely available, and the design compensation techniques are straightforward. The barrier is institutional: standards bodies move slowly, manufacturers optimize for existing standards, and engineers are trained on textbooks that perpetuate the single-frequency model.

Until standards catch up, the practical advice for engineers working on high-speed systems is: do not trust a cable's datasheet impedance specification unless it includes a plot of impedance vs. frequency. A cable described only as "100 Ω" tells you almost nothing useful about how it will behave in a 10G or 25G link.

Conclusion

The cable industry has been optimizing for the wrong variable for decades. Characteristic impedance is a useful concept, but treating it as a single number is a simplification that breaks down exactly when it matters most — at the high frequencies that modern communications demand. Correcting this requires nothing more than applying physics we already know, using test methods that already exist, and writing standards that reflect reality. The technology is ready. The question is whether the industry is ready to use it.