SI simulation quality relies on VNA measurement accuracy

As signal transmission speeds increase, signal integrity (SI) engineers increasingly have to make measurements using microwave vector network analyzers (VNAs) to attain the frequency coverage necessary for high accuracy. There are many challenges associated with such tests. One of the most obvious is keeping pace with the ever-increasing rate at which high-speed data is transmitted. SI engineers working on commercial designs face another large factor: striking a balance between cost and performance. The better the measurement accuracy, the greater confidence engineers will have in making cost/performance trade-off decisions.

This is particularly true when designing backplanes and interconnect devices. To effectively locate design defects in prototypes, ensure measurement and 3-D EM simulation correlation, and properly account for test fixtures and the de-embedding that comes with them, engineers must evaluate key performance variables of a VNA, such as frequency range, time-domain performance, and calibration and de-embedding techniques.


Maximum frequency range

When considering the measurement frequency range, the upper limit is usually the first concern for high-speed designs. However, the lower frequency limit of an S-parameter characterization of a backplane or other interconnect also affects the quality of the data collected and any subsequent modeling.

Ideally, measurements should extend to the 5th harmonic of the NRZ clock frequency when modeling devices. For a 28-Gb/s data rate, this means a 70-GHz stop frequency for an S-parameter sweep. Attenuation of harmonics by a backplane or other interconnect will distort the signal, hence the need to characterize the frequency response of transmission media to higher frequencies.

Figure 1. Harmonic Content of 28 Gb/s NRZ Clock Signal

This can be seen in Figure 1. The clock frequency of an NRZ signal is half the bit rate; the figure shows the spectrum of the 14-GHz clock associated with a 28-Gb/s signal. The 3rd and 5th harmonics are readily apparent. To understand how a backplane or other interconnect will affect the bit stream, SI engineers need to measure how the backplane attenuates the harmonics. Ideally, measurements up to the 5th harmonic would be made (70 GHz in this example), but at an absolute minimum, up to the 3rd harmonic (42 GHz) should be measured.
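The arithmetic above can be sketched in a few lines (a minimal helper, not from the article, for computing the odd harmonics that dominate an NRZ spectrum):

```python
def nrz_harmonics(bit_rate_gbps, count=3):
    """Odd harmonics (GHz) of the NRZ clock; the clock is half the bit rate."""
    clock_ghz = bit_rate_gbps / 2.0
    return [clock_ghz * n for n in (1, 3, 5, 7, 9)[:count]]

# For a 28-Gb/s stream: 14-GHz clock, 42-GHz 3rd harmonic, 70-GHz 5th harmonic
print(nrz_harmonics(28))  # [14.0, 42.0, 70.0]
```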



Causality is another way to think about the requirement for the upper measurement frequency. When S-parameter data is transformed into the time domain for use in further simulation, causality errors can arise: responses appear to occur in negative time, before the stimulus. This can lead to convergence problems in the simulations and inaccuracies in modeling larger-scale subsystems.

Causality is simply a statement that the output from an electrical network should occur after the stimulus. Lack of causality, where the output appears to occur prior to the stimulus, can be observed when poor S-parameter data is transformed into the time domain for use in circuit or other simulations. Noncausal S-parameter data can be the reason for unstable simulations that may not converge to a solution or may lead to inaccurate results. One cause of poor S-parameter data is insufficient higher frequency data. For ideal causality, S-parameter data would be available from DC to infinity—not a very practical situation, at least for the upper limit. 

Massaging the frequency-domain data can reduce these problems; however, it risks distorting the actual physical behavior of the DUT. Therefore, it often is safer and more accurate to use as wide a frequency range as possible, up to the point where repeatability and related distortions (for example, the DUT starting to radiate efficiently, making the measurement very dependent on the surroundings) obscure the results. The desire for wider frequency range data becomes more compelling as faster and more complex transients are studied in the higher-level simulations.
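A rough causality sanity check can be run on any measured S-parameter sweep. The sketch below uses synthetic data (an assumed ideal 100-ps delay line on a 10-MHz harmonic grid, not a measurement from the article): transform to the time domain and see how much impulse-response energy lands at negative time, which wraps to the end of the IFFT output.

```python
import numpy as np

# Hypothetical S21 of a lossless 100-ps delay line, 10 MHz to 70 GHz
f = np.arange(1, 7001) * 10e6          # harmonic grid, 10-MHz steps
tau = 100e-12                          # assumed electrical delay
s21 = np.exp(-2j * np.pi * f * tau)

def noncausal_fraction(s, dc_term):
    """Fraction of impulse-response energy appearing at negative time."""
    spec = np.concatenate(([dc_term], s))   # prepend an extrapolated DC point
    h = np.fft.irfft(spec)                  # real-valued impulse response
    tail = h[3 * len(h) // 4 :]             # last quarter = negative time
    return np.sum(tail**2) / np.sum(h**2)

frac = noncausal_fraction(s21, 1.0)         # well-behaved data: near zero
```

For clean, wideband data on a harmonic grid this fraction is essentially zero; band-limited or noisy data pushes it up, flagging the convergence risks described above.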


Low-frequency impact

Poor causality results in reduced confidence in simulations, potential convergence problems, and inaccuracies. Additionally, poor low-frequency information leading to DC extrapolation errors also degrades model accuracy and leads to poor agreement with 3-D EM simulators.

These factors highlight the fact that the lower frequency bound of the sweep is as important as the upper. Model accuracy generally improves the closer the data is acquired to DC. To achieve step-response time-domain characterizations of DUTs, it is necessary to use a low-pass processing mode when transforming frequency-domain data. This requires estimation of a DC term.

Figure 2. DC Extrapolation from Measured S-Parameter Data






Figure 2 shows the impact of noisy S-parameter data and the difference made by capturing data down to lower frequencies. If the VNA measurement performance degrades at low frequencies (that is, below 1 GHz), there will be more uncertainty around the extrapolated DC point. A VNA with superior low-frequency measurement performance, even if only capturing data down to the same frequency, will produce better DC extrapolation due to reduced noise on the measurement. The capability to produce a more accurate DC extrapolation then is further improved if S-parameter data can be collected down to lower frequencies.
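The effect of low-frequency noise on the DC estimate can be illustrated with a simple extrapolation sketch (assumed synthetic S11 data with a true DC value of 0.10; the fit order and noise levels are illustrative, not the VNA's actual algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

f_ghz = np.arange(1, 101) * 0.01          # 10 MHz .. 1 GHz
true_s11 = 0.10 + 0.002 * f_ghz**2        # gentle frequency dependence

def dc_extrapolate(f_ghz, s11, noise_rms):
    """Fit a low-order polynomial to noisy data and evaluate it at f = 0."""
    meas = s11 + rng.normal(0.0, noise_rms, f_ghz.size)
    return np.polyval(np.polyfit(f_ghz, meas, 2), 0.0)

quiet = dc_extrapolate(f_ghz, true_s11, 0.001)   # low-noise low-frequency data
noisy = dc_extrapolate(f_ghz, true_s11, 0.02)    # noisy low-frequency data
```

With quiet data the extrapolated value lands close to the true 0.10; the noisy trace scatters its DC estimate by roughly twenty times as much, which is the uncertainty Figure 2 depicts.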

The impact of poor DC estimation can be simply illustrated by comparing two time-domain plots from a VNA. Figure 3 shows a step change in the reflection coefficient. Prior to 200 ps, the impedance of the line was 50 Ω and after that 0 Ω. With a poor DC extrapolation, the 50-Ω section can clearly be seen to show sloping impedance along its length, whereas with good extrapolation, the 50-Ω line is seen correctly.

Figure 3. Impact of Poor DC Extrapolation on Time-Domain Results

The second issue connected with the choice of start frequency (Fstart) is aliasing. VNAs produce discrete frequency-sampled S-parameter data. The sample spacing in the frequency domain is inversely proportional to the maximum unambiguous time (or distance) when converted into the time domain.
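That inverse relationship is easy to make concrete (a small helper, assumptions noted in the comments; the 10-MHz example step is illustrative):

```python
C_MM_PER_S = 2.998e11  # speed of light in mm/s (air dielectric assumed)

def alias_free_range_mm(f_step_hz, velocity_factor=1.0):
    """Maximum unambiguous one-way distance for a given frequency step.

    The alias-free time window is 1/f_step; reflections travel out and
    back, so the one-way distance is half the equivalent length.
    """
    t_max = 1.0 / f_step_hz
    return t_max * C_MM_PER_S * velocity_factor / 2.0

# 10-MHz steps give a 100-ns window, roughly 15 m one-way in air
print(alias_free_range_mm(10e6))
```

A DUT longer than this range will fold (alias) back into the displayed time span, so the frequency step, and therefore the start frequency on a harmonic grid, must be chosen with the DUT's electrical length in mind.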




Simulated eye patterns

Other types of time-domain modeling, such as simulation of eye patterns, also rely on step response characterizations. DC extrapolation is important in these models as well. 

In addition to the lowest frequency measured, the stability of the low-frequency data also is vital. Noisy and drifting low-frequency data will lead to varying time-domain results, due to changing DC point estimates. 

For example, consider the case where the measured S-parameter data for a backplane is fed into a software model to estimate the impact of that backplane on the eye diagram. Figure 4 shows how the eye-diagram estimate will appear when the low-frequency data has some error. In this example, a 0.5-dB error in the low-frequency (10-MHz) transmission data could take an 85% open eye to a fully closed eye. Since mid-band (10-GHz) transmission uncertainty may be near 0.1 dB, depending on setup and calibration, and can be higher at low frequencies, this eye distortion effect cannot be neglected.

Figure 4. Eye Diagram Estimate when Low-Frequency Data Has Some Error
Figure 5. Eye Diagram Estimate with Quality Low-Frequency Data

Figure 5 shows what the resulting eye diagram will look like if the low-frequency measurement data is of good quality. This prediction correlates very well with the actual eye diagram measured using an oscilloscope, as shown in Figure 6.

Figure 6. Eye Diagram from Oscilloscope

Need for time domain 

Sometimes problems are caused by vias, stackup issues, and connector pins. Frequency-domain data alone is not enough to uncover these issues; it is necessary to transform that data into the time domain to locate the position of particular problems. As a result, the time-domain performance of a VNA is critical when trying to locate defects on a backplane or cable. VNA time-domain results are computed from frequency-domain measurements using an inverse Fourier transform; specifically, a variation of the chirp-z inverse transform is used. This improves the capability to locate discontinuities, impedance changes, and crosstalk issues.

Lack of good low-frequency S-parameter data can lead to further complications when converting into the time domain, either for measurement of impedance changes along a line or for modeling. Resolution is maximized when the low-pass time-domain mode is used. This mode also permits characterization of impedance changes on the backplane. Low-pass mode requires a quasi-harmonically related set of frequencies that start at the lowest frequency possible. A DC term is extrapolated that provides a phase reference so the true nature of a discontinuity can be evaluated. Hence, the lower the start frequency, the better the extrapolation of the DC term.
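The quasi-harmonic sweep that low-pass mode requires can be sketched as follows (a hypothetical 70-GHz sweep; point count and stop frequency are illustrative):

```python
def lowpass_grid(f_stop_hz, n_points):
    """Harmonically related sweep for low-pass time-domain mode.

    Points fall at f_step, 2*f_step, ..., f_stop, so every frequency is an
    integer multiple of the step and a DC term can be extrapolated cleanly.
    """
    f_step = f_stop_hz / n_points
    return [k * f_step for k in range(1, n_points + 1)]

grid = lowpass_grid(70e9, 7000)   # starts at 10 MHz with 10-MHz spacing
```

More points for the same stop frequency push the start frequency lower, improving the DC extrapolation, at the cost of sweep time.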

A rule of thumb puts the resolution on the order of 150 mm (for an air dielectric) divided by the bandwidth in GHz. The broader the bandwidth, the more information will be presented in the time-domain analysis. Assuming the start frequency is, for all intents and purposes, DC, the bandwidth is equal to Fstop. Of course, if the DUT is band-limited, the resulting time-domain resolution may be limited by the bandwidth of the DUT.

Figure 7. Resolving Two Discontinuities with 40-GHz (1), 50-GHz (2), and 60-GHz (3) Spans

The improvement in resolution by capturing S-parameter data over a wider frequency range can be seen in Figure 7. This shows the improvement in time/distance domain resolution as the span is increased from 40 GHz (1) to 50 GHz (2) and then 60 GHz (3) when viewing two discontinuities that are 2 mm apart.
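The rule of thumb can be evaluated for the three spans in Figure 7 (a one-line sketch of the approximation, air dielectric assumed):

```python
def td_resolution_mm(bandwidth_ghz):
    """Rule-of-thumb time-domain resolution: ~150 mm / bandwidth (GHz)."""
    return 150.0 / bandwidth_ghz

for bw in (40, 50, 60):
    print(bw, "GHz span ->", td_resolution_mm(bw), "mm")
# 40 GHz -> 3.75 mm, 50 GHz -> 3.0 mm, 60 GHz -> 2.5 mm
```

The trend matches the figure: as the span widens, the computed resolution approaches the 2-mm spacing of the two discontinuities and they become distinguishable.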



De-embedding

There are many situations where it may not be possible to connect directly to the DUT. When this occurs, it is necessary to de-embed the DUT from the surrounding test fixtures.

The opposite is sometimes required: It may be useful to assess the performance of a device when it is surrounded by other networks. However, many passivity and causality problems are due to poor calibration and de-embedding methods. In addition, high fixture loss may affect the accuracy and repeatability of de-embedding. 

It is important to take into account the characteristics of the fixtures being de-embedded when choosing a de-embedding technique. For example, is it a two- or four-port network; are the input and output side fixtures symmetrical; are the lines coupled or uncoupled? The type of fixture will determine the optimum technique.
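For the simplest case in that list, uncoupled two-port fixtures with known S-parameters, de-embedding reduces to cascade (T-parameter) algebra. The sketch below is a minimal illustration of that case, using one common S-to-T convention; commercial VNA firmware implements more sophisticated methods for the four-port and coupled-line cases.

```python
import numpy as np

def s_to_t(s):
    """2x2 S-parameters -> cascade (T) parameters, one common convention."""
    s11, s12, s21, s22 = s[0, 0], s[0, 1], s[1, 0], s[1, 1]
    return np.array([[-(s11 * s22 - s12 * s21), s11],
                     [-s22, 1.0]], dtype=complex) / s21

def t_to_s(t):
    """Inverse mapping for the same convention."""
    return np.array([[t[0, 1] / t[1, 1],
                      (t[0, 0] * t[1, 1] - t[0, 1] * t[1, 0]) / t[1, 1]],
                     [1.0 / t[1, 1], -t[1, 0] / t[1, 1]]], dtype=complex)

def deembed(s_meas, s_fix_in, s_fix_out):
    """Strip known input/output fixture 2-ports from a measured 2-port."""
    t_dut = (np.linalg.inv(s_to_t(s_fix_in))
             @ s_to_t(s_meas)
             @ np.linalg.inv(s_to_t(s_fix_out)))
    return t_to_s(t_dut)
```

As a sanity check, cascading an assumed matched pad onto each side of a thru and then de-embedding those pads recovers the original thru, provided the fixture models are accurate; as noted above, lossy fixtures amplify any modeling error.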



Conclusion

Higher data rates require accurate measurements to provide the confidence needed to make performance/cost decisions when developing today’s backplanes and interconnects. Measurement tools must help shorten design times and ensure stable SI in mass production.

VNAs play a key role in helping the SI engineer meet the challenges of increasing data rates, make appropriate cost/performance trade-offs, achieve correlation between simulations and measurement, and extract the effect of fixtures. When selecting a VNA, you should evaluate characteristics such as upper and lower frequency limits, performance in the time domain, and a wide selection of advanced calibration and de-embedding techniques.


About the author

Bob Buxton is marketing manager of the General-Purpose Business Unit at Anritsu. He received a M.Sc. in microwaves and modern optics from University College London and an M.B.A. from George Fox University.
