# Test Time and Estimating Bit Error Rate

Quote of the Day

Our heads are round so our thoughts can change direction.

— Francis Picabia

## Introduction

Figure 1: Production test system. Time on these systems is costly and needs to be kept as short as possible. (Source)

Test time is expensive. Since our products need to conform to industry standards for Bit Error Rate (BER), we need to test for BER. It is important that we test long enough to ensure that we meet the requirements, yet not so long as to spend more money than we need to.

I was asked to develop a rational approach for determining the amount of test time required. I put together our current procedures years ago, but now they need to be refreshed as we prepare to offer newer, higher speed transports. While I was reviewing these procedures, I saw that the analysis required was interesting and thought I would document it here. Our procedures are based on a couple of papers from Maxim and Lightwave Magazine. In this blog, I generate a Mathcad model of BER based on the results of these papers and examine a couple of transport examples.

To demonstrate compliance, a number of bits must be transferred with fewer than a given number of errors to provide sufficient confidence that we meet the BER requirement. This analysis computes that number of bits; the test time then follows by dividing the number of bits by the transfer rate.

## Analysis

### Definitions

As with most technical discussions, it is important to get your terms defined upfront.

Bit Error Rate (BER)
BER is the ratio of the number of bit errors to the total number of bits transferred over an infinite time interval. Mathematically, we can express this definition as $BER\triangleq \underset{n\to \infty }{\mathop{\lim }}\,\frac{\varepsilon (n)}{n}$, where n is the number of bits transferred and ε(n) is the number of errors observed among those n bits.
Confidence Interval (CI)
The confidence interval is a particular kind of interval estimate of a population parameter and is used to indicate the reliability of an estimate. It is an observed interval (i.e., it is calculated from the observations), in principle different from sample to sample, that frequently includes the parameter of interest when the experiment is repeated. How frequently the observed interval contains the parameter is determined by the confidence level or confidence coefficient.
Confidence Level (CL)
The confidence level is the likelihood that the true population parameter lies within the range specified by the confidence interval. In this case, the confidence interval runs from 0 to the specified BER limit. For example, a 99% confidence level tells us that for a given sample size and number of bit errors, 99% of the time the true BER is within the confidence interval. Mathematically, we can express this definition as $CL\triangleq P\left( BER<\gamma |\ \varepsilon ,\ n \right)$, where γ is the confidence limit.

### Modeling

Equation 1 gives us the probability of having N or fewer errors in a test described by a binomial distribution.

 Eq. 1 $P\left( \varepsilon \le N \right)=\sum\limits_{k=0}^{N}{{{C}_{n,k}}\cdot {{p}^{k}}\cdot {{q}^{n-k}}}$

where Cn,k is the number of combinations of n items taken k at a time, n is the number of bits transferred, p is the probability of a bit error, and q = 1 − p.

When the probability p is small and the number of observations n is large, binomial probabilities are difficult to calculate directly. Fortunately, the binomial distribution in this case is well approximated by the Poisson distribution. For those who want more details on this approximation, please check out one of the web sites that demonstrate its validity. Equation 2 shows the result of substituting the Poisson distribution for the binomial distribution.

 Eq. 2 $P\left( \varepsilon \le N \right)=\sum\limits_{k=0}^{N}{{{C}_{n,k}}}\cdot {{p}^{k}}\cdot {{q}^{n-k}}=\sum\limits_{k=0}^{N}{\frac{{{\left( np \right)}^{k}}}{k!}\cdot {{e}^{-n\cdot p}}}$
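You can check the quality of the approximation numerically. Here is a short Python sketch (the values of n, p, and N below are arbitrary illustrative choices, not from the post) that evaluates both sides of Equation 2:

```python
import math

def binom_cdf(N, n, p):
    """P(eps <= N) for a binomial distribution: sum of C(n,k) * p^k * q^(n-k)."""
    q = 1.0 - p
    return sum(math.comb(n, k) * p**k * q**(n - k) for k in range(N + 1))

def poisson_cdf(N, mu):
    """Poisson approximation with mu = n*p: sum of mu^k/k! * e^(-mu)."""
    return math.exp(-mu) * sum(mu**k / math.factorial(k) for k in range(N + 1))

# Small-p, large-n regime, where the approximation is good.
n, p, N = 1_000_000, 2e-6, 3
print(f"binomial: {binom_cdf(N, n, p):.6f}")    # both values ≈ 0.8571
print(f"poisson:  {poisson_cdf(N, n * p):.6f}")
```

The two results agree to several decimal places, which is why the substitution in Equation 2 is safe for BER work, where p is tiny and n is enormous.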

We can relate the confidence level CL to the Poisson distribution as shown in Equation 3.

 Eq. 3 $CL=1-P\left( \varepsilon \le N \right)\Rightarrow \sum\limits_{k=0}^{N}{\frac{{{\left( np \right)}^{k}}}{k!}\cdot {{e}^{-n\cdot p}}}=1-CL$

We can manipulate Equation 3 to form Equation 4, which is convenient for use with Mathcad's nonlinear numerical solver.

 Eq. 4 $-n\cdot p=\ln \left( 1-CL \right)-\ln \left( \sum\limits_{k=0}^{N}{\frac{{{\left( np \right)}^{k}}}{k!}} \right)$
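Equation 4 has no closed-form solution for N > 0, but any root finder will do. Here is a minimal Python sketch (not the Mathcad solver) that solves Equation 3 for the product n·p by bisection, which works because the left-hand side decreases monotonically in n·p:

```python
import math

def np_required(CL, N, lo=0.0, hi=100.0, tol=1e-9):
    """Solve e^(-np) * sum_{k=0}^{N} (np)^k/k! = 1 - CL for the product np."""
    def poisson_cdf(mu):
        return math.exp(-mu) * sum(mu**k / math.factorial(k) for k in range(N + 1))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if poisson_cdf(mid) > 1.0 - CL:
            lo = mid   # CDF still too large: np must be bigger
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sanity check: with zero allowed errors, Eq. 4 reduces to np = -ln(1 - CL),
# so CL = 99% gives np = -ln(0.01) ≈ 4.605.
for N in range(5):
    print(N, round(np_required(0.99, N), 3))
```

Note that the solver returns the dimensionless product n·p; dividing by the BER limit p gives the required number of bits n.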

Note that the number of bits required (n) does not vary with transport speed; only the test time does.

## Worked Example

Figure 2 is a screenshot of the Mathcad worksheet that I used to work this example. In Figure 2, the variable α represents the number of bits that must be transferred with a given number of errors to meet the required CL.

Figure 2: Illustration of BER Calculation in Mathcad.

For the example worked here, I assume that

• CL = 99%
This is a pretty strict standard. Some folks use 90%, others 60%. The higher the CL you require, the more time you must spend testing.
• Maximum allowed BER of 1E-10
This reflects GPON requirements. Other transports have different requirements.
• Test time is computed by multiplying the required number of bits transferred by the bit time $\left( {{T}_{Test}}=\frac{n}{{{f}_{DataRate}}} \right)$.

I computed results for 1 Gigabit Ethernet (fDataRate = 1.25 Gbps) and GPON (fDataRate = 2.488 Gbps). Table 1 summarizes my results.

| Number of Errors | Total Bits Transferred | Test Time @ 1.25 Gbps (sec) | Test Time @ 2.488 Gbps (sec) |
|------------------|------------------------|-----------------------------|------------------------------|
| 0                | 4.61E10                | 36.84                       | 18.51                        |
| 1                | 6.64E10                | 53.11                       | 26.68                        |
| 2                | 8.41E10                | 67.25                       | 33.79                        |
| 3                | 1.00E11                | 80.36                       | 40.37                        |
| 4                | 1.16E11                | 92.84                       | 46.64                        |
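As a cross-check, Table 1 can be reproduced with a short Python script (a sketch, not the original Mathcad worksheet), assuming the 1E-10 BER limit, CL = 99%, and the two line rates given above:

```python
import math

def np_required(CL, N, lo=0.0, hi=100.0, tol=1e-9):
    """Bisection solve of e^(-np) * sum_{k=0}^{N} (np)^k/k! = 1 - CL for np."""
    def poisson_cdf(mu):
        return math.exp(-mu) * sum(mu**k / math.factorial(k) for k in range(N + 1))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if poisson_cdf(mid) > 1.0 - CL else (lo, mid)
    return 0.5 * (lo + hi)

BER_LIMIT, CL = 1e-10, 0.99
for N in range(5):
    n = np_required(CL, N) / BER_LIMIT   # total bits to transfer
    t_ge = n / 1.25e9                    # 1 GbE test time (s)
    t_gpon = n / 2.488e9                 # GPON test time (s)
    print(f"{N}  {n:.2e}  {t_ge:6.2f} s  {t_gpon:6.2f} s")
    # first row: 0  4.61e+10   36.84 s   18.51 s
```

The printed rows match Table 1, and the script makes the scaling explicit: halving the BER limit doubles n and therefore doubles the test time at a given line rate.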

## Conclusion

I derived an expression for the number of bits that must be transferred to provide a given level of confidence for having achieved a specified BER. One can see from my example that achieving a 99% confidence level requires a lot of test time. Since test time can cost hundreds of dollars per hour, you can see how the costs add up quickly.


This entry was posted in Electronics, Fiber Optics.

### 9 Responses to Test Time and Estimating Bit Error Rate

1. Joel says:

In Table 1, what are the units for test time? Must be hours or you wouldn't say it was a long time.

2. mathscinotes says:

Actually, the test time is in seconds. The test time cost is on the order of \$300 per hour, or about \$5 per minute. Since some of the testing takes 90 seconds, we are talking about paying \$7.50 for one test. That is a lot of money. To reduce the cost, we end up shortening the time interval and reducing our confidence level.

Also, I added seconds to the test time columns on the table. Thanks for the good catch.

3. christian says:

Thanks for your post. It's really helpful. I noticed that Eq. 4 only depends on the product n times p, so your results for p=10^-10 can easily be scaled to other error probabilities. If n=4.61x10^10 with p=10^-10, then np=4.61. Looking at it this way makes your results much more useful, since the test times can be evaluated for any error probability and data rate.

• mathscinotes says:

Excellent observation! I will update the post to include this point.

Thanks

Mathscinotes

4. Bill says:

I have been trying to find/develop/use such a cookbook formula to calculate the packet loss ratio where test packets are introduced at a lower data rate than the channel capacity and would not interfere with the user traffic. From my understanding, your calculations are based on sending test data at full channel capacity. One cannot linearly scale the test data rate and conclude that it is a representative test sample of the channel under test (e.g., for a channel whose capacity is 10,000 packets per second, sending 10,000 packets per second of test data for 1 second is not equivalent to 1,000 packets per second for 10 seconds).

5. ajith says:

How can we model the BER test as a Poisson distribution when any random sample (test) is not independent of the previous sample (test)? An initial sample (test) will have less peak-to-peak random jitter compared with a later sample (test), since random jitter is unbounded and keeps increasing.