Drive-test surveys back in spotlight
April 1, 2006
The ongoing reconfiguration of the 800 MHz band, i.e., rebanding, has created renewed interest in the drive-test survey. Rebanding requires modifications to both network infrastructure and user radios, so there always is some risk that the rebanded system will not match the performance of the original. The licensee naturally wants proof the two systems are equivalent, especially in terms of geographical coverage. One way to verify equivalent coverage is the drive-test survey.
Because drive testing is labor-intensive and expensive, it should not be done haphazardly. One must employ accurate, efficient and thorough collection methods to ensure the results are unambiguous.
All rebanding projects require a thorough set of repeater site measurements before and after rebanding. These measurements should not be abandoned simply because drive-test surveys will be conducted. No one wants to discover a problem during the survey and be forced to correct it and redo the survey when the problem could have been caught at the outset by the site measurements.
The fundamental elements of the drive-test survey are the receiver system, the drive routes and the methods used to ensure reproducibility. The receiver should employ an omnidirectional antenna, its sensitivity should be equal to or better than that of the user radio, it must be accurate (+/- 1.5 dB), and it should have high dynamic range. The receiver also should be computer-controlled and include GPS data-logging. The service area boundaries should be well-defined, and the drive routes should be developed before the survey is conducted. Ideally, measurements should be collected on a uniform grid; alternatively, randomly distributed data can be plotted to a uniform grid during post-processing.
One must drive a dense grid that includes both thoroughfares and side streets. To ensure the results are reproducible (in a statistical sense), the identical receiver system should be used for each survey. In addition, the surveys should be done during the same time of year, and identical routes should be driven.
Now that we have identified the fundamentals, let’s consider what we are measuring. The mobile radio channel is rarely line-of-sight, and the received signal is the sum of many reflected and diffracted signals. The term multipath fading is used to describe the time-varying amplitude and phase that characterize the composite signal at the receiver.
These fluctuations are usually modeled as Rayleigh fading with Rayleigh-distributed amplitude and uniformly distributed phase [2]. Figure 1 is a plot of amplitude versus time for a typical Rayleigh fading mobile radio channel.
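To make the multipath model concrete, the short simulation below (ours, not from the article; it assumes Python with NumPy) sums many equal-power paths with random phases and confirms that the composite envelope behaves as the Rayleigh model predicts:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Sum many reflected/diffracted paths of equal power but random phase.
# By the central limit theorem the composite signal is complex Gaussian,
# so its envelope is Rayleigh-distributed -- the usual multipath model.
n_paths, n_samples = 32, 100_000
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_samples, n_paths))
composite = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_paths)

envelope = np.abs(composite)
print(f"mean envelope: {envelope.mean():.3f}")                # ~0.886 = sqrt(pi)/2
print(f"RMS envelope:  {np.sqrt((envelope**2).mean()):.3f}")  # ~1.0 by construction
```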
Mobile and portable receivers are usually specified to operate with a minimum local mean in the presence of Rayleigh fading. Thus, for the survey to be a useful indicator of receiver performance, we should measure the local mean, not the instantaneous signal. Estimating the local mean requires that we average subsample measurements of the instantaneous signal over some distance. The preferred minimum distance is 40 wavelengths, as it adequately smooths the Rayleigh fading [1], [3]. Long averaging distances tend to include changes in the local mean due to location variability and are therefore not desirable. However, there is no ironclad rule on the maximum averaging distance.
A minimum number of subsamples is required to get an accurate estimate of the local mean within the averaging distance (again, a minimum of 40 wavelengths). The usual rule of thumb is 50 subsamples as this number ensures a 90% confidence interval of +/- 1 dB if the amplitude is Rayleigh-distributed [1]. (There appears to be an error in the equation on page 123 of TSB-88-B. See page 90 of TSB-88-A for the correct expression.)
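To illustrate the averaging just described, here is a minimal sketch (ours, not from the article; the function name and use of Python/NumPy are assumptions) that estimates the local mean from a run of instantaneous subsamples, averaging in linear power units rather than in decibels:

```python
import numpy as np

def local_mean_dbm(subsamples_dbm):
    """Estimate the local mean from instantaneous RSSI subsamples (dBm).

    The caller is assumed to supply at least 50 subsamples collected
    over at least 40 wavelengths (roughly 15 m at 800 MHz), per the
    guidance above. Averaging is done in linear power units (mW),
    then converted back to dBm.
    """
    samples = np.asarray(subsamples_dbm, dtype=float)
    if samples.size < 50:
        raise ValueError("need >= 50 subsamples for a 90% CI of +/- 1 dB")
    power_mw = 10.0 ** (samples / 10.0)        # dBm -> mW
    return 10.0 * np.log10(power_mw.mean())    # mean mW -> dBm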
Drive-test measurements are random variables and one should not assume that measurements taken at the same location on two different days will be identical. There are simply too many variables beyond our control. There is, of course, the measurement tolerance of the test receiver, but even a perfect receiver cannot control the time-varying environment surrounding the receiver. Variations between measurements at discrete locations are normal and do not necessarily indicate a problem with the rebanded system.
Rather than compare discrete locations, one should compare performance using an area-wide metric, specifically the service area reliability [1]. The service area reliability is the probability that a particular location, picked at random, will have adequate service. Adequate service typically is defined as a measured signal above a threshold, say -99 dBm.
The service area reliability is estimated by computing the ratio of the number of measured locations above the threshold to the total number of locations measured, as depicted by Equation 1.
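In code, Equation 1 reduces to a simple pass/fail count over the plotted grid points. The following is a minimal sketch (ours; the function name, the threshold default and the use of Python are assumptions, not from the article):

```python
def service_area_reliability(local_means_dbm, threshold_dbm=-99.0):
    """Equation 1: percent of measured grid points above the threshold."""
    passed = sum(1 for m in local_means_dbm if m > threshold_dbm)
    return 100.0 * passed / len(local_means_dbm)

# Example: 1,650 of 1,702 grid points above -99 dBm -> ~96.9% reliability.
```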
We now know how to collect measurements and what performance metric to use, but what is the minimum number of measurements required to ensure an accurate estimate?
To answer this question, we first model each measurement sample as an independent trial with probability of success, p, where p is the probability that the measurement is above the service threshold. (Remember that the measurement sample is actually a linear average of at least 50 subsamples collected over at least 40 wavelengths.)
The number of successes in n trials is a binomial random variable that we will designate x. If we conduct an experiment with n trials and observe x successes, the point estimate for p is simply x/n.
However, a point estimate alone tells us nothing about its accuracy. What we really need is a measure of confidence that the point estimate, x/n, resides in a small interval around p. For our application, an appropriate confidence level is 90% and a reasonable confidence interval is +/- 0.02. In other words, we want to know the number of samples required to ensure the estimate is inside the confidence interval of +/- 0.02 with a confidence level of 90%.
Using the normal approximation to the binomial distribution, one can show that the minimum required number of samples is approximated by Equation 2 [4], [5].
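For readers who want the intermediate step: setting the half-width of the normal confidence interval for x/n equal to d and solving for n gives

$$d = z_{\alpha/2}\sqrt{\frac{p(1-p)}{n}} \quad\Longrightarrow\quad n = \frac{z_{\alpha/2}^2\, p(1-p)}{d^2}$$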
However, Equation 2 is not entirely satisfactory because it includes the very parameter we want to estimate, p. Fortunately, the product p(1 - p) is always less than or equal to 1/4, so the worst-case minimum value of n can be calculated using Equation 3.
For z_{α/2} = 1.65 and d = 0.02, we find n = 1702. Thus, we require at least 1702 samples to achieve the required confidence level and confidence interval. Because most surveys produce some bad data that cannot be used, the survey should allow for a somewhat larger sample size, say n = 1750. Note that this value corresponds to the number of uniformly collected or uniformly plotted measurements.
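The arithmetic behind Equation 3 is easy to check. The following sketch (ours, not from the article; the function name is hypothetical) reproduces the n = 1702 figure:

```python
import math

def worst_case_sample_size(z=1.65, d=0.02):
    """Equation 3: worst-case minimum grid measurements, using p(1-p) <= 1/4."""
    return math.ceil(z * z / (4.0 * d * d))

print(worst_case_sample_size())  # 1702 for 90% confidence, +/- 0.02 interval
```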
If before and after measurements are taken at the same time of year using the same test receiver and antenna, the measured service area reliability should be reproducible within a range equal to twice the confidence interval. Why twice the confidence interval? Because each value of measured service area reliability is only an estimate of the actual service area reliability. For example, let’s assume a 90% confidence interval of +/-2% and an actual service area reliability of 95% (known only to the omniscient). If the pre-rebanding service area reliability estimate is 97% and the post-rebanding service area reliability estimate is 93%, both are within the confidence interval centered on the actual value, but they are not within +/- 2% of each other.
If, after collecting the pre-rebanding measurements, the service area reliability is found to be high (e.g., > 95%), one might be tempted to retain the number of plotted grid measurements, n, but narrow the 90% confidence interval, d, using Equation 2. We don’t recommend this approach as it tightens the equivalency requirement so much that other variables not considered explicitly in the statistical analysis may skew the results.
Jay Jacobsmeyer is president of Pericle Communications Co., a consulting engineering firm located in Colorado Springs, Colo. He holds bachelor's and master's degrees in electrical engineering from Virginia Tech and Cornell University, respectively, and has more than 20 years' experience as a radio frequency engineer.
References:
[1] EIA TSB-88-B, "Wireless Communications Systems – Performance in Noise and Interference-Limited Situations, Recommended Methods for Technology-Independent Modeling, Simulation and Verification," with Addendum 1, May 2005.
[2] W.C. Jakes, ed., Microwave Mobile Communications, IEEE Press reissue, 1994.
[3] W.C.Y. Lee, Mobile Cellular Telecommunications Systems, McGraw-Hill, 1989.
[4] R.J. Larsen and M.L. Marx, An Introduction to Mathematical Statistics and Its Applications, Prentice-Hall, 1986, p. 281.
[5] C. Hill and B. Olson, "A Statistical Analysis of Radio System Coverage Acceptance Testing," IEEE Vehicular Technology Society News, February 1994.
[6] J.M. Jacobsmeyer and G.W. Weimer, "Guidelines for Conducting Drive Test Surveys for 800 MHz Rebanding," October 1, 2005. Available at
Equation 1

$$\text{Service Area Reliability}\ (\%) = \frac{T_p}{T_t} \times 100\%$$

where

T_p is the total number of grid points passed (e.g., those where C > -99 dBm)
T_t is the total number of grid points measured
Equation 2

$$n = \frac{z_{\alpha/2}^2\, p(1-p)}{d^2}$$

where z_{α/2} is the argument of the unit normal distribution for a confidence level of 1 - α, and d is one-half of the confidence interval [4], [5]. For example, for 90% confidence, z_{α/2} = 1.65.
Equation 3

$$n = \frac{z_{\alpha/2}^2}{4d^2}$$