Discussion on Measurement of Field Strength by Spectrum Analyzer

The spectrum analyzer is a widely used signal-analysis instrument. It can measure a signal's frequency, level, waveform distortion, noise level, spectral characteristics, and so on; combined with a standard antenna, it can also measure field strength. Its main features are that it scans continuously over a wide band and displays the measured signal visually on the CRT screen, and that its level display range exceeds 70 dB across the entire frequency band, which makes it convenient for observing spectrum occupancy and signal activity in radio-wave measurement. In many situations the spectrum analyzer is therefore replacing the field-strength meter as a widely used radio-wave measurement instrument. However, the two instruments differ in design, so they cannot be used in exactly the same way; ignoring the differences will introduce large measurement errors.

First, level-scale conversion and impedance matching. Usually the spectrum analyzer's display scale is in dBm, but in field-strength measurement and related radio-wave propagation work it is customary to use dBμV/m, so a unit conversion is needed first. In fact, field-strength measurement is a measurement of the voltage induced at the terminals of a standard antenna; once the spectrum analyzer's reading is converted into voltage units, the measured field strength is obtained by adding the antenna factor of the antenna used.

The unit-conversion factor of the spectrum analyzer depends on its input impedance: for a 50 Ω system, V(dBμV) = P(dBm) + 107 dB; for a 75 Ω system, V(dBμV) = P(dBm) + 108.8 dB.
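The two conversion factors above both follow from P = V²/Z, which gives V(dBμV) = P(dBm) + 90 + 10·log₁₀(Z). A minimal sketch of the conversion chain, including the antenna-factor step described above (function names are illustrative, not from any instrument API):

```python
import math

def dbm_to_dbuv(p_dbm, impedance_ohm=50):
    """Convert a spectrum-analyzer power reading (dBm) to terminal voltage (dBuV).

    Derived from P = V^2 / Z:
    V(dBuV) = P(dBm) + 90 + 10*log10(Z)
    For 50 ohm this is +106.99 dB (rounded to 107 dB in the text);
    for 75 ohm it is +108.75 dB (the text's 108.8 dB).
    """
    return p_dbm + 90 + 10 * math.log10(impedance_ohm)

def field_strength_dbuv_m(reading_dbm, antenna_factor_db, impedance_ohm=50):
    """Field strength E (dBuV/m) = terminal voltage (dBuV) + antenna factor (dB/m)."""
    return dbm_to_dbuv(reading_dbm, impedance_ohm) + antenna_factor_db
```

For example, a -30 dBm reading on a 50 Ω input with a 10 dB/m antenna factor corresponds to a field strength of about 87 dBμV/m.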

Most modern spectrum analyzers use microprocessor control, and the display scale can be converted automatically. In actual measurement, special attention should be paid to the matching between the antenna impedance and the test system to avoid mismatch error. Because the spectrum analyzer performs wideband scanning, broadband antennas are required, and the VSWR of a broadband antenna is generally large. If the spectrum analyzer is not connected to a matched antenna, the antenna factor of the antenna used must be re-calibrated.

In actual measurement, the input attenuator should not be set to the 0 dB position. With the attenuator at 0 dB, the input signal is connected directly to the mixer, the input impedance characteristics deteriorate, and a larger mismatch error results.

Second, preventing spectrum analyzer overload. A general test receiver has a tuned RF circuit at its input that suppresses out-of-band signals and improves sensitivity.

Because of its broadband continuous fast-scanning design, the spectrum analyzer's input is generally connected directly to the first mixer. When the signal level is high, the mixer operates in a nonlinear conversion state, producing high-order intermodulation and conversion-gain compression, and an excessively high level (generally greater than +5 dBm) can burn out the mixer. In use, therefore, the RF attenuator must be chosen rationally to ensure linear operation.

For the mixer to perform linear frequency conversion, the IF amplifier to amplify linearly, and the spurious-response level on the screen to be minimized, the signal power applied to the mixer should be as small as possible; but to expand the dynamic range of level measurement, the input power should be as large as possible. To balance these demands, three levels are specified for selecting the input signal level:

(1) Optimum input signal level. The input level at which the distortion products generated in the spectrum analyzer's input mixer remain below a specified level is called the optimum input level. It varies with the configuration of the mixer. Usually the optimum input level of an analyzer is -30 dBm. With this input level, the distortion and spurious-response levels produced by the analyzer are below -90 dBm, i.e. any signal appearing between -30 dBm and -90 dBm is a true signal. The display then has a dynamic range of 60 dB.

(2) Linear input signal level. The maximum input level at which the input mixer's characteristic remains linear is called the linear input level.

Here "linear" means that the input mixer is allowed 1 dB of gain compression; 1 dB of gain compression corresponds to an error of approximately 12.2%. When the signal level applied to the mixer is within the linear input range, the gain compression is less than 1 dB, but this does not mean that no distortion or spurious responses appear on the display. Only when the power applied to the mixer is reduced to the optimum input level do false responses disappear from the screen. In general, a spectrum analyzer's linear input level is -5 dBm to -10 dBm, depending on the characteristics of the input mixer.
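The 12.2% figure quoted above is simply the amplitude ratio corresponding to 1 dB, i.e. 10^(1/20) − 1. A one-line check:

```python
def compression_error_pct(comp_db):
    """Voltage (amplitude) error, in percent, introduced by a given gain
    compression in dB: 10^(dB/20) - 1. For 1 dB this is about 12.2%,
    matching the figure quoted in the text."""
    return (10 ** (comp_db / 20) - 1) * 100
```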

(3) Maximum input level.

The input level that burns out the analyzer is called its maximum input level. It is determined by the characteristics of the input attenuator and mixer: the typical burnout level of the input mixer is +10 dBm, and that of the input attenuator is +30 dBm.

In actual measurements, the optimum input level should always be used so that the measurement is undistorted and the false-response level is minimized. When the input is a single large signal, using the optimum input level gives satisfactory results. However, when several high-level signals are present at the input, even if some of them lie outside the spectrum analyzer's measurement span, the non-selective input lets their powers superimpose; this easily overloads the mixer and generates high-order intermodulation distortion, producing false responses. It is therefore necessary to attenuate signal power other than the measured signal. The best method is to add a tracking filter, i.e. a preselector; for example, HP in the United States and R/S in West Germany offer preselectors for their spectrum analyzers.

Some spectrum analyzers have no matching preselector, but a fixed band-pass filter can be added for the measured frequency band. In that case the spectrum analyzer and a tracking generator should be used to measure the filter's in-band ripple and insertion loss carefully, record them point by point, and fold them into the antenna correction factor when measuring field strength. If no band-pass filter is available, a suitable high-pass filter can be chosen for the band to be measured. Practice has shown that strong broadcast and electromagnetic interference is mostly concentrated in the medium-wave, short-wave, and FM bands at the low end of VHF; with a high-pass filter, signals below the measured band can be attenuated by more than 40 dB, greatly reducing intermodulation and cross-modulation distortion.
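The point-by-point bookkeeping described above can be sketched as a simple lookup: a per-frequency insertion-loss table (values below are purely illustrative) whose entries are added, together with any cable loss, to the antenna factor used in the field-strength calculation.

```python
# Hypothetical per-frequency insertion-loss table (dB), measured with the
# spectrum analyzer plus tracking generator as described in the text.
filter_loss_db = {88e6: 1.2, 100e6: 1.0, 108e6: 1.4}

def corrected_antenna_factor(freq_hz, antenna_factor_db, cable_loss_db=0.0):
    """Fold the filter's measured insertion loss at this frequency (and any
    cable loss) into the antenna correction factor applied to readings."""
    return antenna_factor_db + filter_loss_db[freq_hz] + cable_loss_db
```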

To verify that the mixer is operating in its optimum state, one can increase the RF attenuator by 10 dB and check that the displayed level decreases by 10 dB. Usually -30 to -35 dBm is the mixer's best operating point, i.e. the optimum input level of the spectrum analyzer is -30 to -35 dBm. Choosing the optimum input level lays a good foundation for further accurate measurement.
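The attenuator check above can be expressed as a small helper (a sketch under the assumption that the display does not auto-compensate for the attenuator change): if adding 10 dB of RF attenuation moves the reading by 10 dB, the mixer is linear; a smaller shift indicates compression or overload.

```python
def mixer_is_linear(reading_before_dbm, reading_after_dbm, tol_db=1.0):
    """Overload check: after adding 10 dB of RF attenuation, the displayed
    level should drop by 10 dB (within tol_db) if the mixer is linear."""
    return abs((reading_before_dbm - reading_after_dbm) - 10.0) <= tol_db
```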

Third, choosing the appropriate IF bandwidth. A spectrum analyzer offers many IF bandwidths (also called resolution bandwidths), about ten settings from 1 MHz down to 1 kHz or below.

However, because of the spectrum analyzer's continuous-scanning characteristic, its IF filter is a Gaussian type with a large shape factor; the 60 dB:3 dB bandwidth ratio is generally 10:1. A test receiver's IF filter has a small shape factor, typically 2:1 for the 60 dB:6 dB bandwidth ratio (the test receiver typically uses double-tuned circuits, with B3 = 0.8 B6). The spectrum analyzer also has a large noise figure, typically 19 dB. Therefore, at the same IF bandwidth, the spectrum analyzer has a higher noise level than the test receiver.

With these differences understood, the appropriate IF bandwidth can be selected from the measurement conditions and the characteristics of the measured signal. To measure two adjacent signals spaced 25 kHz apart: if their levels are similar, a 10 kHz IF bandwidth can resolve them; if their levels differ greatly, a 3 kHz or 1 kHz IF bandwidth must be used. When selecting the IF bandwidth, attention must also be paid to the sweep time: sweeping too fast leaves the filter no time to respond, producing inaccurate readings. Many spectrum analyzers, especially more advanced modern ones, can automatically adjust the sweep time to match the span and the IF bandwidth. With manual adjustment, note that whenever the IF bandwidth is changed, the sweep time must be changed accordingly to keep the measurement accurate.

To measure weaker signals, the IF bandwidth must be reduced so that the spectrum analyzer's noise level falls below the measured signal. Specifications generally give the average noise level at the minimum IF bandwidth; a typical value for a mid-range spectrum analyzer is -115 dBm. For the measurement result to be valid, the signal-to-noise ratio should be better than 6 dB, so the minimum measurable level is -109 dBm, i.e. -2 dBμV. The minimum level actually measurable is also limited by the analyzer's spurious-response specification, and when the measured signal is below 1 μV, interference entering through the chassis and power cable makes the result unreliable.
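The numbers above chain together simply: the noise floor scales as 10·log₁₀ of the bandwidth ratio, and the minimum valid signal is the noise floor plus the required SNR, converted to dBμV with the 50 Ω factor from the first section. A sketch (the reference values are the typical figures quoted in the text, not a specific instrument's datasheet):

```python
import math

def noise_floor_dbm(ref_noise_dbm, ref_bw_hz, bw_hz):
    """Displayed average noise level scales as 10*log10 of the IF-bandwidth
    ratio relative to a reference bandwidth."""
    return ref_noise_dbm + 10 * math.log10(bw_hz / ref_bw_hz)

def min_measurable_dbuv(noise_dbm, snr_db=6.0):
    """Minimum valid signal = noise floor + required SNR, converted to dBuV
    for a 50-ohm input (+107 dB)."""
    return noise_dbm + snr_db + 107
```

With the typical -115 dBm noise floor this gives -2 dBμV, the minimum level quoted above; widening the IF bandwidth from 1 kHz to 10 kHz raises the floor by 10 dB.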

Fourth, how to ensure measurement accuracy. A test receiver is equipped with a standard impulse oscillator, so that changes in frequency, attenuator, or IF bandwidth during measurement can be calibrated at any time. Its measurement accuracy is mainly determined by the accuracy of the standard oscillator and the input mismatch error, and is generally ±2 dB.

The spectrum analyzer is calibrated with a standard signal at a fixed frequency, which introduces an error when measuring at other frequencies. Changes in the RF attenuator, reference level, IF bandwidth, display scale, and so on also produce errors. For modern spectrum analyzers these errors are typically:

Calibration signal absolute error: ±0.3dB

Frequency response (including input mismatch): ±0.5~2dB

RF attenuator change: 1~2dB

Reference level change: 0.5dB

IF bandwidth change: 0.5~1dB

Display scale change: 1dB

CRT display nonlinearity error: 1~2dB

At first glance these errors add up to more than 4.5 dB, but the actual error depends strongly on the measurement method. During measurement, errors can be minimized if the instrument settings are kept the same as during calibration. The IF-substitution method is generally adopted: the reference level is changed while the IF bandwidth and display scale are kept fixed. When the calibration signal and the measured signal are each brought to the corresponding reference level, the measured signal level equals the calibration signal level plus the change in reference level. It is worth noting that if the signal-to-noise ratio is kept above 12 dB during measurement, the error depends only on the first four terms listed above and can reach ±2 dB. Of course, a calibration signal generator set to the same frequency as the measured signal can also be substituted for it, giving still higher accuracy.

Fifth, measurement of various industrial interference field strengths. Current spectrum analyzers display the instantaneous peak value of the measured signal, whereas national standards and international practice recommend quasi-peak measurement. The quasi-peak detector simulates the subjective response of the human ear to various impulsive industrial interferences and has specified charge and discharge time constants. According to the national standard, the charge and discharge time constants of the quasi-peak detector are 1 ms and 160 ms from 150 kHz to 30 MHz, and 1 ms and 550 ms from 30 MHz to 1000 MHz. The time constants of a peak detector are not strictly specified; its charging time constant is much smaller than the quasi-peak detector's, generally on the order of microseconds, and its discharge time constant is on the order of milliseconds or even seconds.
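To illustrate why the charge and discharge time constants matter, here is a toy first-order quasi-peak detector using the 1 ms / 160 ms constants quoted above. It is a sketch only: the CISPR-specified weighting networks and meter dynamics are omitted. The key behavior it reproduces is that sparse impulses read much lower on quasi-peak than dense ones, which is exactly the subjective weighting the detector exists to provide.

```python
import math

def quasi_peak(samples, fs_hz, tau_charge=1e-3, tau_discharge=160e-3):
    """Toy quasi-peak detector: first-order charge toward the input envelope
    with time constant tau_charge, first-order discharge with tau_discharge.
    Constants default to the 150 kHz - 30 MHz band values quoted in the text."""
    a_c = 1 - math.exp(-1 / (fs_hz * tau_charge))     # per-sample charge step
    a_d = 1 - math.exp(-1 / (fs_hz * tau_discharge))  # per-sample discharge step
    y, out = 0.0, []
    for x in samples:
        x = abs(x)
        if x > y:
            y += a_c * (x - y)   # charge toward the peak
        else:
            y -= a_d * y         # slow discharge between pulses
        out.append(y)
    return out
```

Feeding it a pulse every 10 ms versus a pulse every 0.1 ms (at a 100 kHz sample rate), both with the same peak amplitude, the dense train settles to a much higher quasi-peak reading, even though a peak detector would report both as equal.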
Strictly speaking, the spectrum analyzer does not fully comply with the CISPR quasi-peak measurement regulations. Nevertheless, to extend its range of application, manufacturers in the United States and West Germany such as R/S, along with Takeda Riken Co., Ltd., have added a CISPR (quasi-peak) measurement mode to their spectrum analyzers as an option (specified when ordering).

In use, note that because of the quasi-peak time constants, the spectrum analyzer's sweep must be slow: with manual operation, generally slower than 3 to 10 s/MHz. Obviously, such a slow sweep is unacceptable for a wide-range or full-band search measurement. However, spectrum analyzers with a CISPR mode mostly have microprocessors and automatic test functions, so a slight change of method resolves the conflict caused by the quasi-peak charge and discharge time constants: first sweep the whole band quickly in peak mode to find the frequencies with the strongest interference, then sweep slowly around those frequencies in quasi-peak mode to determine compliance. These operations can generally be done with automatic test software or manually.
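The two-pass strategy above can be sketched as a small driver. The instrument-interface callables `peak_scan` and `qp_measure` are hypothetical stand-ins for whatever remote-control layer a given analyzer provides; only the control flow is the point here.

```python
def two_pass_scan(peak_scan, qp_measure, limit_db, top_n=10):
    """Fast peak sweep to find candidate frequencies, then a slow quasi-peak
    check of each against the limit.

    peak_scan() -> {freq_hz: level_db}   (fast, peak-detector sweep)
    qp_measure(freq_hz) -> level_db      (slow, quasi-peak measurement)
    Returns {freq_hz: passed_limit} for the top_n strongest candidates."""
    levels = peak_scan()
    candidates = sorted(levels, key=levels.get, reverse=True)[:top_n]
    return {f: qp_measure(f) <= limit_db for f in candidates}
```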

In addition, spectrum analyzers with a CISPR mode are mostly bench-top instruments of considerable size and weight, suitable for use in a laboratory or fixed station but not for mobile field work.
