Answer to Question #10433 Submitted to "Ask the Experts"
The following question was answered by an expert in the appropriate field:
- Our HazMat (hazardous materials) team uses a Ludlum 14C survey meter and a Canberra Mini-Radiac dosimeter. Both display in (micro, milli, . . .) roentgens (R). Environmental Protection Agency (EPA) exposure guidelines are expressed in terms of rem. Can you recommend a field-expedient correlation of R to rem so that we can be sure to keep our exposures within EPA guidelines?
- Given that our Ludlum 14Cs can read in counts per minute (when using the pancake and scintillator probes), under what circumstances should we be interested in taking readings in counts per minute, as opposed to just mR h-1?
1. For instruments and dosimeters that are used to measure ionizing photon fields—i.e., gamma rays and x rays—it has been common in the United States to have instruments calibrated to read exposure (rate), for which the special unit is the roentgen, symbolized R. The growing tendency, however, is to change the calibrations and readouts of such devices so that they attempt to interpret the quantity effective dose, in sieverts or rem. The effective dose is obtained by multiplying the equivalent dose to each significantly irradiated tissue or organ in the body (the equivalent dose is the absorbed dose multiplied by a radiation weighting factor, or quality factor) by a cancer or genetic risk weighting factor for that tissue and summing all such products.
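The weighted-sum calculation just described can be sketched in a few lines of Python. The tissue weighting factors below are ICRP Publication 103 values for three tissues only, and the absorbed doses are hypothetical numbers chosen for illustration; a real effective-dose calculation sums over all of the specified tissues and organs.

```python
# Illustrative sketch of the effective-dose calculation described above.
# A real calculation sums over ALL ICRP-specified tissues; only three
# tissues (with hypothetical absorbed doses) are used here.

def equivalent_dose(absorbed_dose_gy, radiation_weighting_factor):
    """Equivalent dose H_T (Sv) = absorbed dose D (Gy) x w_R."""
    return absorbed_dose_gy * radiation_weighting_factor

def effective_dose(equivalent_doses_sv, tissue_weights):
    """Effective dose E (Sv) = sum over tissues of w_T x H_T."""
    return sum(tissue_weights[t] * h for t, h in equivalent_doses_sv.items())

w_r_photon = 1.0  # radiation weighting factor for photons
absorbed = {"lung": 0.010, "stomach": 0.010, "thyroid": 0.010}  # Gy, hypothetical
h_t = {t: equivalent_dose(d, w_r_photon) for t, d in absorbed.items()}

w_t = {"lung": 0.12, "stomach": 0.12, "thyroid": 0.04}  # ICRP 103 subset
print(f"Partial effective dose: {effective_dose(h_t, w_t):.5f} Sv")
```

Because only a subset of tissues is included, the result is a partial effective dose, not the full quantity.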
Since the effective dose cannot be measured directly with a typical survey instrument or a personal dosimeter, we use defined operational quantities to approximate it. For survey instruments, the quantity recommended by the International Commission on Radiation Units and Measurements (ICRU) to approximate the effective dose is the ambient dose equivalent, the dose equivalent determined at a 1-cm depth in an acceptable phantom (ICRU uses a 30-cm-diameter spherical phantom) irradiated by a specifically defined calibration field (referred to as expanded and aligned). Similarly, the quantity that has been used to simulate effective dose when personal dosimeters are used is the personal dose equivalent, evaluated at the 1-cm depth when an appropriate phantom is irradiated (a 30 cm x 30 cm x 15 cm deep slab phantom is typical).
Monte Carlo calculations have been done to evaluate ambient dose equivalent and personal dose equivalent, and work has been published showing the relationships among various dosimetric quantities—e.g., exposure, ambient dose equivalent, personal dose equivalent, and air kerma. For example, you can find these relationships and determined values in ICRU Report 47, Measurement of Dose Equivalents from External Photon and Electron Radiations, 1992. You can also find conversion factors to get from air kerma to personal dose equivalent in the American National Standards Institute/Health Physics Society (HPS) Standard N13.11 (2009) for various x-ray and gamma-ray energies commonly used in calibration testing. If you are a member of the HPS, you may download this document free of charge from the HPS website.
If you consult such documents you will see, for example, that the conversion factor from exposure, in R, to ambient dose equivalent at 1 cm increases from about 0.009 cSv (rem)/R at 10 keV to 0.96 at 30 keV to about 1.52 at 60 keV and then decreases to about 1.06 at 600 keV and 1.01 at 1.5 MeV. Comparable values for personal dose equivalent are 0.0085 cSv (rem)/R at 10 keV, 0.97 at 30 keV, 1.66 at 60 keV, 1.08 at 600 keV, and 0.999 at 1.5 MeV.
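As a rough field-expedient approach, the ambient-dose-equivalent factors quoted above can be tabulated and linearly interpolated in energy. The sketch below uses only the five values cited in the text; the interpolation between them is an assumption made for illustration, not a substitute for the full tables in ICRU Report 47.

```python
# Sketch: rough exposure (R) to ambient dose equivalent (rem) conversion,
# linearly interpolating the factors quoted above. Intermediate energies
# are interpolated only as a crude field estimate.

CONVERSION_H_STAR = [  # (photon energy in keV, rem per R)
    (10, 0.009), (30, 0.96), (60, 1.52), (600, 1.06), (1500, 1.01),
]

def rem_per_r(energy_kev, table=CONVERSION_H_STAR):
    """Approximate rem/R factor at a given photon energy."""
    if energy_kev <= table[0][0]:
        return table[0][1]
    if energy_kev >= table[-1][0]:
        return table[-1][1]
    for (e0, f0), (e1, f1) in zip(table, table[1:]):
        if e0 <= energy_kev <= e1:  # linear interpolation within the bracket
            return f0 + (f1 - f0) * (energy_kev - e0) / (e1 - e0)

exposure_r = 0.005                       # e.g., a 5 mR reading
dose_rem = exposure_r * rem_per_r(662)   # 662 keV: Cs-137 photon energy
print(f"{dose_rem * 1000:.2f} mrem")
```

Note that for photons above a few hundred keV the factor stays close to 1, which is why "1 R is approximately 1 rem" is often an acceptable field rule for common gamma emitters such as Cs-137 and Co-60.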
2. If your Ludlum 14C meter is equipped with a thin-window GM detector, as is commonly the case, the cpm scale is often useful for performing surface contamination measurements, especially for beta-emitting radionuclides and, in some instances, for alpha emitters. Through a proper calibration or by using the manufacturer's conversion factors, you can then approximate the extent of surface contamination from the count-rate measurements.
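The count-rate-to-contamination conversion mentioned above amounts to correcting the net count rate for detection efficiency and normalizing to the probe's active area. In the sketch below, the efficiency and probe area are illustrative placeholders; you would substitute the values from your own calibration or the manufacturer's data sheet.

```python
# Sketch: estimating surface contamination from a pancake-probe count rate.
# EFFICIENCY and PROBE_AREA_CM2 are illustrative placeholders, NOT
# calibration data; use your instrument's own calibrated values.

PROBE_AREA_CM2 = 15.5  # assumed active area of a pancake probe
EFFICIENCY = 0.20      # assumed counts registered per disintegration

def contamination_dpm_per_cm2(gross_cpm, background_cpm):
    """Net cpm -> disintegrations per minute per cm^2 of surface."""
    net_cpm = max(gross_cpm - background_cpm, 0.0)
    dpm = net_cpm / EFFICIENCY   # correct for detection efficiency
    return dpm / PROBE_AREA_CM2  # normalize to probe area

print(contamination_dpm_per_cm2(1550, 50))
```

The background subtraction matters: for low-level contamination surveys the net count rate, not the gross reading, is what the efficiency correction should be applied to.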
For gamma-ray or x-ray measurements, the count rate can be correlated with the exposure rate. When equipped with the common pancake-type GM probe, a typical conversion factor is about 3,500 cpm per mR h-1 at the 662-keV photon reference energy. If a reasonable approximation to secondary charged particle equilibrium exists with respect to electrons set free by photon interactions at the detector location, this conversion factor will apply fairly well over a relatively wide photon energy range. Unfortunately, one does not always know the extent of such equilibrium, and exposure rate readings with the thin-window detector are not always reliable. The possible lack of equilibrium can be diminished significantly by covering the thin window with a modest thickness of low-atomic-number material (most plastics are suitable, and 0.32 cm goes a long way toward satisfying the equilibrium requirement for relatively low- to moderately high-energy photons).
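Applying that factor in the field is a one-line division of the net count rate by the calibration constant. The sketch below uses the ~3,500 cpm per mR h-1 figure quoted above, which is tied to the 662-keV reference energy; results at substantially different photon energies should be treated with caution for the equilibrium reasons just discussed.

```python
# Sketch: converting a pancake-GM count rate to an approximate exposure
# rate using the ~3,500 cpm per mR/h factor quoted above (valid near the
# 662 keV Cs-137 reference energy; other energies need care).

CPM_PER_MR_PER_H = 3500.0  # pancake-probe conversion factor at 662 keV

def exposure_rate_mr_per_h(count_rate_cpm, background_cpm=0.0):
    """Net count rate divided by the calibration factor gives mR/h."""
    net = max(count_rate_cpm - background_cpm, 0.0)
    return net / CPM_PER_MR_PER_H

print(f"{exposure_rate_mr_per_h(7000, background_cpm=50):.2f} mR/h")
```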
If the scintillation probe you refer to is the common NaI(Tl) type, this detector provides high gamma sensitivity but shows a strongly energy-dependent response. This makes the detector difficult to use for exposure or dose-related measurements unless you know the specific energy characteristics of the photon field and can make proper adjustments, accounting for variations in efficiency, to translate count rate to exposure or dose rate. Doing so usually requires feeding the signal to a gamma energy spectrometer and using specialized software to define the contributions of different-energy photons, which is beyond the capability of the 14C meter. Thus, the 14C with an NaI(Tl) probe is a great instrument for identifying the presence of gamma-emitting radionuclides, but it is generally not very useful for exposure/dose measurements. The count rate provides a qualitative and semiquantitative measure of the presence and relative significance of gamma-emitting contamination in various locations.
I hope the above answers most of your questions.
George Chabot, PhD