By Keith G. Bircher, Calgon Carbon Corporation
UV reactors are traditionally validated by determining the Reduction Equivalent Dose (RED) that a reactor will achieve as a function of flow, UV transmittance (UVT), and lamp UV output under various operating conditions. RED is determined by relating the log inactivation of a test microbe to a dose value via that microbe's UV dose-response curve, which is measured in the lab with a collimated beam apparatus. Because UV reactors deliver a distribution of doses rather than a single dose, the RED depends on the UV sensitivity of the organism being treated. For example, if a UV reactor has a relatively wide dose distribution, the RED determined using a relatively insensitive organism, such as MS2 phage, can be as much as twice the RED obtained using a more sensitive organism, such as T1 phage.
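This dependence can be illustrated with a simplified model. The sketch below assumes a discrete dose distribution and log-linear (first-order) inactivation kinetics, i.e. log10 reduction = k × dose; the dose values and the inactivation constants for MS2 and T1 are illustrative round numbers, not published dose-response data.

```python
import math

def red(doses, weights, k):
    """RED (mJ/cm^2) for a discrete dose distribution, assuming
    log-linear inactivation: log10 reduction = k * dose.
    Survival is averaged over the distribution, then converted
    back to an equivalent single dose."""
    survival = sum(w * 10 ** (-k * d) for d, w in zip(doses, weights))
    return -math.log10(survival) / k

# Hypothetical wide dose distribution: half the water receives
# 5 mJ/cm^2 and half receives 40 mJ/cm^2
doses, weights = [5.0, 40.0], [0.5, 0.5]

k_ms2 = 0.05  # illustrative: ~1 log per 20 mJ/cm^2 (insensitive)
k_t1 = 0.25   # illustrative: ~1 log per 4 mJ/cm^2 (sensitive)

print(red(doses, weights, k_ms2))  # ~10.9 mJ/cm^2
print(red(doses, weights, k_t1))   # ~6.2 mJ/cm^2
```

The low-dose tail of the distribution dominates the surviving fraction of a sensitive organism, so its RED is pulled down; for this hypothetical wide distribution the MS2-based RED is roughly 1.75 times the T1-based RED, consistent with the "as much as twice" figure above.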
The UV Disinfection Guidance Manual (UVDGM) addresses this with an RED Bias factor that accounts for the difference in UV sensitivity between the test microbe and the target pathogen. To ensure public health protection, the RED Bias mandated by the UVDGM is based on a near worst-case UV reactor, that is, one with a relatively wide dose distribution. This conservative approach penalizes UV reactors with narrow dose distributions and, by extension, costs end users money.
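The effect of dose-distribution width on this bias can be sketched with the same simplified log-linear model, taking the bias as the ratio of the test-microbe RED to the target-pathogen RED. All distributions and inactivation constants below are hypothetical, chosen only to contrast a wide distribution with a narrow one; this is not the UVDGM's tabulated RED Bias procedure.

```python
import math

def red(doses, weights, k):
    """RED for a discrete dose distribution under log-linear kinetics
    (log10 reduction = k * dose); a simplified illustration only."""
    survival = sum(w * 10 ** (-k * d) for d, w in zip(doses, weights))
    return -math.log10(survival) / k

# Hypothetical distributions with the same mean dose (~22.5 mJ/cm^2)
wide = ([5.0, 40.0], [0.5, 0.5])
narrow = ([22.0, 23.0], [0.5, 0.5])

k_test, k_target = 0.05, 0.25  # illustrative: insensitive test microbe,
                               # more sensitive target pathogen

for doses, weights in (wide, narrow):
    bias = red(doses, weights, k_test) / red(doses, weights, k_target)
    print(round(bias, 2))
```

For the wide distribution the ratio comes out near 1.75, while for the narrow distribution it is close to 1.0, which is why a bias factor derived from a near worst-case (wide-distribution) reactor over-penalizes reactors with narrow dose distributions.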
As described in the UVDGM, the RED determined during validation can be modeled as a function of the flow rate, the UV sensor reading, and the UVT of the water. This reactor-specific algorithm can then be used for UV dose monitoring during operation at a water treatment plant. The algorithm, however, does not account for the UV sensitivity of the test microbe.
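A minimal sketch of what such a reactor-specific algorithm might look like is shown below, assuming a hypothetical power-law form fitted in log space, with the sensor reading standing in for the combined effect of lamp output and UVT. The functional form and the coefficients `a`, `b`, `c` are illustrative assumptions, not values from any actual validation.

```python
import math

def predicted_red(flow, sensor, a, b, c):
    """Evaluate a hypothetical dose-monitoring model of the form
        log10(RED) = a + b*log10(flow) + c*log10(sensor)
    where flow is the flow rate and sensor is the UV sensor reading.
    In practice a, b, c would be fitted to the reactor's validation
    data; the values below are purely illustrative."""
    return 10 ** (a + b * math.log10(flow) + c * math.log10(sensor))

# Illustrative coefficients: RED falls as flow rises (shorter residence
# time) and rises with the sensor signal (more UV reaching the water)
a, b, c = 1.0, -0.8, 1.1

print(predicted_red(10.0, 50.0, a, b, c))
print(predicted_red(20.0, 50.0, a, b, c))  # higher flow, lower RED
print(predicted_red(10.0, 80.0, a, b, c))  # stronger signal, higher RED
```

Whatever the fitted form, the model returns the RED of the validation test microbe; because the inputs carry no information about organism sensitivity, the reported dose inherits that microbe's bias, which is the gap the RED Bias factor is meant to close.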