Infrared vs. Catalytic Bead Technology: Pros and Cons
About the author:
Keith Rhodes is director of sales and service for Oldham Americas.
“We need the highest level of protection and lowest long-term cost of ownership. Do I use infrared or catalytic bead technology to detect combustible gases?”
This is a question I hear often from customers trying to decide which technology will give them the best gas detection coverage for their industry.
Catalytic Bead Technology
Historically, catalytic bead technology has dominated the market. It is inexpensive to manufacture, and if properly designed, it offers excellent T50 and T90 response times to the target gases. It is a proven design that has been around for years, and it utilizes the Wheatstone bridge principle. In layman's terms, when the active bead is exposed to gas, it burns the gas and rises in temperature, which increases the bead's electrical resistance. That resistance change is compared to a reference bead, which is inert and does not react with the atmosphere. The resistance change is linear with gas concentration (the greater the change, the higher the concentration).
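The bridge-to-concentration relationship described above can be sketched in a few lines of Python. This is an illustrative simplification, not the circuitry of any particular instrument: the function name, voltage values and span factor are hypothetical, and a real transmitter applies temperature compensation and other corrections.

```python
def percent_lel(v_bridge_mv, v_zero_mv, span_mv_per_lel):
    """Convert a Wheatstone bridge imbalance voltage to %LEL.

    Assumes the linear response described in the text: the active
    bead's resistance rise, and hence the bridge output voltage,
    is proportional to gas concentration.
    """
    # Subtract the zero (clean-air) offset, then scale by the span
    # factor established during calibration with a known test gas.
    return (v_bridge_mv - v_zero_mv) / span_mv_per_lel

# Example: a 12.5 mV bridge output over a 0.5 mV clean-air offset,
# with a hypothetical span of 0.24 mV per %LEL
reading = percent_lel(12.5, 0.5, 0.24)  # → 50.0 %LEL
```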
There are some advantages to catalytic bead technology. The cost of catalytic bead sensors is low, and the catalytic bead offers a versatile range of gas detection. This technology has the ability to detect most combustible gases, including hydrogen. In short, it will detect most hydrocarbon and non-hydrocarbon combustible gases. But as good as catalytic bead technology is, it has a few disadvantages that one must understand. First, catalytic bead sensors are susceptible to poisoning from silicates, which can attach to the active bead over time and create a coating that isolates it from the atmosphere. The result is a detector that will provide a 4 mA output but will not detect gas, creating a false sense of security. This is why it is imperative to perform quarterly zero and span calibrations.
The second common disadvantage of catalytic bead technology is oversaturation from high concentrations of gas. Concentrations at or above 100% of the lower explosive limit (LEL) can damage a sensor unless some method is employed to protect the catalytic bead after it has gone into an over-range state. Another potential result of oversaturation is displacement of oxygen, which causes the sensor output to decrease, creating the perception that gas levels are falling when, in actuality, the environment is enriched.
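One common protection scheme is to latch an over-range state, de-energizing the bead and holding the alarm until an operator intervenes. The sketch below illustrates that idea only; the class, threshold and reset behavior are assumptions for illustration, not a description of any specific product.

```python
class OverRangeLatch:
    """Latches when the reading exceeds 100 %LEL and stays latched
    until manually reset, modeling bead-protection behavior."""

    def __init__(self, limit_lel=100.0):
        self.limit = limit_lel
        self.latched = False

    def update(self, reading_lel):
        # Once over-range is seen, remain latched even if the reading
        # drops again (e.g. due to oxygen displacement).
        if reading_lel >= self.limit:
            self.latched = True
        # True means the bead may stay energized; False means cut power.
        return not self.latched

    def reset(self):
        """Operator acknowledgment after the atmosphere is verified safe."""
        self.latched = False
```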
Lastly, catalytic bead sensors will need to be replaced periodically. Due to all the potential environmental risks to the sensor, catalytic bead technology requires routine zero and span calibration to ensure the instruments are operating properly. Oldham recommends quarterly calibrations as the minimum standard to ensure the equipment is operating properly and providing the level of protection people deserve.
Infrared Technology
While combustible infrared (IR) sensors are not new to the market, they have grown in popularity in the last five to ten years as the technology has improved. Modern infrared technology addresses most of the shortcomings of catalytic bead technology, with few exceptions. Combustible IR detectors operate on the principle of gas absorption, utilizing an IR light source along with a measurement detector and a reference detector. This is the base design; manufacturers layer sophisticated optics and advanced algorithms on top of it to ensure reliable results. In simple terms, the detector monitors two wavelengths: an active wavelength that the target gas absorbs and a reference wavelength that it does not. The signal strength at the active wavelength is compared to the reference, and that comparison is fed into algorithms that produce a linear output of the gas concentration.
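As a rough illustration of the active/reference comparison, the sketch below uses a simplified Beer-Lambert absorption model. The function name and constants are hypothetical; real detectors use manufacturer-specific linearization and compensation algorithms.

```python
import math

def ir_concentration(i_active, i_reference, ratio_zero, k):
    """Estimate gas concentration from a dual-wavelength IR detector.

    Assumes absorbance at the active wavelength grows with gas
    concentration while the reference wavelength is unaffected.
    ratio_zero is the active/reference intensity ratio in clean air;
    k is a hypothetical per-gas calibration constant.
    """
    # Ratioing against the reference channel cancels effects common
    # to both wavelengths, such as lamp aging or dirty optics.
    ratio = i_active / i_reference
    absorbance = math.log(ratio_zero / ratio)
    return absorbance / k

# In clean air the ratio equals ratio_zero, so the reading is zero:
clean_air = ir_concentration(1.0, 1.0, 1.0, 0.02)  # → 0.0
```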
There are many advantages to this technology. First, one cannot oversaturate the sensor. High concentrations of gas have no effect on an IR detector. Second, most properly designed IR detectors require less frequent calibration, if any. Some only require a biannual or annual zero check, with no span calibration required. Maintenance is greatly reduced with IR, but it is not nonexistent.
Third, IR detectors are immune to poisons, such as silicates or high concentrations of H2S, that will damage a catalytic bead sensor. Fourth, they are highly reliable and almost failsafe. Modern IR detectors in general will inform you when there is a problem, such as obscuration of the light source, light source failure or detector failure. The chances of an IR detector failing while still providing a zero or 4 mA output are almost nil.
The primary drawback to this technology is that IR detectors typically have higher upfront costs than catalytic bead models. Additionally, they can only detect hydrocarbons, and even then will not detect exotic chlorinated or fluorinated hydrocarbons, for example. IR technology also will not detect hydrogen. IR detectors are essentially gas specific, and each detectable hydrocarbon has its own unique gas curve.
The output is linear for that specific gas at the full range and over a temperature curve. For a general hydrocarbon detector, one must understand all the potential gases that need to be detected and determine with a supplier if the detector will register all the gases on the safe side of the combustion curve.
Both of these technologies offer pros and cons that must be weighed with regard to the application, the environment and the cost of ownership specific to each customer. Ensure that you select the best technology for your application by choosing gas detection experts to advise and assist you.