Testimony for the Record
By George E. Parris, Ph.D.
Director of Environmental and Regulatory Affairs
American Wood Preservers Institute
Senate Committee on Environment and Public Works
October 3, 2000
Issues Concerning Risk Analysis, Use of Comparative Risk in Policy Making, and Balancing Risks, Costs and Benefits

Preface

We are honored to address the Senate Committee on Environment and Public Works. We are also honored to speak for the American Wood Preservers Institute, which represents an industry of many small businesses that has been instrumental in the economic growth of this country by making railroads, marine shipping, rural electric utilities, telecommunications and house construction economically feasible while conserving our forest resources.

The American Wood Preservers Institute strongly supports the use of quantitative risk assessment and risk-cost-benefit analysis in regulatory decision making. Without these inputs from the real world, regulatory decisions would be based on intuition and political considerations. However, risk assessment methodology is evolving and must reflect our current understanding of science, and cost and benefit analyses must consider both direct and indirect impacts (inside and outside the regulated community). When applied objectively, risk assessment and risk-cost-benefit analyses are the foundation of rational policy, regulations, and standards. Unfortunately, when obsolete risk assessment techniques are applied and costs and benefits are selectively chosen for consideration, politically motivated policy, regulations, and standards can masquerade as unbiased and rational.

Part I. The Role and Value of Risk-Cost-Benefit Analysis in Decision-Making

The regulations and standards for protection of human health and the environment that are promulgated to implement environmental statutes (1) must use scientifically valid methodologies, (2) must be applied consistently so that decision-makers apply finite resources in the most efficient way, and (3) must be reasonable in terms of balancing risks, costs and benefits (i.e., they must meet a reasonable threshold of utility).

The Congress (not bureaucrats or political appointees of the Executive branch) must set broad policy guidelines for (1) acceptable risk, (2) equitable allocation of finite resources to reduce risks, and (3) minimum requirements for risk reduction per unit of money spent by government or the private sector to justify action.

The wood preserving industry provides examples of the types of problems that we believe are currently common in environmental laws, regulations, standards, and enforcement policies. I will address some of these below.

Part II. The Methodology of Risk Assessment Must be Scientifically Valid

The risk of contracting fatal cancer underpins most of our environmental standards and the public perception of the risk of many activities and products. Since most of the communicable diseases that killed millions of children and adults in previous generations have been tamed by better hygiene and antibiotics, cancer has become the leading unpredictable and uncontrolled risk in most people's lives. In turn, quantitative risk assessment methodology has been introduced in an attempt to make the risk of cancer predictable and to provide a rational basis for controlling that risk.

However, the risk assessment methodology that is still used today was invented primarily between 1930 and 1970 at a time when our understanding of the nature and cause of cancer was very rudimentary. I want to briefly recount the history of our quantitative risk assessment methodology and show how it needs to be changed to reflect current scientific knowledge in biochemistry and genetics (see appendix).

It should be noted that the regulatory agencies have clung tenaciously to the original concepts in spite of growing evidence that they are inapplicable in most cases. The exercise of prudence is certainly a desirable trait in a regulatory agency, but the credibility of the regulatory agencies (particularly the U.S. Environmental Protection Agency) is being eroded among knowledgeable observers because they are currently so far behind the state of the art.

My industry is currently affected by EPA risk assessments for dioxins and arsenic. We believe that the methodology used in these risk assessments is fundamentally flawed (as discussed below), resulting in risk projections so large that the EPA is being led to actions that will unnecessarily limit the use of our products and cause unnecessary alarm among the general public.

Clearly Many Carcinogens Should Not be Assessed by the Target Theory

(1) The Fundamental Requirement of the Target Theory is Not Met

The fundamental requirement of the target theory (linear no-threshold [LNT] model) of mutagenicity/carcinogenicity is direct damage to the DNA, independent of any biochemical system. Few chemicals have been shown to damage DNA directly (e.g., by forming adducts with it). Some chemicals have been shown to form adducts with DNA after being metabolized to active electrophiles or free radicals. Unless such reactions are demonstrated in vitro, there is no scientific justification for applying the target theory (linear no-threshold model) of risk assessment.

(2) The Target Theory Ignores Chemical and Physical Modulation of Dose Efficiency

Most chemical and physical barriers that stand between a chemical as dosed and the DNA target of mutation are progressively less effective as the dose of the chemical increases. In many cases, there may be a threshold below which a chemical agent never reaches the DNA. Thus, we would expect most dose-response curves to be non-linear, i.e., sub-linear.

(3) The Target Theory Ignores DNA Repair

The existence of DNA repair mechanisms means that even for direct-acting genotoxic agents (e.g., x-rays), the dose-response curve should be sub-linear at low doses. Since DNA repair occurs after a damaging event, it would not produce an absolute threshold, but the quantitative difference between a true threshold (i.e., zero risk) and a sub-linear dose-response curve may be unmeasurable.

(4) The Target Theory Incorrectly Assumes that Mutant Cells Progress to Cancer

Many cell clones in the body contain mutations, yet they normally do not progress to cancer because (i) they are not immortal and hence become extinct before clinically significant tumors develop, or (ii) they are triggered to undergo apoptosis by proteins that scan the DNA.
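To make the contrast among these models concrete, the competing dose-response forms can be sketched as follows (an illustrative formulation of my own; the symbols are not drawn from any regulatory guidance). The linear no-threshold form makes excess risk proportional to dose all the way to zero:

\[ P_{\mathrm{LNT}}(d) = \beta d. \]

A sub-linear form, of the kind expected when barriers and repair remove most damage at low doses, might look like

\[ P_{\mathrm{sub}}(d) = \alpha d^{2}, \]

so that the risk per unit dose, \( P_{\mathrm{sub}}(d)/d \), vanishes as \( d \to 0 \). A true threshold form assigns zero excess risk below a threshold dose \( d_{0} \):

\[ P_{\mathrm{thr}}(d) = \begin{cases} 0, & d \le d_{0} \\ \beta\,(d - d_{0}), & d > d_{0}. \end{cases} \]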

Most Carcinogens are Threshold Carcinogens

There are many mechanisms through which a chemical can indirectly cause genotoxicity that leads to cancer. Even in those cases where direct genotoxicity has been observed, indirect mechanisms may be the principal contributor to genotoxicity at high doses. The importance of recognizing this fact, and of building our risk assessment policy around it, is that in those cases there is a threshold of exposure below which the risk is effectively zero. By managing exposures (through appropriate regulation) so that total exposure stays below the appropriate threshold, a very high level of health protection can be provided with great flexibility in compliance, leading to health, environmental, and economic benefits.

In practice, the thresholds can be established in two ways: (1) epidemiologically (bioassay and environmental) and (2) biochemically. The biochemical approach involves determining the in vitro threshold of genotoxicity (it is not necessary to determine the mechanism of action) using appropriate tissue cultures, and determining the attenuation factor between the environmental medium and blood plasma (e.g., what concentration in drinking water is required to give a concentration in blood plasma equal to the genotoxic threshold determined in vitro).
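A minimal sketch of that back-calculation follows; all numbers and names in it are hypothetical placeholders, not measured data.

```python
# Sketch: back-calculating a drinking-water threshold from an in vitro
# genotoxic threshold and an experimentally determined attenuation factor.
# All values are hypothetical placeholders.

def water_limit(plasma_threshold_ug_per_l: float,
                attenuation_factor: float) -> float:
    """Drinking-water concentration that would produce a blood plasma
    concentration equal to the in vitro genotoxic threshold.

    attenuation_factor = (plasma concentration) / (water concentration);
    values below 1 mean the body attenuates the chemical before it
    reaches the plasma.
    """
    return plasma_threshold_ug_per_l / attenuation_factor

# Hypothetical example: genotoxicity first appears in tissue culture at
# 500 ug/L, and only 5% of the water concentration reaches the plasma,
# so the corresponding drinking-water threshold is 10,000 ug/L.
print(water_limit(plasma_threshold_ug_per_l=500.0, attenuation_factor=0.05))
```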

Part III. The Cost and Benefit Analyses Must Consider Direct and Indirect Impacts

(1) Comparative Risk Assessment

The Environmental Protection Agency has excused itself from explicit compliance with the National Environmental Policy Act (NEPA) on the grounds that rulemakings by the USEPA are equivalent to the analysis required in an Environmental Impact Statement (EIS) under NEPA. Unfortunately, that policy has resulted in the programs implemented by the USEPA, including the regulation of waste (RCRA), water (SDWA), and air (CAA) and the remediation of environmental contamination (CERCLA), taking unexpected and undesirable directions. Basically, the USEPA has developed implementation approaches that focus on only one element of what is actually a multi-component situation. I will discuss the environmental remediation program under CERCLA (Superfund) as an example.

Basically, the decisions concerning (1) whether or not to remediate and (2) what the target final concentration of contaminant should be are driven either by compliance with applicable or relevant and appropriate requirements (ARARs) or by a toxicological risk assessment that addresses the potential risk to persons who may (or may not) actually live on the contaminated land, may (or may not) actually drink the groundwater, etc. Notice that all the focus is on the hazards and risk (usually toxicological) caused by the chemical in the contaminated media. Once the chemical risk (usually carcinogenic risk) to the people who are exposed (or who may possibly be exposed in the future) is deemed to be above a target set by policy (e.g., 40 CFR 300.430), the process is committed to a course of action that requires remediation.

There is a major flaw in this approach. Fundamentally, the decision-making guided by the regulation disregards all other risks and impacts of the proposed action. If you look at environmental remediation projects, what you discover is that they are basically construction projects. They involve drilling wells, pumping water, excavating soil, hauling soil and debris from place to place (often on public roads), constructing and operating treatment systems, etc. Even setting aside the chemical or radiological hazards these projects pose to workers and the general public (including the potential hazards to people living near landfills where waste may be disposed of off-site), the construction work and transportation alone are risky operations, especially when the protective clothing worn to shield workers from chemical risks is factored in, because it usually increases heat load, restricts vision, and makes the worker more awkward. Construction projects, of course, also involve environmental impacts (e.g., damage to habitat or taking of wildlife).

It would be desirable to have a more balanced weighting of risks in the CERCLA program so that the presumption that remediation is always preferred can be examined.

Another example where narrow focus on one element of risk can produce undesirable conclusions is associated with the use of wood preservatives. For example, unnecessarily limiting the use of a wood preservative because of erroneous risk assessments might result in the harvesting of more trees (depletion of natural resources) and more accidents and deaths in the logging and sawmill industry.

(2) Balancing Risks, Costs and Benefits

There should be some reasonable analysis of the costs, risks, and benefits associated with regulations and standards. The wood preserving industry has provided clear examples of poor risk-cost-benefit decision-making by the USEPA. In the 1992 budget process, it was pointed out that the hazardous waste regulations for wood preservatives cost $5.7 trillion per avoided premature death! This set a record for inefficient use of our financial resources. Viewed differently, spending $2 million on highway safety saves at least one life within a few years, but spending the same amount on controlling waste from wood preserving does not save a life over a million years. Obviously, the benefits of such regulations are insignificant, while the costs are very real.
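The arithmetic behind that comparison can be sketched in a few lines; the inputs are simply the two figures quoted above.

```python
# Sketch: comparing the two cost-per-life figures quoted above.

highway_cost_per_life = 2e6        # $2 million per life saved (highway safety)
wood_waste_cost_per_life = 5.7e12  # $5.7 trillion per avoided premature death

# Lives that could be saved through highway safety for the cost of one
# avoided death under the wood preserving waste regulations:
lives_foregone = wood_waste_cost_per_life / highway_cost_per_life
print(f"{lives_foregone:,.0f} highway lives per one waste-rule life")
# Prints 2,850,000.
```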

In the case of the current debate over the regulation of arsenic in drinking water, it should be embarrassing for USEPA regulators to attend international conferences where scientists from West Bengal and Bangladesh describe real, widespread, overt clinical signs of chronic arsenic intoxication from drinking water with arsenic levels approaching 1,000 ppb, and then to argue that the United States should spend billions of dollars to reduce the concentration of arsenic in drinking water from 50 ppb to 5 ppb, when the benefits of such a move are at best hypothetical. The only cases that the USEPA claims show any damage at the current regulatory standard in the U.S. are actually among people who are not covered by the current standard and would not be covered by the new standard, because they are on private wells.

Interestingly, the USEPA cost estimates for implementation of the currently proposed maximum contaminant level (MCL) for arsenic address only some of the most obvious costs (e.g., treatment equipment) to the "regulated community," i.e., public water utilities. The cost of acquiring land on which to build a treatment plant and the cost of waste disposal are severely underestimated or ignored by the USEPA. Moreover, entire classes of economic impacts are not even mentioned in the USEPA's proposal. For example, MCLs are routinely adopted as applicable or relevant and appropriate requirements for CERCLA remediation, and in this way the new MCL for arsenic will affect the wood treating industry.

Even private real estate values may be affected by the new arsenic MCL. Suppose that you are a private landowner with a well containing 40 ppb of arsenic from natural sources. Currently, if you sell your house, there is no issue about the purity of the water. But if the MCL for arsenic were reduced to 5 ppb, you would need to disclose to prospective buyers that your well was "contaminated." Moreover, curing this "defect" by installing a point-of-use treatment device would cost (by the USEPA's estimates) several hundred dollars each month, forever. Considering that typical mortgage payments (principal and interest) are $1,000 to $2,000 per month, an additional burden of, say, $300 per month will necessarily reduce the value of the real estate by 15-30%. The implication for property values in large parts of the western U.S., Michigan, Wisconsin, Minnesota, and other states is staggering.
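A minimal sketch of that capitalization arithmetic follows; it assumes, as the argument above does, that a permanent monthly treatment cost reduces what a buyer can pay in proportion to the monthly mortgage payment.

```python
# Sketch of the capitalization arithmetic used above: a perpetual monthly
# treatment cost is compared with the monthly mortgage payment, and the
# home's value is assumed to fall in the same proportion.

def value_reduction(treatment_per_month: float,
                    mortgage_per_month: float) -> float:
    """Fractional reduction in the affordable price of the house."""
    return treatment_per_month / mortgage_per_month

for mortgage in (1000.0, 2000.0):
    loss = value_reduction(300.0, mortgage)
    print(f"${mortgage:,.0f}/month mortgage -> {loss:.0%} reduction")
# Prints 30% and 15%, matching the 15-30% range cited above.
```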

Set aside the proposed MCL for arsenic for a moment and focus on the underlying risk assessment. Heretofore, the USEPA has used similar risk assessments to estimate the risks associated with soil contamination and pesticide residues (which have exposure scenarios that have nothing to do with water). But because the USEPA retained a drinking water standard that was consistent with a much lower level of concern about arsenic, the risk assessment per se was not burdensome. Now that the USEPA is accepting the risk assessment as the driver for regulation of drinking water, the implications for other modes of exposure are enormous. The risk assessment will be used in the waste programs, the pesticide programs, and the air programs as well as the water programs.

Finally, in the context of risk-cost-benefit analysis, I would like to report on arsenic contamination of the Madison River in Wyoming. At the source, the concentration of arsenic is 360 micrograms per liter, and the pollution can be traced at least 470 km downstream to where the Madison River joins the Missouri River, where the concentration of arsenic is still 19 micrograms per liter. I am sure that if this were caused by a wood preserving plant, our industry would be expected to pay whatever cost was necessary to stop this flow of arsenic. But the source of the arsenic (and other toxic metals) is Yellowstone National Park.

Appendix

History of the Evolution of Quantitative Risk Assessment for Cancer

-- 1850. Recall that the modern (achromatic) microscope was not developed until the early 1800s; it led to the discovery of cells and the identification of chromosomes (i.e., condensed forms of DNA visible under a microscope during the cycle of cell division).

-- 1870. The concept of inheritable traits was introduced in the period 1850-1870 by Gregor Mendel (1822-1884).

-- 1890. The role of microbes in causing some diseases was not understood until 1877-1887, through the work of Louis Pasteur (1822-1895).

-- 1890. Cancer was recognized as a disease in the 1800s, but it was not until the late 1800s that exposure to specific chemical substances was associated with the increased incidence of some cancers.

-- 1910. In the period 1890-1910, the work of Paul Ehrlich established the concept of dose-response relationships for pharmacology and toxicology. Much of this research also involved arsenic(III), the basis of some of the few effective drugs then available against sleeping sickness (a protozoal disease) and syphilis. Invariably, for a specified biological end point (e.g., acute toxicity) there were thresholds below which no effects were observed, and the responses of individual members of the exposed population tended to fall into a bell-shaped curve (normal distribution). Integration of this bell-shaped curve produces an S-shaped curve known as the dose-response relationship (i.e., the probability of an effect at a specific dose).
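In modern notation this relationship is usually written with a log-normal tolerance distribution; the following is my own sketch of the standard probit formulation, not language from the original studies:

\[ R(d) = \Phi\!\left(\frac{\log d - \mu}{\sigma}\right), \qquad \Phi(z) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{z} e^{-t^{2}/2}\,dt, \]

where \( R(d) \) is the fraction of the exposed population responding at dose \( d \), the bell-shaped curve is the distribution of individual tolerances (with mean \( \mu \) and spread \( \sigma \) on a log-dose scale), and its integral \( \Phi \) is the S-shaped dose-response curve.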

-- 1920. It was not until the period 1908-1918 that chromosomes were linked to inheritable traits and mutations, by Thomas Hunt Morgan (1866-1945).

-- 1935. During the early 1900s, radioactivity was intensely studied, including the effects of x-rays on tissue. It was observed that (within the range of exposure considered) the frequency of mutations caused by x-rays was linearly related to the total dose (intensity × duration), not to the intensity or the duration of exposure alone. This observation gave rise to the erroneous idea that (for x-rays) intensity did not matter and only the total dose needed to be considered.

-- 1940. "Target theory" became the accepted way to predict damage to chromosomes (and hence mutations) through bombardment with agents such as x-rays. This was consistent with the notions held at the time that (1) chromosomes normally were never damaged; (2) if chromosomes became damaged, they were never repaired; and (3) any damage to chromosomes would result in either immediate death of the cell or a mutant cell line.

-- 1953. It was not until 1953 that the structure of the DNA making up chromosomes was explained by Watson and Crick.

-- 1961. Normal cell clones were shown to be mortal (they normally die after a set number of replications, i.e., generations) and cancer cell clones were shown to be immortal, by Leonard Hayflick.

-- 1962. Silent Spring was published by Rachel Carson, provoking systematic public fear of manmade chemicals.

-- 1970. The process of DNA repair was discovered and articulated by J.E. Cleaver and others while working on the cause of the inheritable disease xeroderma pigmentosum.

-- 1970. The U.S. Environmental Protection Agency was founded and began establishing policies concerning management of the risk posed by manmade (industrial) chemicals. The target theory of mutations was accepted and extrapolated in two ways: (i) it was applied across the board to chemicals (not just high-energy radiation), and (ii) the theory was assumed to relate to cancer as well as mutations. Note that high-energy radiation can go directly to chromosomes without passing through the chemical or physical defenses present in the body; chemical agents must pass through both chemical and physical barriers before reaching the nucleus and the DNA.

-- 1971. Recombinant DNA was discovered.

-- 1972. Apoptosis (programmed death and recycling) of stressed and damaged cells was articulated by Kerr, Wyllie, and Currie.

-- 1992. Chromosome telomere shortening during DNA synthesis was related to cell clone mortality.

-- 2000. Formulation of a new model for cancer risk assessment is in progress.