Both the great Truths and the great Falsehoods of the twentieth century lie hidden in the arcane, widely inaccessible, and seemingly mundane domain of the radiation sciences

Monday, January 25, 2010

Background Reading: 2


What follows is an excerpt from my book A Primer in the Art of Deception: The Cult of Nuclearists, Uranium Weapons and Fraudulent Science (www.du-deceptions.com). It provides background information that will allow the reader to follow what is to come in future postings. It is reproduced from the chapter entitled Radiation Safety in Its Infancy: 1895-1953.


Radiation Safety After the War


While the Second World War was being fought, the work of both the US Advisory Committee on X-Ray and Radium Protection and the ICRP lapsed into inactivity. During their absence from the scene, the nuclear sciences underwent a revolution. The meaning and implications of “radiation safety” before the war had little to do with the new realities in existence by war’s end. In the 1930s, issues of radiation safety revolved around establishing exposure limits, primarily for patients and medical personnel. In the post-Manhattan Project world, radiation safety had to encompass the burgeoning nuclear industry as well as the potential exposure of the entire population to radioactivity released into the environment. These new realities reinforced the implication, inherent in the concept of “permissible dose,” that what was deemed an acceptable risk was a judgment call made by members of regulatory agencies, and that members of society had to accept an element of risk to their own health for nuclear technology to flourish. Defining exactly what constituted acceptable risk to the general populace, however, was never a topic of public debate. It was left in the hands of the few charged with developing radiation protection standards who, needless to say, were people directly involved in the development of weapons of mass destruction or intimately associated with such people. And it is in their hands that radiation safety has remained up until today.


With the quantum leap in the amount of radioactive material present in the human domain after the war, new standards of safety were urgently needed, but a temporary void existed as to what organization would develop them. The Atomic Energy Commission came into being on August 1, 1946, and took charge of all the facilities and all of the nuclear materials of the Manhattan Project. Twenty days later, Lauriston Taylor revived the US Advisory Committee and began a vigorous campaign to have that organization recognized as the voice of authority on radiation protection in the United States. Taylor’s advocacy succeeded. The first meeting of the Committee was convened with the intention of initiating revision of the National Bureau of Standards Handbook 20, X-ray Protection. At that meeting, the decision was made to adopt a new name, the National Committee on Radiation Protection (NCRP). (When the NCRP became a US Congressional Charter Organization in 1964, its name changed again to the National Council on Radiation Protection and Measurements.) The decision was also made that membership should be extended beyond those with an interest in the medical application of radiation to include representatives from all organizations that had a vested interest in furthering standards for radiation protection. As reconstituted, the committee consisted of eight representatives from various medical societies, two from manufacturers of x-ray equipment, and nine from government agencies, including the Army, Navy, Air Force, National Bureau of Standards, Public Health Service, and Atomic Energy Commission. As time passed, the NCRP evolved into an organization of tremendous influence. The recommendations it propounded, along with those of the ICRP, became the basis of federal, state, and local statutes for managing radiation hazards.


From the outset, a codependent relationship developed between the Atomic Energy Commission, the agency that managed the nation’s nuclear program, and the NCRP, the organization that recommended standards of safety. Soon after the formation of the two organizations, the AEC began exerting pressure on the NCRP to formulate permissible dosages for workers in the nascent nuclear industry. This was required not only to ensure worker safety but also to protect the AEC from future liability. To legitimize the conditions in its facilities, the AEC needed backing from a respected scientific organization that had all the appearances of being independent. At the same time, it had to ensure that standards of safety were not set so stringently that they would hamper the development of the nation’s nuclear program. To seduce the NCRP into providing these services, the AEC first offered to accord the committee semiofficial status as a regulatory body if it would quickly publish standards. This offer was turned down. According to Taylor, the AEC then promised financial aid “‘after we had demonstrated that we could do something for them’” [1]. Despite its desire to maintain the appearance of being an independent agency, the NCRP was in a hopelessly incestuous relationship with the AEC. Half its members were government representatives. A great deal of the information it required to carry out its work was classified top secret, and access could be attained only through AEC clearance. And the AEC was the chief beneficiary of the committee’s work. Further, the NCRP was not able to maintain its financial independence: the AEC footed the tab for part of the NCRP’s administrative and travel expenses.


In the years that followed its initial establishment, the NCRP received funding from many other sources. Karl Morgan, a health physicist during the Manhattan Project and participant on the NCRP, was outspoken on the influence these sources had on the development of radiation protection standards:


"A cursory glance at the National Council on Radiation Protection (NCRP), which set radiation protection standards in the United States, sheds light on whose hand fed those who set levels of permissible exposure. Past sources of income for the NCRP included the DOE [Department of Energy], Defense Nuclear Agency, Nuclear Regulatory Commission, US Navy, American College of Radiology, Electric Power Institute, Institute of Nuclear Power Operations, NASA, and the Radiological Society of North America. In truth, the NCRP relies upon the nuclear-industrial complex for most of its funding other than income from publication sales. Trust me, this fact does not escape NCRP members when they set standards for radiation exposure" [1].


When the NCRP got down to work after the war, its first order of business was to establish new radiation standards and to formulate policies for the new nuclear industry on such matters as the safe handling of radioactive material, environmental monitoring, the disposal of radioactive waste, and so forth. To pursue the necessary lines of research, eight subcommittees were established. In this way, many former scientists of the Manhattan Project came on board as advisors in the establishment of safety standards. The most important of the subcommittees formed were Subcommittee One, charged with reevaluating the currently accepted standard for radiation received from outside the body through x-ray and gamma-ray exposure, and Subcommittee Two, whose agenda was to formulate new standards for internal contamination by the plethora of radionuclides that had been born into the world in the nuclear reactors of the Manhattan Project.


Subcommittee One was headed by Gioacchino Failla, a physicist at Memorial Hospital in New York. The work of this committee focused on the accumulating evidence that the 1934 tolerance dose of 0.1 roentgen (0.1 rem) of x-ray/gamma irradiation per day was too high. By the end of 1947, Failla’s committee recommended that the dose for external exposure be cut in half to 0.05 rem per day, with the maximum permissible dose for a week readjusted to 0.3 rem. Before the official adoption of this new standard, Taylor queried the nuclear industry as to whether the new standards would in any way impede its program. The answer given is most telling of the philosophy of the NCRP:


"Ultimately, the committee settled on a figure that the nascent nuclear industry would accept. 'We found out from the atomic energy industry that they didn’t care [if we lowered the limit to 0.3 rem per week],” explained Lauriston Taylor. “It wouldn’t interfere with their operations, so we lowered it'” [1].


Developing standards for isotopes undergoing radioactive decay inside the human body was an entirely different problem from merely revising the standards for external exposure, and it required much more time. Prior to the Manhattan Project, the possibility of internal contamination in humans was limited to a few small populations and a handful of radionuclides. Radium was used in medicine and industry. Uranium and radon were a hazard to miners. With the discovery of artificial radioactivity in 1934 and the development of the cyclotron, radionuclides that did not occur naturally on the earth began to be produced and used in biomedical research. The Berkeley cyclotron was the primary source of artificially produced radionuclides for civilian research prior to and during World War II. Once the Manhattan Project was well under way, radionuclides for research were also being produced secretly in the nuclear reactor at Oak Ridge, Tennessee, and purified there at Clinton Laboratories. To maintain the secrecy of their origin, these radionuclides were shipped first to Berkeley and from there distributed to labs throughout the country. In 1946, the newly established Atomic Energy Commission initiated a program promoting peaceful applications of the atom and openly offered the radionuclides produced at Oak Ridge to interested scientists. As intended, easy availability rapidly accelerated research. In the first year, 1,100 shipments of radionuclides went out from Oak Ridge to 160 research centers. Two years later, Abbott Laboratories also began distributing radioisotopes. The ensuing research delineated the physical characteristics of each radionuclide and the behavior of each when introduced into animal and human subjects. Medical researchers sought any clue in their studies that would indicate the possible usefulness of a radionuclide in tracer studies, diagnostics, or treatment. The sudden proliferation of novel radionuclides created an urgency for the establishment of safety standards for each internal contaminant. This became a major postwar focus in the advancement of radiation protection.


All the information furnished in this chapter up to this point has been background material necessary for understanding the work conducted by Subcommittee Two. This committee was charged with setting radiation protection standards for radioactive material deposited in the interior of the human body through inhalation, ingestion, absorption, or uptake via skin lesions and wounds. Subcommittee Two pursued its work with the utmost integrity and succeeded in creating a system, expedient at the time, for establishing safety standards for internal contamination. Only many years later was their work subverted and transformed into a system of lies to cover up the true hazards to life produced by the release of radioactivity into the environment.


Subcommittee Two was chaired by Karl Morgan, who later presided for fourteen years over the committee on internal emitters for the ICRP. Morgan worked as a health physicist at Oak Ridge during the Manhattan Project and was employed there for twenty-nine years after the war. He cofounded the Health Physics Society and served as its first president. He is frequently referred to as the “father of health physics.” In his later years, he became a controversial figure. He openly spoke out about the increased risks from unnecessary medical x-rays and advocated cutting the accepted standards for permissible radiation dosages by half. The nuclear establishment labeled him a “rogue physicist” and marginalized him. He is quoted as having said: “I feel like a father who is ashamed of his children.”


When Subcommittee Two first met in September 1947, the challenge facing its members was daunting. Hundreds of novel radionuclides, never before present on the face of the earth at all or in appreciable quantities, were being created en masse in the nuclear reactors that were producing fuel for atomic bombs. These same radionuclides were being created in the fireballs of atomic bomb detonations and scattered throughout the biosphere. Virtually nothing was known about their behavior once they gained access to the interior of the human body. Each possessed its own unique half-life. Each decayed in a unique manner. Each emitted different combinations of alpha, beta, and gamma radiation, and the energies transmitted by these radiations varied from one radioisotope to another. Each demonstrated a unique pattern of distribution throughout the body. Each showed a preference for an organ or tissue where it tended to accumulate. Each had its own rate of absorption, retention, and elimination. As a consequence of these factors and many others, each radionuclide presented its own unique toxicological and radiological hazard. What further complicated understanding was the problem of how to assess the combined hazard to a victim when more than one radioisotope was incorporated into the body at the same time. The major conundrum facing Subcommittee Two was how to proceed.


As a model for success in their endeavor, the committee had before them the example of radium. But therein lay the problem. The first standard for a permissible body burden of radium was not formulated with any scientific accuracy until well over forty years after that radionuclide’s initial discovery. This successful standard was based primarily on direct observation of internally contaminated individuals who later developed overt symptoms of disease or signs of injury. Once such a person was identified, the quantity of the radionuclide taken up within their body was established and then compared to that of other individuals who lived or worked in a similar situation but who had internalized less and remained unharmed. By this means, estimates could be derived as to what levels of internal contamination were presumably safe. As further data accumulated, these initial judgments could be adjusted as required. This same approach worked for establishing the first standards for uranium and radon inhalation in mines. There was also reliable information, again derived from direct experience, about radium-224, used for therapeutic purposes in Germany between 1944 and 1951, and about Thorotrast, a contrast medium containing thorium-232 that was administered to patients between 1930 and 1950 to produce better contrast in x-rays. In addition, there were the human radiation experiments involving plutonium.


The members of Subcommittee Two recognized that standards for all the new radionuclides created by nuclear fission could not possibly be derived by direct observation. Data on the physiological effects in humans of many of these radionuclides was completely lacking. Sufficient animal studies had not yet been performed. Comparison of effects to known radioisotopes was possible only in a limited number of cases. Years, if not decades, of research would be required to generate the vast amount of required information on the physical, chemical, and biological behavior of each radioisotope. Such a task would be monumental. Yet standards were needed quickly to offer guidelines for protection of workers in the nuclear industry. Some other approach was required for zeroing in on what constituted permissible levels for internal contaminants.


During the war, Karl Morgan and other physicists and medical personnel of the Manhattan Project had taken the first steps in developing a new methodology for calculating dosages for internal emitters. By war’s end, they had succeeded in calculating, for seventeen radioisotopes in various chemical forms, the dose of radiation that each would deliver to the tissues in which it was likely to be deposited once internalized. The methodology for these calculations was further developed after the war at three conferences on internal dosimetry held in 1949, 1950, and 1953. These meetings came to be known as the Tri-Partite Conferences, in reference to the attending representatives from the three countries that had worked closely together during the war in the study of radionuclides: Canada, the United Kingdom, and the United States. Many who attended these conferences were former participants in the Health Division of the Manhattan Project and later members of Subcommittee Two. This is both interesting and important. The foundation of today’s approach to internal contamination by radionuclides was forged by the subculture of physicists and medical personnel who built the first atomic bomb. Their mentality and orientation toward radiation safety evolved while they were immersed in fabricating weapons of mass destruction. While supporting the development of a weapon for the annihilation of masses of humanity, they simultaneously occupied themselves with developing safety standards to protect the world from the menace they were creating. In the postwar world, these same individuals entrusted themselves with becoming the guardians of all humanity in their prescription of what constituted a permissible dose of radiation. This is an excellent example of the genocidal mentality referred to elsewhere in this book. To a healthy mind, true radiation safety would entail refraining from building weapons of mass destruction altogether.


The scientists participating in the Tri-Partite Conferences built upon the existing methodology for calculating the dosages for internal emitters and carried it further. What they created was a “computational system” based on mathematical modeling. This computational approach allowed them to calculate dosages from internal emitters, and permissible levels of exposure, without having to rely on direct observation and experimentation. In ensuing years, as new experimental findings and data from direct observation became available, this information was fed into the system to further refine and improve it. The methodology relied upon today by the agencies setting standards for internal emitters uses this same computational approach, with updated modifications, to determine for the public what constitutes a permissible dosage of radiation emitted by radioactive atoms gaining entrance into the human body.


Many of the participants of the Tri-Partite Conferences later served on Subcommittee Two of the NCRP. These same people sat on a similar subcommittee studying internal emitters for the ICRP, which Lauriston Taylor was instrumental in resurrecting in 1953. This is how the computational approach took root in these two agencies. The results of the Tri-Partite Conferences were transplanted into the NCRP and then into the ICRP, and these organizations became a clearinghouse from which information about radiation safety was distributed throughout the world.


For the computational system to be effectively applied, a great deal of background data had to be assembled. First, the physical properties of each radionuclide had to be determined. The most important of these were the rate of decay, the type of radiation emitted (alpha or beta, plus the gamma ray that frequently accompanied each decay), and the energy this radiation would transfer to the organ of retention. As mentioned earlier, each type of radiation produced a different degree of biological effect, and this information was included in establishing the quantity of energy each decaying atom would transmit to its surroundings. Also necessary was knowledge of the behavior of each radionuclide once introduced into the body. Of particular importance was the retention kinetics of each: where did it go, how long did it stay, and over what period was it released? Numbers were also needed to represent the fraction of the radionuclide that passed from the gastrointestinal tract or lung into the blood, the fraction in the blood transferred to the critical organ, the fraction passing into the critical organ compared to the fraction remaining in the total body, and the fraction of the total intake that was actually retained in the critical organ. By knowing such patterns of distribution, calculations could be made to determine the dose delivered by each radionuclide to each organ or tissue, and from that its maximum permissible body burden.
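For readers who prefer to see this bookkeeping concretely, the chain of fractions just described can be sketched in a few lines of Python. Every value and name below is a hypothetical placeholder invented for illustration, not a figure from any NCRP or ICRP report:

# Chain of transfer fractions described above; all values are invented.
f_gi_to_blood = 0.3       # fraction passing from GI tract (or lung) into blood
f_blood_to_organ = 0.25   # fraction in blood transferred to the critical organ
f_retained = f_gi_to_blood * f_blood_to_organ   # net fraction reaching the organ

intake_bq = 1000.0                        # hypothetical intake, becquerels
organ_burden_bq = intake_bq * f_retained  # activity lodged in the critical organ
print(organ_burden_bq)                    # 75.0 Bq retained in the critical organ

With the retained fraction and the decay data in hand, the dose to the organ follows from simple arithmetic, as sketched further below.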


For the computational approach developed by the Tri-Partite Conferences to be applicable to all radioisotopes in all human beings, it was necessary to formulate a conceptual model of the human body that would be representative of all people. This model became known as “Reference Man” or, more commonly, “Standard Man.” This ideal human was “regarded as weighing 70 kg, being 170 cm high, between twenty and thirty years old, a Caucasian of Western European habit or custom and living in a climate with an average temperature of 10° to 20°” [2]. The inclusion of information on custom and climate was to set parameters for average water intake and typical diet. The tissues of the body of Standard Man were considered to have an average density equivalent to that of water. Basically, Standard Man was conceptualized as a 70 kg mass of water. An average mass for each organ in the body was derived mathematically and conceptualized as a smaller mass of water residing within the larger mass of water.
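Expressed as data, Standard Man amounts to little more than a table of reference constants. In the sketch below, the 70 kg body mass and 170 cm height come from the quotation above; the individual organ masses are merely illustrative assumptions, not official reference values:

# "Standard Man" as a table of reference constants (Python).
# Total mass and height are from the text; organ masses are illustrative only.
standard_man = {
    "total_body_kg": 70.0,    # conceptualized as a 70 kg mass of water
    "height_cm": 170.0,
    "organ_masses_kg": {      # hypothetical reference masses, not official values
        "thyroid": 0.02,
        "liver": 1.7,
        "kidneys": 0.3,
    },
}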


The successful application of the computational system for deriving safety standards hinged on knowledge of how much radiation each organ, or the body as a whole, could be exposed to without suffering any ill effect. With no prior knowledge of the behavior of the majority of radionuclides once inside the body, how was determination of a permissible dose possible? Members of Subcommittee Two were forced to rely on the vast body of knowledge of the body’s response to x-rays, i.e., EXTERNAL RADIATION, that had accumulated over previous decades. To quote Radioactivity and Health: A History:


"It should be noted that no cognizance is given in the system [computational system] to the nature of the biological effect being protected against. The limiting dose rate was determined by groups espousing basic radiation protection criteria. They arrived at their conclusions largely on the basis of work with external radiation sources [iemphasis added], except for the bone seekers. They applied their best judgment to the biological data and set exposure levels for the most sensitive functions [2].


Manhattan Project scientists used the phenomenon of electromagnetic energy interacting with matter to formulate a general model of what transpires when any type of radiation interacts with matter. So effective was their conceptualization in explaining the impact of x-rays and gamma rays on the body that they did not hesitate to apply the same model to the biological impact of alpha and beta particles, plus gamma rays, released in the interior of the body by decaying radionuclides. They carried this thinking into the Tri-Partite Conferences after the war and made it a cornerstone of the computational approach for determining dosages of radiation delivered by internal emitters. The validity of the entire model of radiation effects in man that they were constructing hinged on the validity of the foundational assumption that the biological effect of internal radioactive decay could be modeled on the biological effect of external irradiation.


After a half century of radiology, a substantial body of knowledge had accumulated about the effects on different organs, and on the body as a whole, of different quantities and intensities of x-rays delivered at different rates from outside the body. Based on this experience with external radiation sources, those working on the problem of internal emitters assigned a maximum permissible dose and dose rate to each organ of the body of Standard Man. The assumption was then made that each organ could safely absorb the same quantity of energy from decaying radioisotopes embedded within it as it could safely absorb from x-rays delivered from outside the body. To the thinking of the time, what mattered was the amount of energy delivered. For the computational system to work, what was required was knowledge of how much energy was being deposited per unit mass of the tissue under consideration. It was this point of view that allowed members of Subcommittee Two to base their work on internal emitters upon the previous research on external irradiation.


A simplified, hypothetical example will suffice to illustrate the type of calculations performed in the absence of direct observation and research on the behavior of each radionuclide once inside the body. Suppose the permissible dose from exposure to x-rays has been established for an organ. This quantity represents the amount of energy that can be transferred to the atomic structure of that organ without manifestation of any ill effect. That knowledge is then used as a baseline for calculating what quantity of a particular radionuclide could be taken up by the organ without producing any signs of injury. To simplify the kinetics involved, the assumption was made that the internal contaminants distributed the energy emitted by radioactive decay uniformly throughout the entire organ. In this way, an equivalency was visualized between external and internal radiation: each form was delivering the same quantity of energy to the same mass of tissue. Consequently, there was no reason not to apply what was known of external irradiation to the problem of internal radiation. Although in time a host of modifying factors was introduced to account for differences in the way the different types of radiation were delivered and in the type of biological effect each produced, these never displaced the fundamental assumptions: that the transfer of energy was the essential characteristic of the interaction of radiation with the human body, and that the energy delivered to an organ could be treated as if it were evenly distributed throughout the mass of that organ.
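The hypothetical example above can be put into numbers. The sketch below inverts the energy arithmetic: given a permissible dose rate borrowed from the external x-ray standard, it solves for the activity of an internal emitter that would deliver the same energy per unit mass of the organ. The isotope, organ mass, and decay energy are invented for illustration, and modern units (becquerels, rad per gray) are used:

# Invert the energy-equivalence assumption to find a permissible body burden.
# All physical inputs below are hypothetical placeholders.
MEV_TO_J = 1.602e-13            # joules per MeV
RAD_PER_GY = 100.0              # 1 gray = 100 rad

organ_mass_kg = 1.7             # hypothetical critical organ (e.g., liver)
energy_per_decay_mev = 5.0      # hypothetical alpha emitter, MeV absorbed per decay
permissible_rad_per_week = 0.3  # borrowed from the external x-ray standard

seconds_per_week = 7 * 24 * 3600
permissible_rad_per_s = permissible_rad_per_week / seconds_per_week
# One decay deposits E joules; dose per decay = E / mass (Gy), times 100 for rad.
rad_per_decay = energy_per_decay_mev * MEV_TO_J / organ_mass_kg * RAD_PER_GY
q_bq = permissible_rad_per_s / rad_per_decay   # decays/s that just meet the limit
print(f"permissible body burden ~ {q_bq:.0f} Bq")   # ~10500 Bq in this sketch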


To return to the work of Subcommittee Two, once permissible dosages were calculated for each radionuclide, secondary standards were mathematically derived for the maximum permissible concentration of each radionuclide in air and water. The need for these safety standards was based on the idea that the only way to prevent a person from accumulating a hazardous dosage of internal emitters was to control the environment in which that person worked or dwelt, limiting the concentration of the radionuclide(s) in the air being breathed and in the food and water being ingested. A person dwelling in an environment where air and water did not exceed the maximum permissible concentrations would not accumulate levels of a radioisotope that would deliver a dose of radiation greater than the permissible dose. A working lifetime was considered to be 50 years. Intake of each radionuclide was presumed to occur either throughout a 40-hour work week or continuously over all 168 hours of the week. Limits were then established for the maximum permissible concentration of each radionuclide in water and air so that a worker exposed at these levels would never accumulate the maximum permissible dose to an organ over his working lifetime, or accumulate it at a rate that presumably would be hazardous.
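The derivation of such a secondary standard can be sketched under the same caveat. Given a permissible organ burden, an assumed breathing rate, and an effective half-life combining radioactive decay with biological elimination, one can solve for the air concentration at which the equilibrium organ burden just reaches the permissible burden. Every parameter below is invented for illustration:

# Derive a maximum permissible concentration (MPC) in air; values are invented.
import math

q_bq = 1.0e4                 # permissible organ burden (order of the previous sketch)
f_air_to_organ = 0.075       # fraction of inhaled activity retained in the organ
breathing_m3_per_hr = 1.25   # hypothetical breathing rate
hours_per_week = 40.0        # occupational case; use 168.0 for continuous exposure

half_life_eff_days = 30.0    # effective half-life: decay plus biological elimination
lam_per_day = math.log(2) / half_life_eff_days

# Mean deposition rate in the organ at air concentration c (Bq/m^3) is
#   f_air_to_organ * c * breathing_m3_per_hr * hours_per_week / 7   (Bq/day).
# At equilibrium the burden equals deposition rate / lambda; setting that
# equal to q_bq and solving for c gives the MPC:
mpc_bq_per_m3 = q_bq * lam_per_day / (
    f_air_to_organ * breathing_m3_per_hr * hours_per_week / 7.0)
print(f"MPC(air) ~ {mpc_bq_per_m3:.0f} Bq per cubic meter")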


In a nutshell, this is the computational method developed at the Tri-Partite Conferences and used by Subcommittee Two in establishing permissible limits for internal emitters. Although it has undergone extensive revision over the years as new information became available, this mathematical approach to calculating permissible dosages still forms the backbone of radiation safety today. It is Health Physics 101. It is unquestioned orthodoxy regarding the proper way of calculating the radiation transmitted to biological structures from internalized radioactivity.


For the non-specialist struggling to make sense of the technical material just presented, a single image is all that is required to follow the essence of the discussion. Visualize a person inhaling some quantity of a radioisotope. Microscopic particles of that radioisotope pass into his bloodstream and, by metabolic processes within the body, are transferred to the critical organ, where they become lodged for a period of time within the cells of that organ. While retained there, some of the atoms undergo radioactive decay and radiate alpha or beta particles, depending on the isotope, and usually an accompanying gamma ray, which can be visualized as a photon, a massless packet of energy. The energy transmitted by the nuclear particles and the photon of each radioisotope is a known physical quantity, as is the rate of decay of each radionuclide. Standard Man provides a reference for the mass of each organ. As the energy of radioactive decay is emitted, that energy is transferred to the electrons of the atoms making up the cells of the organ of deposition. If an estimate can be made of the amount of the radioisotope initially inhaled, the computational method can be used to calculate the amount of energy transmitted to the molecular structures making up the organ. The assumption is made that this energy is uniformly distributed throughout the mass of the organ, and by this means the organ dose can be determined.
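The entire image can be condensed into one small function: from an inhaled quantity, through the retained fraction, to the energy deposited per unit mass of the organ. As before, this is only an illustrative sketch with invented numbers, and it simplifies by assuming the retained activity stays constant over the residence time rather than decaying away:

# End-to-end sketch of the "single image" above; all inputs are hypothetical.
def organ_dose_rad(inhaled_bq, f_retained, energy_mev, organ_mass_kg, residence_s):
    """Organ dose in rad, assuming the retained activity stays constant."""
    activity_bq = inhaled_bq * f_retained            # decays per second in the organ
    joules = activity_bq * residence_s * energy_mev * 1.602e-13
    return joules / organ_mass_kg * 100.0            # gray -> rad

# Hypothetical inhalation: 5000 Bq of a 5 MeV alpha emitter, one month residence.
print(organ_dose_rad(inhaled_bq=5000.0, f_retained=0.075,
                     energy_mev=5.0, organ_mass_kg=1.7,
                     residence_s=30 * 24 * 3600))    # ~0.05 rad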


The original intention of Subcommittee Two, formulated in 1947, was to recommend maximum permissible concentrations in air, water, and the human body for twenty biologically significant radioisotopes. By the time its final report was published in 1953, and a similar report was published by the ICRP in 1955, values had been calculated for 96 radioisotopes. Work continued throughout the decade, and both committees published comprehensive reports in 1959 which included information on approximately 215 radionuclides and 255 values for maximum permissible concentrations.


The work of Subcommittee Two was a milestone in human understanding. It provided a relatively simple methodology for quantifying dosages of radiation delivered to the interior of the body by radioisotopes. Further, it established urgently needed standards for what might constitute nonhazardous levels of a variety of radioisotopes. The new guidelines provided the framework for all future animal and human studies into the toxicology of radioactive materials. Subsequent study began to demarcate what dosages of each radioisotope were necessary to produce detectable alterations at every level of biological organization, from the molecular to the cellular to the histological to the systemic. With protection standards in place, researchers could work in apparent safety in the development of such disciplines as nuclear medicine, radiation therapy, and radiobiology. Then as now, a fundamental priority remained: to validate the accuracy of the computational system and determine whether it successfully modeled the actual biological impact of internalized radioactivity.


Before concluding this brief history of the development of radiation protection standards for internal emitters, one final point needs emphasis. Every living creature on the earth requires protection from mankind’s experimentation with radiation. Without debate, this responsibility was assumed by the NCRP and the ICRP. These institutions were never truly separate or independent, and their memberships heavily overlapped. Lauriston Taylor was deeply involved in the establishment of both organizations. Gioacchino Failla and Karl Morgan chaired the subcommittees on external and internal radiation for both the NCRP and the ICRP. Other US representatives to the ICRP were also members of the NCRP. As a result of this cross-pollination, no opportunity ever existed for an alternative point of view to evolve regarding what constituted radiation safety and what was judged to be permissible exposure.


"The Chair of the NCRP, Lauriston Taylor, was instrumental in setting up an international version of the NCRP, perhaps to divert attention from the clear evidence that the NCRP was associated with the development of nuclear technology in the USA and also perhaps to suggest that there was some independent international agreement over the risk factors for radiation" [3].


"Taylor was a member of the ICRP committee and the NCRP Chairman at the same time. The NCRP committees One and Two were duplicated on the ICRP with the identical chairmen, Failla and Morgan. The interpenetration of personnel between these two bodies was a precedent to a similar movement of personnel between the risk agencies of the present day. The present Chair of the ICRP is also the Director of the UK National Radiological Protection Board (NRPB). The two organizations have other personnel in common and there are also overlaps between them and UNSCEAR [United Nations Scientific Committee on the Effects of Atomic Radiation] and the BEIR VII committee [Biological Effects of Ionizing Radiation Committee, originally funded by the Rockefeller Foundation in 1955, and now organized under the auspices of the National Research Council of the National Academy of Sciences.] This has not prevented the NRPB from telling the UK’s regulator, the Environment Agency, that UNSCEAR and ICRP are ‘constituted entirely separately’, a statement which the Environment Agency accepted. Thus credibility for statements on risk is spuriously acquired by organizations citing other organizations, but it can be seen as a consequence of the fact that they all have their origins in the same development and the same model: the NCRP/ICRP postwar process. This black box has never been properly opened and examined" [3].


The NCRP/ICRP black box is impenetrable. The public has no access into the hearts of those who have served on these committees, the discussions that have gone on behind closed doors, or the compromises that may have been made in radiation safety for the benefit of government nuclear programs and the nuclear industry. However, the international radiation protection agencies have left within the public domain a penetrable artifact of their true intentions and their true allegiances: their system of evaluating the risks of radiation exposure and their standards of what constitutes a “permissible” dose of radiation. As this book loudly proclaims, you will know them by the fruits of their deeds. The reach of the Cult of Nuclearists, and the services performed on their behalf by the radiation protection community, are unmistakably written into the system currently relied upon to evaluate the hazards of internal contamination. Through a study of this system, glaring flaws become evident, flaws intentionally left uncorrected to serve the political agenda of covering up the true impact on health of radiation released into the environment.


Bibliography


[1] Caufield C. Multiple Exposures: Chronicles of the Radiation Age. Toronto: Stoddart; 1988.

[2] Stannard J.N. Radioactivity and Health: A History. Springfield, VA: National Technical Information Service, Office of Scientific and Technical Information; 1988.

[3] European Committee on Radiation Risk (ECRR). Recommendations of the European Committee on Radiation Risk: the Health Effects of Ionising Radiation Exposure at Low Doses for Radiation Protection Purposes. Regulators' Edition. Brussels; 2003. www.euradcom.org.