“Outside-the-Box” Technologies, Their Critical Role Concerning Environmental Trends, and the Unnecessary Energy Crisis

 

 

A Compilation of

 

Briefing Papers Prepared For:

 

The U.S. Senate Environment and Public Works Committee

 

 

 

 

Background:

 

The briefing was requested by Senator Smith (R-NH and Chair of the EPW) and Mr. David Conover (Chief of Staff-EPW) because of the need to look at energy and technology issues over time scales of 5-20 years.  The briefing was organized by Dr. Theodore Loder and was held on Oct. 18, 2000 in the Senate Dirksen Building, Washington, DC.

 

 

 

 

Further information may be obtained from:

 

Dr. Theodore C. Loder III

Institute for the Study of Earth, Oceans, and Space

University of New Hampshire

Durham,  NH  03824

ted.loder@unh.edu

603-862-3151


Background to the Briefing

 

The Issues:

 

Our present methods for solving current environmental problems are only partially working, because they attempt to treat the results of a problem rather than its root causes.  Most of our problems stem from energy issues and our tremendous dependence upon fossil fuels, especially in the transportation and power generation sectors.  In addition, increasing populations worldwide and the desire of second and third world countries to obtain what we in the US take for granted spell increasing worldwide environmental problems coupled with significantly higher oil and gas prices.  In summary, the risks associated with our present course are ever-increasing environmental degradation coupled with a significant, long-lasting economic downturn, recession or depression.

 

As a world community, we must realize that we will need the last remaining decades of fossil fuels to create and integrate new energy sources without losing the momentum of our developing world society.  Within 10-20 years we must be at a point in our global development where we are no longer dependent on fossil fuels for our energy generation, and we must arrive there by a route that does not create global environmental and economic chaos.

 

 

The purpose of this briefing was to show that:

 

1. We have growing environmental problems that will have major economic impacts.

2. There are technologies, presently being repressed, that are real and could replace present fossil fuel usage, given the appropriate investment in the research necessary to bring them on line.

3. There are scientists ready to testify at a Senate hearing on the realities of these issues.

4. The need to move ahead is very urgent because the time necessary to implement the use of these technologies may take the better part of this decade and neither the environment nor the economics of fossil fuels can wait any longer.

 

The goal is not to push any specific type of technology that will “save the world”, but to convince those attending that there is a whole set of new technologies that are waiting in the wings which will change the way we live on this planet for the better.

 

 

The Briefing presenters and topics covered included the following:

 

Dr. Theodore Loder, Convener and overview of the issues and urgency

Dr. Steven Greer, Implications of the implementation of non-polluting free-energy devices

Mr. Thomas Valone, Present energy issues, energy devices and patent office issues

Dr. Paul LaViolette, Physics reassessment and anti-gravity research

Dr. Scott Chubb, Cold fusion, scientific responsibility

Dr. Eugene Mallove, Cold fusion, scientific response and patent office issues

Dr. Thomas Bearden, Physics reassessment, the world energy crisis, and “free energy device” technology


Table of Contents

 

The Briefing Papers:

 

“Comparative Risk Issues” Regarding Present and Future Environmental Trends – Why We Need to be Looking Ahead Now!  by Dr. T. Loder *

 

New Energy Solutions and Implications for the National Security and the Environment: A Brief Overview for the US Senate  by Dr. S. Greer

 

The Right Time to Develop Future Energy Technologies  by Dr. T. Valone

 

Future Energy Technologies  by Dr. T. Valone

 

Moving Beyond the First Law and Advanced Field Propulsion Technologies  by Dr. P. LaViolette

 

Accountability and Risk in the Information Era: Lessons Drawn from the “Cold Fusion” Furor  by Dr. S. Chubb

 

The Strange Birth of the Water Fuel Age: The Cold Fusion “Miracle” Was No Mistake  by Dr. E. Mallove

 

The Unnecessary Energy Crisis: How to Solve It Quickly  by Dr. T. Bearden, LTC, U.S. Army (Retired)

 

 

* A short biography for each author follows each paper.


“Comparative Risk Issues” Regarding Present and Future Environmental Trends –

 

Why We Need to be Looking Ahead Now!

 

Prepared for:  Senator Bob Smith and Aby Mohseni, Senate Committee on the Environment and Public Works,  revised 10/6/00

 

Prepared by:  Dr. Theodore Loder, Institute for the Study of Earth, Oceans, and Space,  UNH, Durham, NH  03824 ted.loder@unh.edu  603-862-3151

 

Introduction:

Fundamentally, our present methods for solving current environmental problems are only partially working, because for the most part they attempt to treat the results of a problem rather than its root causes.  It is somewhat akin to mopping the floor to fix a leaky roof.  Most of our problems stem from energy issues and our tremendous dependence upon fossil fuels, especially in the transportation and power generation sectors.  For example, the acid rain problem, unhealthy urban atmospheres, and global warming all arise from this fossil fuel dependence.  The present MTBE crisis affecting our water supplies is the result of a well-intentioned attempt to reduce air pollution from gasoline engines.  Each of these issues will continue to have a greater and greater economic impact on our country through increased cleanup and health costs.

 

Why our present course is inadequate – an example from the automotive sector

A simple analysis of numbers from the automotive sector tells us why we will continue to have problems (both in the US and worldwide) and why small percentage increases in fuel efficiency will have little real effect in the long run.  Increasing populations worldwide and the desire of second and third world countries to have what we in the US take for granted spell continuously increasing environmental problems.  For example, by the late 1990s there were about 500 million cars worldwide, with an annual production of a little less than 40 million.  At the present rate of growth, there will be about 1 billion vehicles worldwide by the year 2025.  Presently there is about one car per 12 people on a global basis, and about one car per 1.3 people in the US.  Why is this a long-range problem?

 

As a result of increased global wealth and the desire for automobiles worldwide, no matter what we do to improve efficiency, increases in carbon dioxide from this source will continue, with their attendant global warming (1).  Hybrid automobiles could help, but we must look at a second set of numbers from the US to understand the impacts.  There are over 200 million automobiles in the US, and we manufacture approximately 20 million per year.  Because of this "replacement lag," it would take 10-15 years to replace existing cars, especially since some production goes towards increasing the pool.  Furthermore, there is a phase-in period for any new technology - the time needed to go from development to manufacturing to sales - which will add years to the replacement cycle.  Thus, even if we start today, it would take the US roughly 15 years to replace our present fleet with a totally non-polluting transportation technology.  It could occur faster in third world countries because of the technology leapfrog phenomenon.
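
The replacement-lag arithmetic can be made explicit. The short Python sketch below uses only the fleet and production figures quoted above; the fraction of production assumed to grow the pool rather than replace old cars is an illustrative assumption.

    # Fleet replacement lag: roughly 200 million US cars, roughly 20 million
    # produced per year (figures from the text).  The share of production that
    # grows the fleet rather than replacing old cars is an assumed parameter.

    def years_to_replace(fleet, annual_production, growth_share):
        """Years to replace every existing car when only part of production replaces."""
        return fleet / (annual_production * (1.0 - growth_share))

    for growth_share in (0.0, 0.25, 0.33):
        years = years_to_replace(200e6, 20e6, growth_share)
        print(f"{growth_share:.0%} of production grows the fleet -> "
              f"about {years:.0f} years to turn over")
    # prints roughly 10, 13, and 15 years, consistent with the 10-15 year lag cited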

 

We have similar problems with power generation in the US.  We have dammed most of the easily dammable rivers, and there is even a movement to remove some of the dams.  Furthermore, it is presently nearly impossible to build more nuclear power plants, and we are starting to shut some of them down.  Changing any of this infrastructure could take one to two decades as well.

 

In a world where our petroleum supplies will become scarcer and more expensive within a few decades or less, we need to start our planning and acting now.

 

Where we are heading and the risks of our present course.

Under our present direction, we are increasing fossil fuel consumption and the commensurate carbon dioxide release at an ever-increasing rate.  The risks associated with our present course are both environmental and economic.  There will be seriously increased degradation of our environment, including increased loss of plant and animal species, increased loss of habitats such as rainforests and coral reefs, increased human suffering through disease and lowered quality of life, and increased global warming (1) causing major problems through changing climate patterns and sea level rise, with a commensurate loss of high-value coastal real estate.  The trends for all these changes can be observed today, and all have varying degrees of economic impact.  However, a more direct economic impact, which will be felt by everyone, is the ultimate decline of “cheap oil.”

 

Gregg Easterbrook, in a recent article (2), discusses the world’s estimated oil reserves.  Based on industry estimates, he suggests that there are “proven reserves” of 1,000 billion barrels of oil, which represents only a 25-year supply at our present rate of consumption with its 2% annual increase.  He states, “Whatever number is correct, the world has decades of oil ahead.  What it may not have is decades of cheap oil.  Once the production peak comes and reserve levels begin to dwindle, the supply/demand equation may shift quickly toward higher prices.  The debate, then, centers on how soon the peak will be reached.”  Estimates are that the peak will be reached by 2010.  At present, the global oil trade depends on OPEC for about 42% of its consumption, a share that could hit 50% by 2009.  If OPEC’s reserves turn out to be inflated, as some in the industry believe, then the world oil production peak may occur much sooner, with a subsequent sharp hike in prices.  This is just barely within our time framework for introducing new technologies if we start now.
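
The “25-year supply” can be checked with a simple drawdown calculation. The Python sketch below assumes a starting world consumption of roughly 28 billion barrels per year, a figure not given in the article, together with the 2% annual increase cited above.

    # Drawdown of 1,000 billion barrels of proven reserves at a consumption
    # rate that rises 2% per year.  The starting consumption (~28 billion
    # barrels/year, approximately world usage circa 2000) is an assumption.

    def years_until_exhausted(reserves, annual_consumption, growth=0.02):
        years = 0
        while reserves > 0:
            reserves -= annual_consumption
            annual_consumption *= 1 + growth
            years += 1
        return years

    print(years_until_exhausted(1000e9, 28e9))  # about 28 years, near the quoted 25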

 

Finally, Easterbrook states, “…America has two basic choices: Begin investing in new energy forms, staying a step ahead of OPEC and smoothing the likely transition, or wait till the next crunch hits and accept another oil-induced recession.”

 

It should be obvious that an essentially permanent hike in oil prices will have a major economic impact on our country, a country where 98% of food production is based on fossil fuels and the average food item travels 1700 miles to the consumer.  The slight rise in fuel costs last winter, the problems truckers had with fuel costs, and the problems homeowners had with heating oil costs are just a glimpse of the issues that could lead to a major economic downturn.  The “gas crisis” in Europe this summer is also an indicator that these problems are not limited to the US.

 

One can describe our present situation as if the environment and the world’s population were in a barrel on the river heading towards Niagara Falls.  We are starting to hear the roar, but have no idea when we will get to the edge.  With some major rescue efforts we can be saved, but there will be a point of no return and no one can tell us when that will be.

 

In summary, the risks associated with our present course are ever-increased environmental degradation coupled with a significant long lasting economic downturn, recession or depression.

 

As the old Chinese proverb states, “If we do not change direction, we will likely end up where we are heading.”  A simple look at the numbers tells us that we must change direction dramatically, with vision and conviction.

 

As a world community, we must realize that we will need the last remaining decades of fossil fuels to create and integrate new energy sources without losing the momentum of our developing world society.  Because the US is a major per-capita user of energy and affects environmental issues by both example and law, we must lead on these issues.

 

Where do we want to be 20-30 years from now, as a country and as a world?

We want to be at a point in our global development where we are no longer dependent on fossil fuels for our energy generation, and we want to arrive there by a route that does not create global environmental and economic chaos.

 

How do we get there from here?

Because of the long development, manufacturing, and replacement times needed to replace our present infrastructure, we need to start now.  A leading energy intelligence analyst, retired Army Lt. Col. Tom Bearden, wrote to me stating that there will be a “point of no return” by about 2003-2005, followed by world economic collapse roughly five years later, when escalating oil prices have gone through the roof.  He is suggesting that we must have replacement technologies on line on a very short time scale.

 

Proposed Step One.  Hold a Senate hearing to get the ball rolling.  This will show that there is a major problem looming on the near horizon, and the witnesses will testify that there is presently a set of technologies that can help resolve it on a relatively short time scale.

 

Proposed Step Two.  Once the hearing is held, we move to an action step.  As stated by Lt. Col. Bearden on this subject: In short, the energy crisis is solvable, permanently, in a rather straightforward fashion.  We need a fine scientific team and a set of laboratories working on it in a Manhattan-style project, and in three years the systems will be ready to roll off the mass assembly lines.  This may need a Presidential Decision Directive and a National Emergency so the project can utilize whatever is available for quick development.  He may or may not be overly optimistic at this point.

 

What if we do not act now?

Again, Lt. Col. Bearden’s comments:  “Make no mistake. This is the most deadly and certain strategic threat to the U.S. and the rest of the world, in all my experience.  If we do not solve this energy problem, and deploy it very, very quickly with a massive effort, then we will overrun the 2003 ‘point of no return’ and, just as an airplane does when it overruns the point of no return on the runway, this nation will be heading for a total crash, as surely as the sun will rise tomorrow.  Yet everywhere one looks, one sees ‘business as usual,’ ‘trust us, we know best’…”

 

Footnotes

 

1. A Rocky Mountain Institute report, published on their website at http://www.rmi.org/sitepages/pid124.asp, states: 

 

Depending on which study you read, 1999 was either the fifth or the sixth warmest on record globally (1998 was the all-time warmest). Seven of the ten warmest years since record-keeping began were in the 1990s, and analysis of tree rings, ice cores, and so on suggests that the decade was the warmest of the millennium.  A January 2000 National Academy of Sciences study concluded that "the warming trend in global-surface temperature observations during the past 20 years is undoubtedly real and is substantially greater than the average rate of warming during the 20th century."

 

2. Easterbrook, Gregg. “Hooray for Expensive Oil! Opportunity Cost.” The New Republic (May 15, 2000), pp. 21-25.

 

 

 

The Author

Dr. Theodore Loder is a Professor of Earth Sciences and a member of the Institute for the Study of Earth, Oceans, and Space at the University of New Hampshire, where he has taught since 1972.  His research is in the area of oceanography and environmental change, dealing with estuarine and coastal issues, and he has worked in New England, England, Australia, Jamaica, Norway, Sweden, and off the coast of South America.   He has published over 40 scientific papers and reports in these areas.  His recent research involves the application of new technologies to solve environmental problems and the future economic problems arising from our overuse of and overdependence on fossil fuel technologies.

 


New Energy Solutions And Implications For The National Security

And The Environment: A Brief Overview for the US Senate

Steven M. Greer MD

 

The ultimate national security issue is intimately linked to the pressing environmental crisis facing the world today: The question of whether humanity can continue as a technologically advanced civilization.

Fossil fuels and the internal combustion engine are non-sustainable both environmentally and economically - and a replacement for both already exists.  The question is not whether we will transition to a new post-fossil fuel economy, but when and how. The environmental, economic, geopolitical, national security and military issues related to this matter are profound and inextricably linked to one another.

The disclosure of such new energy technologies will have far-reaching implications for every aspect of human society and the time has come to prepare for such an event. For if such technologies were announced today, it would take at least 10-20 years for their widespread application to be effected.  This is approximately how much time we have before global economic chaos begins due to demand far exceeding the supply of oil and environmental decay becomes exponential and catastrophic.

We have found that the technologies to replace fossil fuel usage already exist and need to be exploited and applied immediately to avert a serious global economic, geopolitical and environmental crisis in the not-so-distant future.

In summary, these technologies fall into the following broad categories:

·        Quantum vacuum/ zero point field energy access systems and related advances in electromagnetic theory and applications

·        Electrogravitic and magnetogravitic energy and propulsion

·        Room temperature nuclear effects

·        Electrochemical and related advances to internal combustion systems which achieve near zero emissions and very high efficiency

A number of practical applications using such technologies have been developed over the past several decades, but such breakthroughs have been either ignored due to their unconventional nature - or have been classified and suppressed due to national security, military interests and ‘special’ interests.

Let us be clear: the question is not whether such systems exist and can be viable replacements for fossil fuels. The question is whether we have the courage to allow such a transformation in world society to occur.

Such technologies - especially those which bypass the need to use an external fuel source such as oil or coal - would have obvious and beneficial effects for humanity.  Since these technologies do not require an expensive source of fuel but instead use existing quantum space energy, a revolution in the world’s economic and social order would result.  These implications include:

·        The removal of all sources of air pollution related to energy generation, including electric power plants, cars, trucks, aircraft and manufacturing;

·        The ability to ‘scrub’ to near zero effluent all manufacturing processes since the energy per se required for same would have no cost related to fuel consumption. This would allow the full application of technologies which remove effluent from smokestacks, solid waste and waterways since current applications are generally restricted by their energy costs and the fact that such energy consumption - being fossil fuel based - soon reaches the point of diminishing returns environmentally.

·        The practical achievement of an environmentally near-zero impact yet high tech civilization on earth, thus assuring the long-term sustainability of human civilization.

·        Trillions of dollars now spent on electric power generation, gas, oil, coal and nuclear power would be freed to be spent on more productive and environmentally neutral endeavors by both individuals and society as a whole.

·        Underdeveloped regions of the earth would be lifted out of poverty and into a high technology world in about a generation - but without the associated infrastructure costs and environmental impact related to traditional energy generation and propulsion. Since these new systems generate energy from the ambient quantum energy state, trillion dollar infrastructure investments in centralized power generation and distribution would be eliminated.  Remote villages and towns would have the ability to generate energy for manufacturing, electrification, water purification, etc. without purchasing fuels or building massive transmission lines and central power grids.

·        Near total recycling of resources and materials would be possible since the energy costs for doing so - now the main obstacle - would be brought down to a trivial level.

·        The vast disparity between rich and poor nations would quickly disappear - and with it much of the zero-sum-game mentality which is at the root of so much social, political and international unrest.  In a world of abundant and inexpensive energy, many of the pressures which have led to a cycle of poverty, exploitation, resentment and violence would be removed from the social dynamic. While ideological, cultural and religious differences would persist, the raw economic disparity and struggle would be removed from the equation fairly quickly. 

·        Surface roads - and therefore most road building - will be unnecessary as Electrogravitic/antigravity energy and propulsion systems replace current surface transportation systems.

·        The world economy would expand dramatically and those advanced economies such as in the US and Europe would benefit tremendously as global trade, development and high technology energy and propulsion devices are demanded around the world. Such a global energy revolution would create an expanding world economy which would make the current computer and Internet economy look like a rounding error. This really would be the tide which would lift all ships.

·        Long term, society would evolve to a psychology of abundance, which would redound to the benefit of humanity as a whole, a peaceful civilization and a society focused increasingly on creative pursuits rather than destructive and violent endeavors. 

Lest all of this sound like a pipe-dream, keep in mind that such technological advances are not only possible, but they already exist.  What is lacking is the collective will, creativity and courage to see that they are applied wisely. And therein lies the problem.

As an emergency and trauma doctor, I know that everything can be used for good or for ill. A knife can butter your bread - or cut your throat. Every technology can have beneficial as well as harmful applications.

The latter partially explains the serious national security and military concerns with such technologies. For many decades, these advances in energy and propulsion technologies have been acquired, suppressed and classified by certain interests who have viewed them as a threat to our security from both an economic and military perspective.  In the short term, these concerns have been well-founded: Why rock the global economic boat by allowing technologies out which would, effectively, terminate the multi-trillion dollar oil, gas, coal, internal combustion engine and related transportation sectors of the economy? And which could also unleash such technologies on an unstable and dangerous world where the weapons applications for such technological breakthroughs would be a certainty?  In the light of this, the status quo looks good.

But only for the short term.  In fact, such national security and military policies - fed by huge special interests in obvious industries and nations - have exacerbated global geopolitical tensions by impoverishing much of the world, worsening the zero-sum-game mindset of rich vs. poor nations, and bringing us to a world energy emergency and a pending environmental crisis.  And now we have very little time to fix the situation. Such thinking must be relegated to the past.

For what can be a greater threat to the national security than the specter of a collapse of our entire civilization from a lack of energy and global chaos as every nation fights for its share of a limited resource? Due to the long lead time needed to transform the current industrial infrastructure away from fossil fuels, we are facing a national security emergency which almost nobody is talking about. This is dangerous.

It has also created a serious constitutional crisis in the US and other countries where non-representative entities and super-secret projects within compartmented military and corporate areas have begun to set national and international policy on this and related matters - all outside the arena of public debate, and mostly without informed consent from Congress or the President.

Indeed this crisis is undermining democracy in the US and elsewhere. I have had the unenviable task of personally briefing senior political, military, and intelligence officials in the US and Europe on this and related matters.  These officials have been denied access to information compartmented within certain projects, which are, frankly, unacknowledged areas (so-called ‘black’ projects).  Such officials include members of the House and Senate, President Clinton’s first Director of Central Intelligence, the head of the DIA, senior Joint Staff officials and others.   Usually, the officials have little to no information on such projects and technologies - and are told either nothing or that they do not have a ‘need to know’ if they specifically inquire.

This presents then another problem: these technologies will not be suppressed forever. For example, our group is planning a near term disclosure of such technologies and we will not be silenced.  At the time of such a disclosure, will the US government be prepared?  It would behoove the US government and others to be informed and have a plan for transitioning our society from fossil fuels to these new energy and propulsion systems.

Indeed, the great danger is ignorance by our leaders of these scientific breakthroughs - and ignorance of how to manage their disclosure.  The advanced countries of the world must be prepared to put systems in place to assure the exclusive peaceful use of such energy and propulsion advances.  Economic and industrial interests should be prepared so that those aspects of our economy which will be adversely affected (commodities, oil, gas, coal, public utilities, engine manufacturing, etc) can be cushioned from sudden reversals and be economically ‘hedged’ by investing in and supporting the new energy infrastructure. 

A creative view of the future - not fear and suppression of such technologies - is required. And it is needed immediately. If we wait 10-20 more years, it will be too late to make the needed changes before world oil shortages, exorbitant costs and geopolitical competition for resources causes a melt-down in the world’s economy and political structures.

All systems tend towards homeostasis. The status quo is comfortable and secure. Change is frightening. But in this case, the most dangerous course for the national security is inaction. We must be prepared for the coming convulsions related to energy shortages, spiraling costs and economic disruption. The best preparation would be a replacement for oil and related fossil fuels. And we have it. But disclosing these new energy systems carries its own set of benefits, risks and challenges. The US government and the Congress must be prepared to wisely manage this great challenge.

Recommendations for Congress:

·        Thoroughly investigate these new technologies both from current civilian sources as well as compartmented projects within military, intelligence and corporate contracting areas;

·        Authorize the declassification and release of information held within compartmented projects related to this subject;

·        Specifically prohibit the seizing or suppression of such technologies

·        Authorize substantial funding for basic research and development by civilian scientists and technologists into these areas;

·        Develop plans for dealing with disclosing such technologies and for the transition to a non-fossil fuel economy. These plans should include:  military and national security planning; strategic economic planning and preparation; private sector support and cooperation; geopolitical planning, especially as it pertains to OPEC countries and regions whose economies are very dependent on oil exports and the price of oil; international cooperation and security; among others.

I personally stand ready to assist the Congress in any way possible to facilitate our use of these new energy sources.  Having dealt with this and related sensitive matters for over 10 years, I can recommend a number of individuals who can be subpoenaed to provide testimony on such technologies, as well as people who have information on unacknowledged special access projects within covert government operations which are already dealing with these issues.

If we face these challenges with courage and with wisdom together, we can secure for our children a new and sustainable world, free of poverty and environmental destruction. We will be up to this challenge, because we must be.

October 16, 2000

Steven M. Greer MD                                                    

President and CEO Quantum Energy

Albemarle County, Virginia                                           

 

The Author

Dr. Steven Greer is an emergency physician and former chairman of the Department of Emergency Medicine at Caldwell Memorial Hospital. He is a lifetime member of Alpha Omega Alpha, the nation's most prestigious medical honor society.  Inspired, in part, by his uncle who helped design the original lunar module, Dr. Greer has spent years researching exotic energy and propulsion systems.  He has been examining what systems have been developed and how the implementation of those systems would affect the environment and society as a whole.  He has met with and provided briefings for senior members of government, military and intelligence operations in the United States and around the world, including senior CIA officials, Joint Chiefs of Staff, White House staff, senior members of Congress and congressional committees, senior United Nations leadership and diplomats, senior military officials in the United Kingdom and Europe and cabinet-level staff members of the Japanese government, among others.  Dr. Greer has addressed tens of thousands of people live at conferences and lectures around the world including the international convention for MENSA, The Institute of Noetic Sciences Board of Directors, and the Sierra Club.


The Right Time to Develop Future Energy Technologies

 

Prepared for: Senator Bob Smith, Senate Committee on the Environment and Public Works

 

Prepared by: Thomas Valone, MA, PE, Integrity Research Institute, 1220 L St. NW #100-232, Washington, DC 20005 http://www.integrity-research.org  iri@erols.com

202-452-7674, 800-295-7674

 

Introduction to Compelling Evidence about the Coming Climate Change

 

In 1900, Nikola Tesla, the father of AC electricity, warned against using fuel for energy.[1] Current man-made Greenhouse Forcing of the atmosphere has been measured to be 2.4 – 4.3 W/m2 by the Global Warming International Center (GWIC). “A change of 7.5 to 10 W/m2 will completely alter seasonal characteristics, e.g. from winter to spring. Thus, 2.4 – 4.3 W/m2 of Greenhouse Forcing is quite a significant alteration of energy balance.” This is a measure of the watts (energy) per meter squared (area) that is being radiated into the atmosphere from our excessive carbon–based emissions. Note carefully that in 1997, the Institute for Policy Studies released a report that declared the World Bank was solely responsible for DOUBLING the world’s output of carbon by its overseas fossil fuel investments through the life of the investment.[2] This simple comparison of two different studies suggests that the DOUBLING of our Greenhouse Forcing into a range of 4.8 – 8.6 W/m2 may be anticipated in the next couple of decades.
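
The doubling argument is easy to restate numerically. The short Python sketch below simply doubles the GWIC forcing range quoted above and compares the result with the 7.5-10 W/m2 range said to completely alter seasonal characteristics.

    # Doubling the present man-made greenhouse forcing (GWIC figures quoted
    # above) and comparing with the range said to alter seasonal characteristics.

    present_forcing = (2.4, 4.3)   # W/m^2 today
    seasonal_shift = (7.5, 10.0)   # W/m^2 threshold quoted by GWIC

    doubled = (2 * present_forcing[0], 2 * present_forcing[1])
    print(f"doubled forcing: {doubled[0]:.1f}-{doubled[1]:.1f} W/m^2")
    print("upper end reaches the seasonal-shift range:", doubled[1] >= seasonal_shift[0])
    # prints 4.8-8.6 W/m^2; the upper end lies within the 7.5-10 W/m^2 range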

 

 The GWIC 1999 News Flash went on to further conclude:

 

 “The man-made alteration of energy balance in the General Circulation system determines how chaotic our atmospheric and oceanic systems will be...simple thermodynamics predicts an OSCILLATORY NATURE of the change in climate in any one ecological zone due to global warming. Global warming causes ‘extreme events’ and bad weather in the near term. In the long term it may cause the earth to transition to another equilibrium state through many ‘oscillations in climatic patterns.’ The magnitude of these oscillations could easily ‘exceed’ the difference between the end points.”

 

From chaos theory, the end points are where we start and where we end up. In other words, as the earth’s climate seeks a new equilibrium point, with the forcing function of increased energy input, it may get much hotter AND much colder with a vengeance as the climate goes haywire for an undetermined amount of time.

 

Make no mistake about it: the earth has now surpassed 300 ppm (parts per million) of CO2 (a potent greenhouse gas) for the first time in 400,000 years, according to ice core analysis by Tom Wigley from the National Center for Atmospheric Research. He also stated on a recent NOVA program that we need to cut fossil fuel use by 50% or more to stabilize CO2, even as energy demand is predicted to grow by 60% by 2020. Worse than that is the projected level of CO2 by 2050: an astounding 600 ppm! At the same time, Oxygen Inventory Depletion (OID) is occurring: worldwide levels of oxygen have decreased by 50-70 ppm since 1958, when the measurements were first taken.[3]

 

Need we mention that right now the Arctic ice is melting at a rapid rate? In 1999, scientists reported that 46 years of data documenting the declining extent of the Arctic sea ice yield a 98% probability that it is due to man-made causes.[4] The average annual temperatures in Alaska and Siberia have climbed as much as seven (7) degrees F in the past two decades, reducing sea ice thickness by about 40% from what it was in 1980.[5] Why is the loss of this natural heat sink important? The Arctic sea ice covers an area the size of the United States. Without this natural reflector of solar energy, the same area of exposed ocean water will absorb as much as 100 times more solar energy than ice. This new energy influx will, of course, simply ADD to the already accelerating global warming due to greenhouse gases.

 

To summarize, “experts believe human activities could be ending the period of relative climatic stability that has endured over the last 10,000 years, and that permitted the rise of agricultural and industrial society.”[6]

 

Is Global Warming Harmful to Health?

 

In a word: YES!

 

 “Computer models have predicted that global warming would produce several changes in the highlands: summit glaciers (like North Pole sea ice) would begin to melt, and plants, mosquitoes and mosquito-borne diseases would migrate upward into regions formerly too cold for them. All these predictions are coming true.”[7]

 

Dr. Epstein, Associate Director at the Center for Health and the Global Environment at Harvard Medical School, further reports that the West Nile virus, spread by mosquitoes, broke out for the first time in N. America just last year. Washington residents know that it had already spread to Maryland by October 2000. “Malaria and dengue fever are another two of the mosquito-borne diseases most likely to spread dramatically as global temperatures head upward.” Regarding these diseases, it is important to note that NO VACCINE is available and the causative parasites are becoming resistant to standard drugs. El Niños are expected to become more common and severe—which means that the diseases they produce could become more prevalent as well (such as waterborne diseases like cholera). He concludes that, “Cleaner energy sources must be put to use QUICKLY AND BROADLY, both in the energy-guzzling industrial world and in developing nations, which cannot be expected to cut back on their energy use...The world’s leaders, if they are wise, will make it their business to find a way to pay for these solutions.”

 

How Much will it take to Correct the Climate Problem?

 

“The Intergovernmental Panel on Climate Change, established by the United Nations, calculates that halting the ongoing rise in atmospheric concentrations of greenhouse gases will require a whopping 60% to 70% reduction in emissions.”[8]

They are not the only agency arriving at that conclusion. The Worldwatch Institute concurs, stating that “stabilizing atmospheric CO2 at safe levels will require a 60-80% cut in carbon emissions from current levels.”[9]

 

Can Oil Production keep up if we Ignore the Climate Change?

 

In a word: NO! If we just continue as we do today, with the selfish, business-as-usual attitude and clamor for more oil, do we stand a chance of enjoying a reasonable lifestyle for the next twenty years? Seeing that approximately 80% of the oil produced today comes from fields discovered before 1973, most of which are in decline, we must hesitate before coming to an optimistic conclusion. If we realize that the TOTAL world production of oil has increased less than 10% in the past two decades, then we might start to get concerned.[10] If we think about the fact that U.S. energy demand grows at a rate of 1.1% per year, from 95 to 121 quadrillion Btus (quads) by 2020, we must ask where the EXTRA 27% will come from. Transportation is rated by the U.S. Department of Energy as the most rapidly growing sector. Yet even as domestic crude oil production is projected to DECLINE from 6.3 to 5.3 million barrels per day by 2020, we gas-guzzling Americans naively believe that we can demand FROM SOMEWHERE a 30% increase, from 2.90 million barrels of oil per day to 3.81 million barrels of oil per day, by 2020![11]
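
The “EXTRA 27%” follows directly from compound growth. The Python sketch below assumes a roughly 22-year horizon (about 1998 to 2020), which the article does not state, together with the 1.1% growth rate and 95-quad starting point quoted above.

    # Compound growth of US energy demand: 95 quads at 1.1% per year.  The
    # 22-year horizon (roughly 1998 to 2020) is an assumption; the article
    # gives only the 95- and 121-quad endpoints.

    start_quads = 95.0
    annual_growth = 0.011
    years = 22

    end_quads = start_quads * (1 + annual_growth) ** years
    print(f"{end_quads:.0f} quads, {(end_quads / start_quads - 1):.0%} above today")
    # prints about 121 quads and 27%, matching the figures quoted above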

 

Instead, the OPEC nations, from which 50% of our imported oil comes, have a different story in mind for us. World production of oil is expected to peak by 2010 and then begin to decline, forcibly reducing supply.[12] Knowing this fact, give or take a few years, the OPEC nations decided to decrease their output of oil NOW, by only 1.2% in 1999, which drove prices up dramatically and caused a lot of oil-addicted nations to complain bitterly in protest. The protests had no effect on the producers. “OPEC Blames Taxes for High Oil Prices” read the headline in the Washington Post (9-29-00, p. A22), which went on to say:

 

  “Saudi Arabia is the only OPEC nation with the capability to boost oil production significantly, a move that would harm the finances of other member nations...”

 

The conclusion is obvious: It is nearly impossible, even with the “hard-line approach” advocated by G.W. Bush, to continually increase our imports of and addiction to oil even over the next ten years while OPEC is already beginning THE SQUEEZE. In September, 2000, the first OPEC summit in 25 years was held. As the U. S. and European Union called on OPEC to increase production, OPEC simply agreed to “provide adequate, timely and secure supplies of oil to consumers at fair and stable prices.” Of course that’s what any dominant dealer with 2/3 of the market will do! With Iraq selling the U.S. more oil than Kuwait is today, do we go to war over oil again?

 

Solving the Oil Consumption and Global Warming Problem Simultaneously

 

The clear answer to both dilemmas portrayed above is to begin a forced weaning process aimed at a government-mandated 1% reduction (based on Y2K usage) per year in oil consumption and/or oil imports for the next twenty years, with the second decade adding 1% to each year’s reduction. Phase I amounts to a mandatory reduction averaging 200,000 barrels of oil per day each year for the next ten years, yielding a 10% total reduction by 2010. Phase II, beginning in 2010, would increase the reduction by 1% each subsequent year (2%, 3%, 4%, etc.), yielding a 55% + 10% = 65% total reduction by 2020. At first, a gradual reduction in oil imports by a fraction of 1% could be mandated, with that fraction made up by tax-incentivized sales of domestic hybrid cars. The last few years of the decade would have reductions greater than 1% mandated. This could be called “The U.S. Energy Independence Initiative” or something similar. As a vital part of this process, a ten-year U.S. Energy Manhattan Project, with emergency funds allocated to emerging energy developments (many of which are already invented), is required for the successful replacement of current technology with carbon-free, fuel-less energy technologies.[13] A public education process needs to begin immediately as well, to prepare the industrial, transportation, and housing sectors for the transition.

 

The reason for an average of 1% reduction in oil usage per year is that within ten years, a total of 10% (based on Y2K usage) reduction will be achieved. By then, fuel-less, carbon-free energy generators will be commercially available. That starts Phase II where an increasing amount of oil will be taken away from the market each year, before the OPEC nations force the issue.
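
One way to check how the two phases add up is the short Python tally below. The “2%, 3%, 4%, etc.” ramp does not by itself sum to the quoted 55% for the second decade, so the sketch assumes a 1-to-10 percentage-point ramp for 2011-2020, a reading chosen only because it reproduces the stated 10% and 65% totals (all relative to year-2000 usage).

    # Cumulative oil reduction under the two-phase schedule described above,
    # relative to year-2000 usage.  The Phase II ramp (1, 2, ... 10 percentage
    # points per year for 2011-2020) is an assumed reading that reproduces the
    # quoted totals; the text does not spell it out unambiguously.

    phase_one = [1] * 10            # 2001-2010: one percentage point each year
    phase_two = list(range(1, 11))  # 2011-2020: assumed escalating ramp

    by_2010 = sum(phase_one)
    by_2020 = by_2010 + sum(phase_two)
    print(f"cumulative reduction by 2010: {by_2010}%")   # 10%
    print(f"cumulative reduction by 2020: {by_2020}%")   # 65%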

 

End the Present Suppression of Emerging Energy Technologies

 

From my experience, the present management of the U.S. Energy Department, State Department, and Commerce Department has engaged in an outright and successful attempt to prevent viable emerging energy technologies from reaching the market and the public. They have rescinded legitimate grants that had already been awarded, prevented allowed patents from being issued, blocked approved conferences from taking place, and distorted accurate news before it is reported. Furthermore, certain non-profit organizations, most notably the American Physical Society, have abused their non-profit status by heavily lobbying government agencies and the media to encourage such suppression.

 

For example, the Public Affairs Coordinator for the American Physical Society, Dr. Robert Park, has further used his position of power to unduly influence the government and the media to target certain individuals and inventions, even to the extent of defaming their character, mine included, and depriving them of their livelihood to suit his unscrupulous desires for scientific dominance.  The Patent Office, State Department, and Commerce Department have been found on numerous occasions to obey his suggestions/demands on a particular issue.  Examples and a chronology of such abuses have been cataloged.  Both the U.S. Department of Energy (DOE) and the U.S. Patent Office have, for example, made public statements that clearly discriminate against cold fusion, a viable new physics discovery that celebrated its tenth anniversary last year.  Their practices of rescinding nuclear energy research grants, or of recalling a patent that had already been issued a patent number and posted in the Official Gazette, show to what extent they will go to prevent anything resembling cold fusion from gaining recognition.  One explanation seems to stem from the $249 million that the hot fusion research programs (Tokamak and laser confinement) are already receiving in FY 2000.  However, these ongoing programs still do not have viable overunity output results even after decades of Federal DOE expenditures, and will not for at least another two decades, according to the U.S. DOE!  The suppression practices referred to above must stop in order to allow emerging energy technologies to reach the market.

 

Conclusion

 

In the short term, the development of a retrofit carburetor device for all cars that reclaims or transmutes the carbon from the exhaust can drastically reduce the emissions of CO2 from transportation vehicles. (The transportation sector presently contributes 33% of carbon emissions.)[14] Preliminary results from this type of device show a dramatic improvement in mileage as well, making it attractive for consumers.[15]

 

As the new fuel-less, carbon-free energy sources are brought to market, the reduction in oil demand will become easier and more acceptable.  If the U.S. Government establishes a timetable to meet the 65% reduction in CO2 emissions by 2020, ostensibly by targeting the importation of oil, the earth can reverse the climatic oscillations now beginning under the present Greenhouse Forcing.  I pray that our lawmakers will have the wisdom to adopt some of the above-mentioned measures to ensure our future.

 

The Author

See information following T. Valone’s second paper.

 


Future Energy Technologies

 

Thomas Valone M.A., P.E.

 

Integrity Research Institute, 1220 L St. NW #100-232, Washington, DC 20005

http://www.integrity-research.org

202-452-7674, 800-295-7674, Fax: 301-513-5728, email: iri@erols.com

 

 

Abstract

 

Today 85% of our country’s energy comes from the combustion of dead fossils, a dirty fuel that is forcing the world’s atmosphere to overheat. However, new 21st century energy sources that produce no carbon emissions and do not contribute to global warming are now emerging. Beyond the realm of fuel cells and hydrogen is the non-conventional world of “future energy.” Some of the best examples are new and exciting generators that release trapped potential energy from nature in ways never dreamed of before. Others innovatively apply clean fuels in conventional systems that are surprisingly simple and yet very efficient. Still others qualify as promising theoretical technologies that are a focus of attention for NASA and the USDOE. Most of them have one thing in common: they are very scientific but are relatively unknown to the general public. This presentation summarizes the latest breakthroughs in future energy. With scientific explanations of the input energy and output energy, the overunity efficiencies can be understood by average audience members. Included in the quantitative article are the inventions of Brown, Graneau, Jefimenko, Miley, Shoulders, Wallman, and others. The energy revolution is now beginning. It is time to understand the clean alternatives to dead, poisonous fuel.

 

Keywords: future energy, overunity, betavoltaic, biomass, COFE

 

Introduction

 

In 1998, the U. S. Department of Energy (DOE) issued its Comprehensive National Energy Strategy (CNES)[1] that included as one of its five goals, the following aspiration:

 

Goal IV: Expand future energy choices – pursuing continued progress in science and technology to provide future generations with a robust portfolio of clean and reasonably priced energy sources.   

  Objective 1. Maintain a strong national knowledge base as the foundation for informed energy decisions, new energy systems, and enabling technologies of the future.

  Objective 2.  Expand long-term energy options.

 

However, the DOE has not engaged in developing, much less maintaining, a robust knowledge base of future energy choices, nor has it expanded research into new energy systems or long-term energy options, mainly due to upper management decisions.  A study performed by Integrity Research Institute on the progress of the CNES two years later found, surprisingly, that the DOE has instead worked to actively suppress enabling technologies of the future.  Furthermore, concern for global warming and the expected increase in American carbon emissions clearly does not enter present DOE policies.  The DOE instead recently: (1) endorsed natural gas use for future generations, (2) rescinded a Nuclear Energy Research Initiative (NERI) grant awarded to a prominent professor for transmuting radioactive waste, and (3) reversed an initial offer to host a Conference on Future Energy (COFE).  Therefore, it is clear from these and many other DOE practices that it is up to the private sector to conduct scientific research into new energy systems and enabling technologies of the future in order to replace carbon-emitting fuel systems.

 

As a guideline, it is generally agreed that emerging energy technologies that qualify as true future energy must not produce carbon emissions nor contribute to global warming if we are to have a future planet earth. The reason for this is as Worldwatch Institute notes: “Stabilizing atmospheric CO2 concentrations at safe levels will require a 60-80 percent cut in carbon emissions from current levels, according to the best estimates of scientists.”[2]

 

Future Energy Overunity

 

To understand emerging energy principles, it is helpful to examine the operation of a heat pump, which converts environmental free energy into useful work.  The standard heat pump is a good example of an “overunity” system (energy out > energy in) that releases potential energy from the environment: the heat energy output is always in the range of 2 to 7 times the input electrical energy.  This so-called “coefficient of performance” represents an overunity efficiency that does not violate any physical laws if one considers, as the consumer does, how much energy must be put in to get the predicted energy output.  Thus the concepts of “overunity” and “free energy” have evolved from the consumer’s point of view: what does it cost to receive heat, air conditioning, cleaning, or propulsion outputs?  The closer it gets to “free,” the more desirable it is for the consumer and, we might add, for third world countries that cannot afford to build the thousands of miles of high-voltage wires (infrastructure) needed to support a centralized energy system.  Locally installed, modular heat and electricity generators will replace present utility-based service in the future.  Then large-area blackouts will be a thing of the past.  Energy will be, for the most part, a one-time investment included in the house, car, or spaceplane of one’s choice.  However, much needs to be done for these systems to supplant the established energy businesses that are the nation’s major polluters.  A commitment to a carbon-free energy economy, with financial backing, is required for such large changes to take place.
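
As a concrete illustration of this consumer’s-point-of-view accounting, the Python sketch below computes a heat pump’s coefficient of performance; the COP of 3 is an assumed value within the 2-to-7 range quoted above.

    # Heat-pump coefficient of performance: useful heat delivered divided by
    # the electricity purchased.  The ratio exceeds one without violating any
    # physics, because the balance of the energy is drawn from the environment.
    # A COP of 3 is assumed here, within the 2-7 range quoted above.

    electricity_in_kwh = 1.0   # what the consumer pays for
    heat_out_kwh = 3.0         # heat delivered indoors (assumed COP of 3)

    cop = heat_out_kwh / electricity_in_kwh
    from_environment = heat_out_kwh - electricity_in_kwh
    print(f"COP = {cop:.1f}; {from_environment:.1f} kWh drawn from the environment")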

 

Cold Fog Discovery

 

Many other systems exist today, in a research, development, or theoretical stage, which also convert potential energy into useful work. The first example is the “Cold Fog” invention of Dr. Peter Graneau from Northeastern University that converts chemical bond energy into kinetic energy. Intermolecular bond energy in water is an available amount of energy estimated at 2.3 kJ/g. When injected with a high voltage capacitor discharge of 39.8 Joules, normal rainwater is accelerated into a cold fog that loses about 31.2 Joules of low-grade heat and a comparable amount (29.2 Joules) in fog kinetic energy output. As reported in the Journal of Plasma Physics,[3] the output energy thus exceeds the input energy by about 100% creating a 2-to-1 overunity condition favorable for reduction to a motorized conversion system.
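
A quick scale check on the bond-energy figure quoted above (a Python sketch using only the numbers reported in this paragraph): at 2.3 kJ per gram of available intermolecular energy, only a few milligrams of water would need to give up that energy to supply the reported fog kinetic energy.

    # How much water, at the quoted 2.3 kJ/g of available intermolecular bond
    # energy, would account for the reported 29.2 J of fog kinetic energy.
    # All figures are those reported above; no independent measurement implied.

    bond_energy_j_per_g = 2300.0   # 2.3 kJ/g, as quoted
    fog_kinetic_j = 29.2           # reported kinetic energy per discharge

    water_needed_g = fog_kinetic_j / bond_energy_j_per_g
    print(f"about {water_needed_g * 1000:.0f} mg of water could supply the kinetic energy")
    # roughly 13 mg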

 

 

       Figure 1.  Cold Fog Energy Flow (capacitor input energy: 39.8 Joules)

 

 

 

 

Betavoltaic Battery

 

The next technology of importance is the betavoltaic battery invention of Dr. Paul Brown (U.S. Pat. #4,835,433).  It involves a benign nuclear source called tritium (an isotope of hydrogen) that simply emits an electron (5.7 keV beta particle) over its half-life of 12.5 years.  The useful battery life is thus estimated to be about 25 years.  It is a cheap, long-life, high energy density battery with a wide range of applications.  Presently, Lucent Technologies has been contracted to produce the tritiated amorphous silicon for use in the semiconductor industry and even for watch batteries.  The amorphous silicon is placed between two electrodes in order to complete the battery construction.  The batteries have a mean power density of 24 watts per kilogram and are ideal for low-power, long-life applications [4].  It is clear that no recharging of these batteries is ever needed.  The disposal is even safer than disposing of smoke detectors.
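
The 25-year useful-life estimate follows from simple half-life arithmetic; the Python sketch below uses the 12.5-year half-life figure given above.

    # Tritium activity remaining over the battery's quoted service life, using
    # the half-life given above.  After about two half-lives, roughly a quarter
    # of the original activity (and hence beta current) remains.

    half_life_years = 12.5
    service_life_years = 25.0

    remaining = 0.5 ** (service_life_years / half_life_years)
    print(f"activity remaining after {service_life_years:.0f} years: {remaining:.0%}")
    # prints 25%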


 


                                                  Figure 2. Tritium Battery                                 

 

Nuclear Remediation

 


It is worthwhile mentioning that Dr. Brown's other endeavor may give a boost to the nuclear power industry.  He has discovered that low energy gamma rays (photons) on the order of 10 MeV can function as an effective agent to transmute nuclear waste into short-lived isotopes, acceptable for burial anywhere.  The remediation project is spearheaded by International Fission Fuels, Inc., which plans to build a pilot plant to accept nuclear waste of any type and generate electricity at the same time.  The Battelle Institute, Brookhaven Labs, and Los Alamos Labs have all been involved in the planning and testing stages of this new technology.  Dr. Brown presented details of this invention at COFE [4].  Also, the State Department recently connected him with foreign markets that have assisted in proving its worth.

                                               

Figure 3. Accelerator Driven Reactor

 

 

Electrostatic Motors

 

Figure 4.  Electrostatic Motor Model

The next energy breakthrough is Dr. Oleg Jefimenko's electrostatic motors. First demonstrated by Ben Franklin in the 18th century, electrostatic motors are an all-American invention. They are based on the physics of the fair-weather atmosphere, which has an abundance of positive electric charge up to an altitude of 20 km; the greatest concentration is near the ground and diminishes rapidly with altitude. Dr. Jefimenko found that when sharp-pointed antennas are made long enough to reach at least the 6000-volt threshold potential, they can tap the fair-weather current density of about a picoampere per square meter; such antennas produce about a microampere of current. Alternatively, small radioactive-source antennas may be used instead, which have no threshold voltage and therefore no height requirement. Similar to a nuclear battery design of Dr. Brown's, these antennas ionize the air in their vicinity and deliver larger currents depending upon the radioactive source used (alpha or beta). Electrostatic motors are lighter than electromagnetic motors of the same output power because the force-producing field occupies the entire volume of the motor; it is expected, for example, that a motor one meter on a side will provide a power of one megawatt and weigh 500 kg or less. Electrostatic motors also require very little metal in their construction and can be built mostly of plastic, and they can operate from a variety of sources over a range of voltages. As Dr. Jefimenko points out, "It is clear that electrostatic motor research still constitutes an essentially unexplored area of physics and engineering, and that electrostatic motor research must be considered a potentially highly rewarding area among the many energy-related research endeavors.”[5] The atmospheric electric power available planet-wide is not less than 200,000 megawatts. Jefimenko has succeeded in constructing demonstration motors that run continuously off atmospheric electricity; his largest-output motor was an electret design rated at 0.1 hp.[6] Certainly the potential for improvement and power upgrades exists with this free-energy machine.

 

Biomass Gasification

 

Figure 5. Gasification Demo (photo: Alternative Energy Institute)
Clean fuels are difficult to find today. One example that satisfies a limited definition of "clean" is the carbo-hydrogen gas produced from biomass. David Wallman has patented the process for producing COH2 from a high-voltage discharge through any biomass solution (Pat. #5,417,817). This gas burns cleanly, producing water vapor and only the amount of CO2 that was originally absorbed by the biological mass when it was growing in the ground. Contrast this with burning fossil fuels (oil and natural gas), which resurrect old buried carbon and add it to the atmosphere from ancient cemeteries in the ground; biomass gas burning instead recycles recently absorbed atmospheric carbon dioxide. The input energy is typically about a thousand watt-hours, or about 3300 BTU, to produce about 250 liters per hour of carbo-hydrogen (8.5 cubic feet per hour). With a heating value of over 500 BTU per cubic foot, the COH2 output energy exceeds 4000 BTU, often approaching 5000 BTU in high efficiency designs. Thus, this biomass gasification process has an overunity efficiency of about 125% to 150%. However, when the entire energetics of the system are accounted for, including the ultraviolet light radiation, heat loss, etc., estimates of 200% to 400% are reasonable. Again, this process is a largely untapped resource, while millions of gallons of farm-produced liquid biomass go to waste. Demonstrations of pilot plant designs are available from Wallman's company to replace present dependence on foreign oil (which is a fossil fuel). Municipal sewage treatment is a logical application for this invention.[7]
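
The quoted overunity figure follows from a simple BTU balance; the Python sketch below restates only the numbers given in this paragraph.

    # Energy balance for the biomass gasification figures quoted above.
    # These are the article's numbers; nothing here is independently measured.

    input_btu = 3300.0              # roughly 1 kWh of electrical input
    gas_output_cuft = 8.5           # about 250 liters of COH2 per hour
    heating_value_btu_per_cuft = 500.0

    output_btu = gas_output_cuft * heating_value_btu_per_cuft
    print(f"output {output_btu:.0f} BTU vs input {input_btu:.0f} BTU "
          f"-> ratio {output_btu / input_btu:.2f}")
    # about 4250 BTU out, i.e. a ratio near 1.3 (the quoted 125-150% range)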

Figure 6. Biomass Gasification

 

 

Charge Clusters

 

An unusual energy source is the clustering of electrons by a discharge needle into a high-density bundle approaching the atomic density of a solid[8]. Ken Shoulders has patented a process (Pat. #5,153,901) that produces electron clusters of such high energy density that, upon impact, they produce effects equivalent to temperatures exceeding 25,000 degrees Celsius. Yet he uses only 20 microjoules to produce the effects. The clusters travel at up to one tenth of the speed of light and penetrate any substance with sharp precision. The process is similar to xenon clustering techniques currently used at megavolt energy levels. Low-energy nuclear transmutation of the target has also been achieved with this process: when a deuterium-loaded palladium foil is bombarded with electron clusters, X-ray analysis shows transmutation into silicon, calcium, and magnesium only in the bombarded areas. Fox has postulated that the high-velocity electron clusters achieve results similar to ion accelerators, including penetration of the nucleus, with substantially less power. The new physics of like charges clustering in bundles under low-power conditions opens a wide range of applications, including spacecraft maneuvering microthrusters. The overunity efficiency is 9 to 1.

Figure 7. Charge cluster borehole into lead glass. The hole is about 10 microns in diameter. Penetration is about 1 mm per kV. The slowest speed clocked has been 1 cm in 50 nanoseconds. With an estimated 100 billion electrons carrying 100,000 positive ions, the kinetic energy exceeds 180 microjoules. It has been suggested that a Casimir effect pushes the electrons together, overcoming the Coulomb repulsion of like charges. (Photo credit: Ken Shoulders)

Thin-Film Electrolytic Cell Power Unit

 

A product with the consumer in mind is Dr. George Miley's invention, which produces about one watt per cubic centimeter of electrolyte[9]. It uses a flowing packed-bed electrolytic cell with 1-molar LiSO4 in light water and small (1-mm diameter) plastic beads coated with a thin (500-1000 angstrom) film of metal (nickel, palladium, or titanium); a special sputtering technique is used to deposit the metal film. With 2-3 volts and only 1-5 milliamperes of current, the single-film experiments produce an excess power ten times the input power! (The input power is at most 0.01 watts, while one half of a watt of heat is produced.) Observed power densities were 1 W/cc and above. It is also apparent that the physics of this reaction involves nuclear transmutations as well. As Dr. Miley notes: “The key finding from these studies has been the observation of a large array of “new” elements (i.e. different from the bead coating), many with significant deviations from natural isotopic compositions, after the run. Great care has been made to insure that these elements are distinguished from isotopic impurities by use of a “clean cell” with high purity components/electrolyte, in addition to the pre- and post-run analyses.” Even low-energy radiation was detected from the beads days after each experiment. A space-power application, providing a 1-kW cell with only 500 cc of active electrode, is predicted. Note that this particular invention, with its large overunity energy yield, was awarded a NERI grant by the DOE, which was then promptly withdrawn after certain individuals pressured the DOE into re-evaluating its grant to Professor Miley. The politics that override such grant decisions by the DOE Office of NEST are highly questionable.

 

Conclusion

 

Future energy choices are already here. In spite of the DOE's lack of initiative in long-range energy solutions, the private inventors surveyed in this article have pioneered energy discoveries with a range of energy-production possibilities. With Dr. Graneau’s cold fog demonstrating a new energy source and a possible propulsion source, developmental efforts are ongoing with Hathaway Labs in Toronto to maximize the energy transfer to a useful machine for market. Dr. Brown’s tritium battery, now in production, is a milestone for long-term energy demand, while his nuclear remediation project is progressing rapidly. Dr. Jefimenko’s electrostatic motors clearly demonstrate an available energy source yet untapped. Wallman’s biomass gasification is ready to be developed on a large scale. Shoulders’ charge clusters demonstrate extraordinary energy production on a microscopic scale with reasonable upscaling anticipated.  Dr. Miley’s electrolytic power unit also shows an extraordinary energy output, which deserves more research and development support. Other inventors that meet the future energy criteria include Dr. Deborah Chung, from the State University of N.Y. at Buffalo, who has discovered “negative” resistance in carbon fibers[10]. Another is James Griggs, whose hydrosonic pump (Pat. #5,385,298) is an overunity “apparatus for heating fluids” that even exhibits sonoluminescence (now marketed by HydroDynamics in Rome, Georgia). Dr. Paulo Correa also qualifies with his pulsed abnormal glow discharge (PAGD) energy conversion system[11]. It is our belief that all of these inventions are qualified to contribute to our energy future. Also, theoretically and experimentally, there is growing support for a breakthrough in zero point energy conversion[12], which is the subject of more than one patent, the most recent being Dr. Frank Mead’s patent #5,590,031. Furthermore, the extraction of energy and heat from the vacuum has also been proposed by Drs. Harold Puthoff and Daniel Cole[13].  Certainly, if a mere 2.6% disruption in the oil flow from the Mid-East in 1999 can cause immediate chaos in gasoline prices in this country, we desperately need to cut the umbilical cord that is strangling us. Therefore, a more robust energy development effort is required to help us make the transition from dangerous fossil fuels. A more stable, long-term energy future is possible with new energy sources like those discussed in this article.

 

 

 

References

 

1)     Comprehensive National Energy Strategy, U.S. Dept. of Energy, April, 1998, DOE/S-0124, (National Energy Policy Plan) available at http://www.hr.doe.gov/nesp/cnes.html

2)     State of the World 1999, Brown, Flavin, and French, W.W. Norton & Co., New York

3)     Hathaway, Graneau, and Graneau, “Solar-Energy Liberation from Water by Electric Arcs”, J. Plasma Physics, Vol. 60, Part 4, p. 775-86. ghathaway@ieee.org

4)     Brown, Paul, “Betavoltaic Batteries” and “Effective Radioactive Waste Remediation,”  Proceedings of the First International Conference on Future Energy, (Proceedings of COFE), p. 19 & 123, Integrity Research Institute, 1999, ISBN #0-9641070-3-1 (Alternatively, COFE CD-Proceedings on CD-ROM has twenty hours of lectures added in digital audio.) Brown’s email: brown@fissionfuels.com  

5)     Jefimenko, Oleg, “Electrostatic Energy Resources, Electrostatic Generators, and Electrostatic Motors,” Proceedings of COFE, p. 195

6)     Jefimenko, Oleg, Electrostatic Motors, Electret Scientific Co., Star City, WV, 1973 (future editions to be published by Integrity Research Inst.)

7)     Wallman, David, “Carbon Arc Gasification of Biomass Solutions,” Proceedings of COFE, p. 30.  (1350 Northface Ct., Colorado Springs, CO 80919) WD.Wallman@worldnet.att.net

8)     Shoulders, Ken and Steve, “Charge Clusters in Action”, Proceedings of COFE, p. 7 (P.O. Box 243, Bodega, CA 94922) email: krscfs@svn.net and Infinite Energy, “Charge Clusters in Operation,” Jan-Feb, 1997, p. 62

9)     Miley, George, “Emerging Physics for a Breakthrough Thin-Film Electrolytic Cell Power Unit”, AIP Conference Proceedings #458, STAIF 1999, pp. 1227-1231. Reproduced with permission in Proceedings of COFE, p. 140. email: g-miley@uiuc.edu

10)  Chung, Deborah, SUNY at Buffalo, 608 Furnas Hall, Buffalo, NY 14260

11)  Correa, Paulo, “Excess Energy Conversion System Utilizing Autogenous Pulsed Abnormal Glow Discharge,” Proceedings of COFE, p. 150 (Labofex Laboratory, 42 Rockview Gardens, Concord, Ontario L4K 2J6) email: lambdac@globalserve.net

12)  Valone, Thomas, “Understanding Zero Point Energy,” Proceedings of COFE, p. 58

13)  Cole and Puthoff, “Extracting energy and heat from the vacuum,” Physical Review E, vol. 48, No. 2, August, 1993.

 

 

The Author

Mr. Thomas Valone has degrees in electrical engineering and physics and is a professional engineer.  He is presently the President of the Integrity Research Institute in Washington, DC, providing technical consultation for engineering and law firms, authors, and video producers.  Clients include Lightworks AV, Alternative Energy Institute, Starburst Foundation, The Magnetizer Group, Saladoff and Associates, ELF International, Sachs-Freeman Associates, AquaQueen, and Newline Investments.  Services provided include electrical product design/development, engineering testing, expert testimony and opinion, magnetic field measurement and shielding, and circuit design.  He is the Editor of the Future Energy newsletter and has 3 books and 50 articles in print covering a broad range of engineering and energy issues.

 


Moving Beyond the First Law and Advanced Field Propulsion Technologies

Paul A. LaViolette, Ph.D.

gravitics1@aol.com

October 4, 2000

 

 

1. The Repression of Nonconventional Energy Technologies.

      According to U.S. patent law, an inventor has the right to be issued a patent if the technology is new and if it works.  There is nothing in the legal code that says the technology must conform to theories of physics or chemistry as they happen to be defined by certain academic science societies.  Unfortunately, administrators of the U.S. Patent and Trademark Office (PTO) have been illegally blocking the issuance of patents on new technologies that challenge current scientific thinking.  This discrimination is often carried out in response to lobbying by Robert Park, who is Director of Public Information of the American Physical Society (APS), and by his affiliates.  The process usually begins with a media smear campaign aimed at defaming the inventors of nonconventional technologies or at embarrassing PTO examiners who hold scientific views the lobbyists disagree with.  This group then emails these media attacks to PTO administrators, or calls PTO officials with whom they have developed close associations to voice their dissatisfaction.  The PTO administrators then respond in a knee-jerk fashion to this outside pressure, either making sure that certain patents do not issue or reprimanding or even firing examiners who take an open-minded approach to considering such new technologies.

      An example is the BlackLight Power Corp. case.  BlackLight's inventor Randall Mills has developed a process for producing large amounts of energy from normal tap water.  This is the kind of technology that we need to solve the present energy crisis.  The reality of this technology has been independently verified by other scientific laboratories.  Yet, Mills and his company have been repeatedly attacked by this APS lobby through Robert Park's news postings on the society website, derisive editorials written in mainstream science magazines, in lectures at the 1999 APS annual meeting, and even in a book authored by Park.  Because this technology challenges the currently popular theories of physics, this lobby has unjustly branded it as being fraudulent.  PTO administrators obediently responded to this outside pressure by unlawfully withdrawing one of BlackLight's patents after it had already been slated to issue in February 2000.  One of the PTO officials who was involved in taking this action has admitted that they did this in response to media attacks leveled against BlackLight.  The company is now suing the Department of Commerce for this travesty of justice.

      Another example concerns a patent awarded in February 2000 on an invention capable of sending communications faster than the speed of light.  Witnesses attested that the invention worked as claimed.  Yet shortly after the patent had issued, believing that the invention violated the theory of special relativity, Park posted a news item on the APS website which made fun of the PTO for having issued the patent.  Arrangements were even made to have one patent website proclaim it to be the most ridiculous patent of the year.  Papers published in refereed physics journals have described laboratory experiments in which waves have been made to travel faster than the speed of light.  Yet disregarding this evidence, the Commissioner decided to side with the APS lobbyists.  He severely reprimanded the patent examiner who had issued the patent and also threatened to fire his supervisor.

      Also there is the case of the firing of two patent examiners, Tom Valone and Paul LaViolette.  Park and the APS lobby had been ridiculing them because they had an interest in nonconventional energy technologies and because they were involved in organizing a conference that included papers on nonconventional energy technologies.  They attacked the examiners in postings on the APS website, in magazine editorials, and in lectures presented at the 1999 annual APS meeting where they admitted to their ongoing efforts to secure the removal of anyone at the PTO who sympathized with cold fusion technology.  They also initiated an email campaign to PTO officials as well as made personal contacts with PTO officials.  Within a day of this email blitz, Paul LaViolette was given notice of termination and proceedings were begun against Tom Valone which resulted in his removal 5 months later.  Both examiners at the time had a commendable record of job performance.  Both examiners now have Justice Department litigation pending on this matter.

      As a result of similar discrimination, government research moneys are routinely withheld from companies or individuals trying to develop such cutting-edge ideas, all in the name of preserving an outmoded set of theories.  Government officials need to recognize that a working technology should not be suppressed just because it lies outside of the current scientific paradigm and produces results that refute that paradigm.  The goal should be to solve society's problems, not to reaffirm outmoded theories espoused by today's enfranchised physicists and chemists.

 

2. The Nonconventional Energy Technology Bill of Rights.

Nonconventional technologies may be our only hope for solving the problems that presently lie ahead of us, but they are currently the underdog.   We need an affirmative action program to educate government agencies and mainstream media to develop a more positive attitude toward nonconventional technologies, to treat the researchers of these technologies in a fair manner, and to stop engaging in witch hunts.  If we are going to deal with the problems we face, the scientific community needs to make a radical paradigm shift.  They have to adopt a radically different attitude with respect to what is possible and what is not.  There is not much time.

 

3. The First Law of Thermodynamics is not inviolable.

      The First Law of Thermodynamics states that energy may be neither created nor destroyed.  But there is evidence that nature routinely violates the First Law. 

Energy creation: The discovery that the Jovian planets (Jupiter, Saturn, Uranus, and Neptune) lie along the same luminosity trend line as stars of the lower main sequence (e.g. red dwarfs) throws a monkey wrench into theories of how stars generate their energy.  Nuclear energy cannot explain this correspondence.  One very simple solution to this problem is that a photon's energy is not constant: photons inside celestial bodies slowly blueshift, increasing their energy over time.  Thus energy is being continuously created in stars throughout the universe.  This so-called "genic energy" emerges as a prediction of a new physics methodology called subquantum kinetics.  Since red dwarfs make up most of the stars in our Galaxy, genic energy may as a rule be the dominant energy creation mechanism.  Nuclear energy becomes important only in the much rarer, massive stars such as our Sun.  Consequently, most of the stars in the universe may run on "free energy" in violation of the First Law.

      Although this rate of energy creation is ten orders of magnitude smaller than what can be detected in laboratory experiments, it nonetheless weakens the arguments of those who maintain that the First Law is an inviolable doctrine of nature.  If nature violates it, why can't we violate it also?  Physicists need to make a major shift in thinking, shed their linear models which predict that there is no such thing as a free lunch, and embrace the newly emerging nonlinear models which allow the possibility that matter and energy may be created and destroyed.

 

4. Gravity Field Propulsion is Real:  Townsend Brown's Technology of Electrogravitics.

      In the mid 1920's Townsend Brown discovered that electric charge and gravitational mass are coupled.  He found that when he charged a capacitor to a high voltage, it had a tendency to move toward its positive pole.  This became known as the Biefeld-Brown effect.  His important findings were opposed by conventional minded physicists of his time. 

      The Pearl Harbor Demonstration.  Around 1953, Brown conducted a demonstration for top brass from the military.  He flew a pair of 3-foot-diameter discs around a 50-foot course, tethered to a central pole.  Energized with 150,000 volts and emitting ions from their leading edges, they attained speeds of several hundred miles per hour.  The subject was thereafter classified.

      Project Winterhaven.  Brown submitted a proposal to the Pentagon for the development of a Mach 3 disc shaped electrogravitic fighter craft.  Drawings of its basic design are shown in one of his patents.  They are essentially large scale versions of his tethered test discs.

      Aviation Studies International.  This think tank produces intelligence studies for the military.  In 1956 it issued a report entitled "Electrogravitics Systems" which called for major government funding to develop Townsend Brown's electrogravitics technology and make Project Winterhaven a reality.  The report stated that most of the aerospace industry was actively researching this antigravity technology.  It named companies such as Glenn-Martin, Convair, Sperry-Rand, Bell, Sikorsky, Douglas, and Hiller.  Other companies who entered the field included Lockheed and Hughes Aircraft, the latter being regarded by some as the world leader in the field.  This report was initially classified and was missing from the Library of Congress collection.  The Library's staff made a computer search and found that the only other known copy was located at Wright Patterson Air Force Base.  I obtained it from there through interlibrary loan.  It is now published in the book Electrogravitics Systems, T. Valone (editor).

      Northrop's Wind Tunnel Tests.  In 1968, engineers at the Northrop Corp. performed wind tunnel tests in which they charged the leading edge of a wing to a high voltage.  They were investigating how this technique could be used beneficially to soften the sonic boom of aircraft.  Hence they were performing large scale tests on Brown's electrogravitic concept.  Brown's R&D company had previously made known that sonic boom softening would be a beneficial side effect of this electrogravitic propulsion technique.  Interestingly, Northrop later became the prime contractor for the B-2 bomber.

      The B-2 Bomber.  In 1992, black project scientists disclosed to Aviation Week and Space Technology magazine that the B-2 electrostatically charges its exhaust to a high voltage and also charges the leading edge of its wing-like body to the opposite polarity.  This information led Dr. LaViolette in 1993 to reverse-engineer the B-2's propulsion system.  He proposed that the B-2 is essentially a realization of Townsend Brown's patented electrogravitic aircraft.  The B-2 is capable of taking off under normal jet propulsion.  But when airborne, its electrogravitic drive may be switched on for added thrust.  This system can only be turned on under dry conditions.  If the B-2's dielectric wing were to become wet, the applied high-voltage charge would short out, which explains why the B-2 is unable to fly in the rain.  With electrogravitic drive, the B-2 is able to drastically cut its fuel consumption, possibly even to zero under high-speed flight conditions. 

      The commercial airline industry could benefit dramatically from this technology, which would not only substantially increase the fuel efficiency of jet airliners but would also permit high-speed flight that would dramatically cut flight times.

      Subquantum Kinetics Predicts Antigravity Effects.  General relativity doesn't explain the Biefeld-Brown electrogravitic effect or any other antigravity phenomenon, since it predicts that masses have just one gravitational polarity and should only attract one another.  It allows the possibility of charge-mass coupling only at very high energies, such as those attainable in particle accelerators far more powerful than any thus far built.  The subquantum kinetics physics methodology, however, offers a much needed answer to the insufficiencies of relativity.  It predicts that gravitational mass should have two polarities (+ and -) and that these mass polarities should be correlated with the charge polarity of a particle.  According to subquantum kinetics, Brown's electrostatic disc should establish a gravitational field gradient from front to back which has the effect of propelling the disc forward.  The movement of the charges may contribute an even larger thrust effect.  The same would apply to the B-2 bomber.

 

5. Other Advanced Aerospace Propulsion Technologies.

      The Searl Electrogravity Disc and Russian Experiments.  This device, developed over 40 years ago by the British engineer John Searl, consisted of a segmented rotating disc each of whose segments was supported by a set of cylindrical permanent magnets rolling within a circumferential track.  It is alleged to have achieved complete lift off.  In the past few years two Russian scientists associated with the Russian National Academy of Sciences, Roschin and Godin, have built a simplified version of the Searl Disc that confirms its anomalous weight loss effects.  They spun a 1 meter diameter disc at 600 rpm and obtained a 35% reduction in its weight while at the same time generating a 7 kilowatt excess electric power output.

      The Podkletnov Gravity Shield and Project Greenglow.  A research team in Finland led by Dr. Podkletnov was experimenting with a rotating superconducting disc floated on a repelling magnetic field generated by a series of electromagnets.  In 1996, the team reported that the disc was able to partially screen the Earth's gravitational field, reducing the weight of objects positioned above the disc by two percent.  Greater weight reductions are envisioned by stacking several discs over one another.  Besides propulsion, there are obvious applications in tapping the resulting gravity differential for mechanical power generation.  In the last few years, BAE Systems, a company formed by the merger of British Aerospace with Marconi Electronic Systems, has been researching the Podkletnov gravity shield.  It is doing this work under Project Greenglow, a project set up to investigate the feasibility of nonconventional technologies.
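
A minimal sketch of the stacking idea mentioned above.  The combination rule is an assumption of this sketch, not a claim of the report: it supposes each disc independently screens 2% of whatever gravitational field reaches it, so screening compounds multiplicatively.

single_disc_screening = 0.02   # 2% weight reduction reported for one disc in 1996

def weight_reduction(n_discs: int) -> float:
    """Fractional weight reduction above a stack of n discs under the assumption above."""
    return 1.0 - (1.0 - single_disc_screening) ** n_discs

for n in (1, 2, 5, 10):
    print(n, "discs ->", round(100 * weight_reduction(n), 1), "% weight reduction")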

      The De Aquino Antigravity Effect.  A Brazilian university professor, Fran De Aquino, has produced a 50% weight reduction in a 2-foot-diameter, annealed pure iron toroid weighing 77 pounds.  He does this by internally energizing the toroid with 10 kilowatts of 60-cycle electromagnetic radiation.  His data predict that complete weightlessness of the toroid could be achieved with a 15-kilowatt power input.

      Gravito-Inertial Lift System.  Aerospace engineer Jim Cox has recently improved on the Dean Drive, an inertial propulsion engine that was patented in May 1959.  He reports tests demonstrating an upward thrust equal to 90% of the engine's weight.  It uses a 1/4-horsepower motor to revolve two counter-rotating rotors, each about 1 cm in diameter, spinning them at about 600 rpm for a power consumption of about 200 watts.  The lift is obtained by sinusoidally oscillating the rotors up and down and coupling them to the lift platform on their upward stroke.  He obtains about 45 pounds of lift force per horsepower (~55 pounds/kW).  He plans by the end of the year to have a freely lifting device whose rotors would be spun to 1200 rpm by a 1/2-horsepower motor drawing 400 watts.  He estimates that using this technology a 200-horsepower automobile engine would be capable of generating a lift force of about 9000 pounds.
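
A minimal check of the scaling quoted above, using the article's own figures (not independent measurements):

lift_per_hp_lb = 45.0     # "about 45 pounds of lift force per horsepower"
hp_to_kw = 0.7457

print(round(lift_per_hp_lb / hp_to_kw), "lb per kW")              # ~60 lb/kW (the text rounds this to ~55)
print(lift_per_hp_lb * 200, "lb of lift from a 200 hp engine")    # 9000.0, matching the estimate in the text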

      Kineto-baric Field Propulsion.  German scientist Rudolph Zinsser discovered that sawtooth electromagnetic waves could be made to push distant objects.  He produced a radio-tube circuit that transmitted 45-megahertz radio waves having a sharp rise and gradual fall.  His experiments demonstrated that these waves could exert impulses of up to 10^4 to 10^5 dyne-seconds, which is equivalent to the application of roughly 1 to 3 ounces of force for a period of one second.  He found that this force could be generated with an amazingly low input power, the ratio of output force to input power surpassing that of conventional propulsion methods by several powers of ten.  His projections imply a thrust of 1350 pounds of force per kilowatt.
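
A minimal unit-conversion sketch for the impulse figures quoted above, expressing them as an equivalent steady force applied for one second:

DYNE_TO_NEWTON = 1.0e-5
OUNCE_FORCE_IN_NEWTONS = 0.278     # weight of one ounce

for impulse_dyne_s in (1.0e4, 1.0e5):
    force_n = impulse_dyne_s * DYNE_TO_NEWTON    # steady force for a one-second pulse
    print(f"{impulse_dyne_s:.0e} dyne-s ~ {force_n:.2f} N ~ {force_n / OUNCE_FORCE_IN_NEWTONS:.1f} ozf for 1 s")
# Roughly 0.4 to 3.6 ounces of force, in line with the 1-to-3-ounce range in the text.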

      Field Thrust Experiments on Piezoelectrics.  James Woodward, a physics professor at Cal State Fullerton, is conducting research that indicates that electromagnetic waves can induce lofting forces in piezoelectric ceramic media.  His ideas are described in a 1994 U.S. patent and in a 1990 physics journal article.  Woodward has conducted experiments that confirm this thrust effect in the audio frequency range (~10,000 Hertz), and his calculations suggest that it may be substantially increased at higher frequencies, with optimal performance being obtained in the microwave range (0.1 to 10 gigahertz).  His work has received some support from DoE.

 

The Author

Since 1984, Dr. Paul LaViolette has been president of the Starburst Foundation, an institute that conducts interdisciplinary scientific research in physics, astronomy, geology, climatology, systems theory, and psychology.  He has degrees in physics and systems science and has authored four books: The Talk of the Galaxy (2000), Earth Under Fire (1997), Beyond the Big Bang (1995) and Subquantum Kinetics (1994).  In addition he has authored 34 papers appearing in books, scientific journals and conference proceedings, on topics ranging from subquantum kinetics and unified field theory, to polar ice cores, to the big bang theory, to antigravity research.  He is listed in the 1996 edition of Marquis Who's Who in Science and Engineering.

 

 


Accountability and Risk in the Information Era: Lessons Drawn From the “Cold Fusion” Furor

 

Prepared by Dr. Scott Chubb, Naval Research Laboratory

October 6, 2000

 

Background

            Nature does not lie.  But it can fool us.  Also, we frequently fool ourselves.  When media attention, the politics of money and prestige, the possibility of extraordinary wealth, and the fear of embarrassment also become part of the equation, the resulting situation can rapidly escalate into a minefield of confusion.  For this reason, “taking risks,” especially in areas involving science and technology, can always be dangerous.  When opinion becomes part of the process, risk-taking can take on an identity of its own.

 

An extreme example of this occurred eleven years ago when Stanley Pons and Martin Fleischmann (PF) took an “extraordinary risk” by “implying” that it was possible to “create a room-temperature hydrogen bomb in a test-tube”1.  Almost immediately, their “suggestion for new research” was not only “discredited”; with time, scorn and ridicule (even open harassment) routinely became part of the lives of individuals who have paid attention to it2.  However, despite the apparent meltdown in public opinion about Cold Fusion (CF), CF research has continued.  An obvious question is why?

 

What’s New:

 

Clearly, one might ask one of two questions: 1. Are those who remained involved fooling nature, or themselves?  2. Have those who are responsible for harassing those who remained involved themselves been fooled?  In fact, at the core of both questions are two key issues: 1. To what degree can individuals (or groups of individuals) take risks while avoiding the appearance of being foolish, and when or how (as a result of policy decisions, for example) can that perception be managed so that taking an apparently foolish risk becomes useful?  2. Given the need to satisfy budget constraints and to be persuasive and credible, how do we deal with ideas that are difficult to accept?

 

            Recently, while serving as guest editor of an ethics-in-science journal titled Accountability in Research3, I dealt with this issue.  Specifically, I asked a number of senior individuals on both sides of the Cold Fusion debate to address the following question: regardless of whether or not Cold Fusion (CF) claims have merit, were (or are) there lessons that can be learned from the ongoing situation?  Almost universally, the various authors agreed on three general ideas: 1. “Normal” scientific discussion about CF ended at a very early stage; 2. This breakdown of normal scientific discussion is not widely recognized outside the field; and 3. Although the reasons for the breakdown are not clear, the failure to hold particular individuals or institutions accountable for past actions has been largely responsible for this problem.  Implicit in these assertions is an obvious point: Cold Fusion was, and is, a “risky” form of science, and discussions about CF have ceased to be “normal” for precisely this reason.  But there is a more poignant message: despite the fact that research in CF has continued, the initial critics have largely avoided the subject even though many of their criticisms have been adequately addressed, and most scientists are simply unaware of this fact.  An important reason for this is that many of the institutions involved either in disseminating information about science or in adjudicating science have largely ignored what has been going on.

 

Impact:

 

There is an important lesson associated with this that applies not only in science, but in most forms of human interaction.  For communication to occur, some form of accountability is necessary.  (This is especially true when risk is involved.)  Institutions and individuals must be held accountable for their actions for an obvious reason: the need to maintain trust.  Specifically, when a particular party or group requests that an individual or institution be held accountable for a particular action, implicitly, trust occurs.  This is because at a very basic level, for communication to occur at all, it is necessary that the parties mutually trust each other.  The process of assigning accountability for a particular action involves the identification of a particular liability (or responsibility) that can be directly associated with a particular action.  When the associated liability or responsibility is clearly identifiable, the degree of accountability can be quantified.  Because in situations involving risk, the associated liability can be difficult to define, procedures for assigning accountability become less tangible.

 

In “normal” circumstances, “liability” and “responsibility” and “accountability” not only can all be identified and related to each other but can be quantified either by precedent or through the potential for pecuniary damages or rewards (as defined through the marketplace, for example).  Thus, typically, accountability can be measured using flows of information, ideas, money, or technology, almost in terms of a marketplace type of scenario.  Then “liability” and “responsibility” can be defined in terms of how these processes are enhanced or impeded by a particular set of actions.  When “risk” becomes part of the “scenario”, however, this picture becomes altered, significantly.

 

For this reason, within the context of “normal” science, it is relatively easy to identify the terms of accountability.  However, when the relevant science ceases to be “normal” because of risk, the terms associated with accountability cease to be as clearly defined.  In fact, “risk” as it applies to CF also applies, in a grander context, to “bold” or “new” initiatives.  Many of the lessons from the CF controversy involving risk can therefore be viewed as having far-reaching implications for policy decisions involving particular individuals or groups of individuals.

 

Ironically, in the case of CF, the advent of Information Era technologies seems to have aggravated the underlying communication problem.  In particular, at an early stage, considerable confusion occurred as a result of the widespread dissemination of incomplete (and incorrect) information about the associated experiments, by FAX machine and through the Internet.  The resulting “discourse” quickly became distorted.  This situation not only seriously undermined the scientific review process but seems to have been at least partly responsible for the fact that established scientific journals do not publish information about CF.

 

In the talk, I will summarize my involvement with CF, as well as several important conclusions that I present in my Introduction to the special two-issue collection of articles from the ethics-in-science journal Accountability in Research3, in which a number of senior individuals involved in the controversy examine the associated breakdown in scientific dialogue.  Important implications of the work include the need for greater investment in science in both “formal” and “informal” ways.  In particular, it is apparent that a “rush-to-judge” mentality was present in 1989 that was clearly related to funding (or loss of funding).  This included not only a number of obviously non-scientific “events” and “reviews” involving several organizations (most notably the American Physical Society, the Department of Energy, and the Patent Office), but other actions, including non-scientific interventions (involving the American Physical Society and the Department of Energy), that appear to have been prompted by a lack of sufficient funding. 

 

The effect of this process is simple: after 11 years, not only have the relevant scientific issues not been adequately represented, but serious questions about the adjudication process responsible for this remain to be addressed.  The Congress, the President, and the Courts are the final bodies that should be held accountable with regard to these issues.  Science cannot be objective when the bodies that hold science captive are not willing to investigate science.  It is not only plausible but likely that others, besides those involved with the government, will be assigned blame for injustices associated with Cold Fusion.  However, I believe this view is shortsighted.  In my opinion, the institutions mirror investment.  Scientists will only feel free to take risks when they are sufficiently protected to do so.  In 1945, we felt compelled to “protect science.”  In 2000, this seems to be a forgotten message.  Innovative energy ideas, “risky ideas” (which would not be so risky if scientists had adequate funding), are left unexplored as a consequence.

References:

 

1This quote paraphrases comments from a number of popular sources of information (the popular press, newspapers, etc).  It typifies the kind of imprecise, anecdotal information about Cold Fusion that, somewhat surprisingly, is still commonly believed to have been attributed to Pons and Fleischmann, and Jones et al.  In fact, compelling evidence exists that novel forms of nuclear reaction exist, without high energy particles; http://www.infinite-energy.com.

 

2 Charles G. Beaudette, Excess Heat: Why Cold Fusion Research Prevailed. (Oak Grove Press, LLC, ME, 2000). (ISBN 09678548-06; available through http://www.amazon.com, http://www.infinite-energy.com; hardcover $36.95; softcover $26.95, distributed by INGRAM and Infinite-Energy Press).

 

3Scott R. Chubb, “Introduction to the Special Collection of Articles in Accountability in Research Dealing With ‘Cold Fusion’”, in Accountability in Research, v. 8, #1 and #2. (eds. A. E. Shamoo, and S. R. Chubb, Gordon and Breach, Philadelphia, 2000). (http://www.gbhap-us.com/journals/149/149-top.htm)

 

 

The Author

Dr. Scott Chubb has been a Research Physicist at the Naval Research Laboratory since September 1989.  His research areas include remote sensing of the ocean (1989-present) and precision time and space (Space Technology Division, 1989-1994), the latter related to atomic clocks and to general and special relativity.  He has investigated applications of microwaves in surveillance of the ocean (1989-2000), assessed the applicability of cold nuclear fusion for DoD (1989-1990), and served in an advisory role with regard to associated developments (1989-2000).  He has published over 50 scientific papers in various areas of physics, including a number in the area of cold fusion.

 

 

 

 

The Strange Birth of the Water Fuel Age: The Cold Fusion “Miracle” Was No Mistake

 

Dr. Mallove’s briefing paper, which was submitted at the request of the White House for President Clinton (Feb. 2000), was not available at the time of this compilation, but may be obtained from him at: Infinite Energy Magazine, P.O. Box 2816, Concord, NH  03302, Phone: 603-228-4516.

The Author

Dr. Eugene Mallove is the Editor-in-Chief and Publisher of Infinite Energy magazine and has degrees in Aeronautical and Astronautical Engineering and in Environmental Health Sciences.  He is the Director of the New Energy Research Laboratory in NH.  He has published numerous papers in the energy area and is a leading expert on research in cold fusion.  His book, Fire from Ice: Searching for the Truth Behind the Cold Fusion Furor, was nominated for the Pulitzer Prize.


The Unnecessary Energy Crisis: How to Solve It Quickly

 

T. E. Bearden, LTC, U.S. Army (Retired)

CEO, CTEC Inc.

Director, Association of Distinguished American Scientists (ADAS)

Fellow Emeritus, Alpha Foundation's Institute for Advanced Study (AIAS)

Final Draft

June 24, 2000

 

Introduction

 

The World Energy Crisis

The world energy crisis is now driving the economies of the world nations. 

There is an escalating worldwide demand for electrical power and transportation, much of which depends on fossil fuels, particularly oil and oil products.  The resulting demand for oil is expected to increase year by year.  Recent sharp price rises have already pushed gasoline above $2.50 per gallon in some U.S. metropolitan areas.

At the same time, it appears that world availability of oil may have peaked in early 2000, if one factors in the suspected Arab inflation of reported oil reserves.  From now on, it appears that oil availability will steadily decline, slowly at first but then at an increasing pace.

Additives to aid clean burning of gasoline are also required in several U.S. metropolitan areas, increasing costs and refinery storage and handling.

The increasing disparity between demand and supply—steadily increasing demand for electricity using oil products versus decreasing world supplies of oil, with other factors such as required fuel additives—produces a dramatically increasing cost of oil and oil products.  Further, newer supplies of oil must be taken by increasingly more expensive production means.

Manipulative means of influencing the price of oil include (i) the ability of OPEC to increase or decrease production at will, and (ii) the ability of the large oil companies to reduce or increase the holding storage of the various oil products, types of fuel, etc.  Interestingly, several large oil companies are reporting record profits {[1]}.

At the same time, the burgeoning populaces of the major petroleum producers—and their increasing economic needs—press hard for an increasing inflation of oil prices in order to fund the economic benefits.

As an example, Saudi moderation of OPEC is vanishing or has already vanished.  The increasing demands of the expanding Saudi Royal Family group and the guaranteed benefits to the expanding populace have overtaken and surpassed the present Saudi financial resources unless the price of OPEC oil is raised commensurately {[2]}.

The Federal Reserve contributes directly to the economic problem in the U.S., since it interprets the escalating prices of goods and services (due to escalating energy prices) as evidence of inflation.  It will continue to raise interest rates to damp the economy, further damping U.S. business, employment, and trade.  The Fed has already increased interest rates six times in one year as of this date.

International Trade Factors

Under NAFTA, GATT {[3]}, and other trade agreements, the transfer of production and manufacturing to the emerging nations is also increasing and trade barriers are lowered.  Some 160 emerging nations are essentially exempt from environmental pollution controls, under the Kyoto accords.  In these nations, electrical power needs and transport needs are increasing, and will continue to increase, due to the increasing production and movement of goods and the building of factories and assembly plants.  Very limited pollution controls—if any—will be applied to the new electrical plants and transport capabilities to be built in those exempted nations. 

The transfer of manufacturing and production to many of these nations is a transfer to essentially "slave labor" nations.   Workers have few if any benefits, are paid extremely low wages, work long hours, and have no unions or bargaining rights.  In some of these nations, to pay off their debts many parents sell their children into bondage for manufacture of goods, with 12 to 14 hour workdays being a norm for the children {[4]}.  In such regions the local politicians can usually be "bought" very cheaply so that there are also no effective government controls.  Such means have set up a de facto return to the feudalistic capitalism of an earlier era when enormous profits could be and were extracted from the backs of impoverished workers, and government checks and balances were nil.

The personal view of this author is that NAFTA, GATT, and Kyoto were set in place for this very purpose.  As the transfer builds for the next 50 years, it involves the extraction of perhaps $2 trillion per year, from the backs of these impoverished laborers.  It would not appear accidental that Kyoto removed the costly pollution control measures from this giant economic buildup that would otherwise have been required.  The result will be increased pollution of the biosphere on a grand scale.

Ironically, the Environmental Community itself was deceived into supporting the Kyoto accords and helping achieve them, hoping to put controls on biospheric pollution worldwide.  In fact, the Kyoto accords will have exactly the opposite effect.

Resulting World Economic Collapse

Bluntly, we foresee these factors—and others {[5]}{[6]} not covered—converging to a catastrophic collapse of the world economy in about eight years.  As the collapse of the Western economies nears, one may expect catastrophic stress on the 160 developing nations as the developed nations are forced to dramatically curtail orders.

International Strategic Threat Aspects

History bears out that desperate nations take desperate actions.  Prior to the final economic collapse, the stress on nations will have increased the intensity and number of their conflicts, to the point where the arsenals of weapons of mass destruction (WMD) now possessed by some 25 nations are almost certain to be released.  As an example, suppose a starving North Korea {[7]} launches nuclear weapons upon Japan and South Korea, including U.S. forces there, in a spasmodic suicidal response.  Or suppose a desperate China, some of whose long-range nuclear missiles can reach the United States, attacks Taiwan.  In addition to immediate responses, the mutual treaties involved in such scenarios will quickly draw other nations into the conflict, escalating it significantly.

Strategic nuclear studies have shown for decades that, under such extreme stress conditions, once a few nukes are launched, adversaries and potential adversaries are then compelled to launch on perception of preparations by one's adversary.  The real legacy of the MAD concept is this side of the MAD coin that is almost never discussed.  Without effective defense, the only chance a nation has to survive at all is to launch immediate full-bore pre-emptive strikes and try to take out its perceived foes as rapidly and massively as possible.

As the studies showed, rapid escalation to full WMD exchange occurs.  Today, a great percent of the WMD arsenals that will be unleashed, are already on site within the United States itself {[8]}.  The resulting great Armageddon will destroy civilization as we know it, and perhaps most of the biosphere, at least for many decades.

My personal estimate is that, beginning about 2007, on our present energy course we will have reached an 80% probability of this "final destruction of civilization itself" scenario occurring at any time, with the probability slowly increasing as time passes.  One may argue about the timing, slide the dates a year or two, etc., but the basic premise and general time frame holds.  We face not only a world economic crisis, but also a world destruction crisis.

So unless we dramatically and quickly solve the energy crisis—rapidly replacing a substantial part of the "electrical power derived from oil" by "electrical power freely derived from the vacuum"—we are going to incur the final "Great Armageddon" the nations of the world have been fearing for so long.  I personally regard this as the greatest strategic threat of all times—to the United States, the Western World, all the rest of the nations of the world, and civilization itself {[9]} {[10]}.

What Is Required to Solve the Problem?

To avoid the impending collapse of the world economy and/or the destruction of civilization and the biosphere, we must quickly replace much of the "electrical energy from oil" heart of the crisis, and simultaneously replace a significant part of the "transportation using oil products" factor as well.  Such replacement by clean, nonpolluting electrical energy from the vacuum would also eliminate much of the present pollution of the biosphere by the products of hydrocarbon combustion.  Not only does it solve the energy crisis, it also solves much of the environmental pollution problem.

The technical basis for that solution and a part of the prototype technology required, are now at hand.  We discuss that solution in this paper.

To finish the task in time, the Government must be galvanized into a new Manhattan Project {[11]} to rapidly complete the new system hardware developments and deploy the technology worldwide at an immense pace.

Once the technology hardware solutions are ready for mass production, even with a massive worldwide deployment effort some five years are required to deploy the new systems sufficiently to contain the problem of world economic collapse.  This means that, by the end of 2003, those hardware technology solutions must have been completed, and the production replacement power systems must be ready to roll off the assembly lines en masse.

The 2003 date appears to be the critical "point of no return" for the survival of civilization as we have known it.

Reaching that point, say, in 2005 or 2006 will not solve the crisis in time.  The collapse of the world economy as well as the destruction of civilization and the biosphere will still almost certainly occur, even with the solutions in hand.

A review of the present scientific and technical energy efforts to blunt these strategic threat curves, immediately shows that all the efforts (and indeed the conventional scientific thinking) are far too little and far too late.  Even with a massive effort on all of the "wish list" of conventional projects and directions, the results would be insufficient to prevent the coming holocaust. 

As one example, the entire hot fusion effort has a zero probability of contributing anything of significance to the energy solution in the time frame necessary.  Neither will windmills, more dams, oil from tar sands, biofuels, solar cells, fuel cells, methane from the ocean bottom, ocean-wave-powered generators, more efficient hydrocarbon combustion, flywheel energy storage systems, etc.  All of those projects are understandable and "nice", but they have absolutely zero probability of solving the problem and preventing the coming world economic collapse and Armageddon.

Those conventional approaches are all "in the box" thinking, applied to a completely "out of the box" problem unique in world history.

The conventional energy efforts and thinking may be characterized as essentially "business as usual but maybe hurry a little bit."  They divert resources, time, effort, and funding into commendable areas, but areas which will not and cannot solve the problem. In that sense, they also contribute to the final Armageddon that is hurtling toward us {[12]}.

If we continue conventionally and with the received scientific view, even with massively increased efforts and a Manhattan Project, we almost certainly guarantee the destruction of civilization as we know it, and much of the biosphere as well.

Bluntly, the only viable option is to rapidly develop systems which extract energy directly from the vacuum and are therefore self-powering, like a windmill in the wind {[13]}.  Fortunately, analogous electrical systems—open systems far from thermodynamic equilibrium in their exchange with the active vacuum—are permitted by the laws of physics, electrodynamics {[14]} and thermodynamics {[15]}.  Such electrical systems are also permitted by Maxwell's equations, prior to their arbitrary curtailment by Lorentz symmetrical regauging {[16]} {[17]} {[20]}. 
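
For reference, the "symmetrical regauging" mentioned here is what standard electrodynamics texts call the Lorenz gauge condition imposed on the Maxwell potentials.  A minimal statement of that standard condition (given here in textbook form for orientation, independent of the author's interpretation of its consequences) is:

    \nabla \cdot \mathbf{A} + \frac{1}{c^{2}} \frac{\partial \phi}{\partial t} = 0

Under this condition the coupled equations for the scalar potential \phi and the vector potential \mathbf{A} decouple into symmetric wave equations,

    \left( \nabla^{2} - \frac{1}{c^{2}} \frac{\partial^{2}}{\partial t^{2}} \right) \phi = -\,\frac{\rho}{\epsilon_{0}}, \qquad
    \left( \nabla^{2} - \frac{1}{c^{2}} \frac{\partial^{2}}{\partial t^{2}} \right) \mathbf{A} = -\,\mu_{0}\,\mathbf{J},

which is the simplification referred to just below as Lorentz's "little mathematical trick."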

The good news was that the little mathematical trick by Lorentz made the resulting equations much easier to solve (for the selected "subset" of the Maxwell-Heaviside systems retained).

However, the bad news is that it also just arbitrarily discarded all Maxwellian EM systems far from thermodynamic equilibrium (i.e., asymmetrical and in disequilibrium) with respect to their vacuum energy exchange.

So the bad news is that Lorentz arbitrarily discarded all the permissible electrical power systems analogous to a windmill in a wind, and capable of powering themselves and their loads.  All our energy scientists and engineers continue to blindly develop only Lorentz-limited electrical power systems.

The good news is that we now know how to easily initiate continuous and powerful "electromagnetic energy winds" from the vacuum at will.  Once initiated, each free EM energy wind flows continuously so long as the simple initiator is not deliberately destroyed.

The bad news is that all our present electrical power systems are designed and developed so that they continually kill their "energy winds" from the vacuum faster than they can collect some of the energy from the winds and use it to power their loads.

But the good news is that we now know how to go about designing and developing electrical power systems which (i) initiate copious EM energy flow "winds" in the vacuum, (ii) do not destroy these winds but let them continue to freely flow, and (iii) utilize these freely-flowing energy winds to power themselves and their loads.

So we have already solved the first half of the energy crisis problem {[18]} {[19]}: We can produce the necessary "EM energy wind flow" in any amount required, whenever and wherever we wish, for peanuts and with ridiculous ease.  We can ensure that, once initiated, the electromagnetic energy wind flows indefinitely or until we wish to shut it off.

A tiny part of the far frontier of the scientific community is also now pushing hard into catching and using this available EM energy from the vacuum {[20]}.  However, they are completely unfunded and working under extremely difficult conditions {[21]}.

In addition, there are more than a dozen appropriate processes already available (some are well-known in the hard literature), which can be developed to produce the new types of electrical energy systems {[22]}.

What Must Be Done Technically

We have about two and a half years to develop several different types of systems for the several required major applications—and particularly the following:

(1)   self-powering open electrical power systems extracting their electrical energy directly from the active vacuum and readily scalable in size and output,

(2)   burner systems {[23]} to replace the present "heater" elements of conventional power plants, increasing the coefficient of performance (COP) {[24]} of those altered systems to COP>1.0, and perhaps to COP = 4.0,

(3)   specialized self-powering engines to replace small combustion engines {[25]},

(4)   self-regenerating, battery-powered systems enabling practical electric automobiles, based on the Bedini {[26]} process,

(5)   Kawai COP>1.0 magnetic motors {[27]} with clamped feedback, powering themselves and their loads,

(6)   magnetic Wankel engines {[28]} with small self-powering batteries, which enable a very practical self-powering automotive engine unit for direct replacement in present automobiles,

(7)   permanent magnet motors such as the Johnson {[29]} approach using self-initiated exchange force pulses {[30]} in nonlinear magnetic materials to provide a nonconservative field, hence a self-powering unit,

(8)   iterative retroreflective EM energy flow systems which intercept and utilize significant amounts of the enormous Heaviside dark energy {[31]} which surrounds every electrical circuit but is presently ignored,

(9)   Iterative phase conjugate retroreflective systems which passively recover and reorder the scattered energy dissipated from the load, and reuse the energy again and again {[32]},

(10) Shoulders' charge cluster devices {[33]}, which yield COP>1.0 by actual measurement,

(11) self-exciting systems using intensely scattering optically active media and iterative asymmetrical self-regauging {[34]} {[35]} {[36]} {[67]},

(12) true negative resistors such as the Kron {[37]} and Chung {[38]} negative resistors, the original point-contact transistor {[39]}, which can be made into a negative resistor, and the Fogal negative-resistor semiconductor, and

(13) overunity transformers using a negative-resistor bypass across the secondary, reducing the back-coupling from secondary to primary and thus lowering the dissipation of energy in the primary {[40]}.

What Must Be Done for Management and Organization

To meet the critical 2003 "point of no return" milestone, the work must be accomplished under a declared National Emergency and a Presidential Decision Directive. 

The work must be amply funded, with authority—because of the extreme emergency—to utilize any available patented processes and devices capable of being developed and deployed in time, with accounting and compensation of the inventors and owners separately. 

As an example, two of the above-mentioned devices—the Kawai engine and the magnetic Wankel engine—can be quickly developed and produced en masse.  However, they have been seized by the Japanese Yakuza {[41]} {[42]} {[43]} and are being held off the world market.  The two devices are quite practical and can be developed and manufactured with great rapidity.  Two models of the Kawai engine were tested by Hitachi to exhibit COP = 1.4 and COP = 1.6, respectively.  Use of these two inventions, under U.S. Government auspices, would greatly contribute to solving a significant portion of the transportation power problem, at low risk for this part of the solution.  Use of them cannot be obtained by normal civil means, due to the involvement of the Yakuza.
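
A minimal sketch of the COP figure of merit used throughout this paper: the ratio of useful output power to the power supplied by the operator.  The 1000 W / 1400 W numbers below are hypothetical, chosen only to reproduce the lower of the two Hitachi test figures quoted above.

def coefficient_of_performance(output_w: float, operator_input_w: float) -> float:
    # In the author's framework any excess above 1.0 is attributed to energy
    # drawn from the vacuum; the arithmetic itself is just the definition.
    return output_w / operator_input_w

print(coefficient_of_performance(1400.0, 1000.0))   # 1.4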

The technical part of the project to solve the energy crisis is doable in the required time—but just barely, and only if we move at utmost speed. 

Thanks to more than 20 years' work on unconventional solutions to the problem, much of the required solution is already in hand, and the project can go forward at top speed from the outset.

The remaining management and organization problem is to marshal the necessary great new Manhattan Project as a U.S. government project operating under the highest national priority and with ample funding.  The Project must be a separate Agency, operating directly under the appropriate Department Secretary and reporting directly to the President (through the Secretary) and to a designated Joint Committee of the Senate and the House.

The selection of the managers and directors must be done with utmost care; otherwise, they themselves will become the problem rather than the solution.  We strongly stress that, in this context, even the most highly qualified managerial scientist may have to be disqualified because of his or her own personal biases and dogmatic beliefs.  Leaders and scientists are required who will run with the COP>1.0 ball on a wide front.

The compelling authority to assign individual tasks to the National Laboratories and other government agencies is required, but under no circumstances can the project be placed under the control of the national laboratories themselves.  Those laboratories such as Los Alamos National Laboratory, Lawrence Livermore National Laboratory, and Oak Ridge National Laboratory are far too committed to their entrenched Big Science projects and the resulting bias against electrical energy from the vacuum.

Assigning management of the project to them would be setting the foxes to minding the hen house, and would guarantee failure.  Those agencies whose favored approaches are responsible for the present energy crisis, cannot be expected to direct an effective solution to it that is outside their managerial and scientific ansatz and totally against their institutional and professional biases.  If they are allowed to direct the project, then implacable scientists, who adamantly oppose electrical energy from the vacuum from the getgo, will hamstring and destroy the project from its inception.

Not only will they fiddle while Rome burns, but they will help burn it.

Enormous EM Energy Flow Is Easily Extracted From the Active Vacuum

            At any point and at any time, one can freely and inexpensively extract enormous EM energy flows directly from the active vacuum itself.

There is not now and there never has been a problem in readily obtaining as much electromagnetic energy flow from the vacuum as we wish.  Anywhere.  Anytime. For peanuts.

Every electrical power system and circuit ever built already does precisely that {[44]}{[45]}.  But almost all the vast EM energy flow that the present flawed systems extract from the vacuum is unaccounted and simply wasted.  It is wasted by the conventional, seriously flawed circuits and systems designed and built by our power system scientists and engineers in accord with a terribly flawed, 136-year-old set of electrodynamics concepts and foundations.  Specifically, it is wasted because Lorentz discarded it a century ago {45}.  Since then, everyone has blindly followed Lorentz's lead.

Our electrical scientists and engineers have not yet even discovered how a circuit is powered!  They have no valid concept of where the electrical energy flowing down the power line actually comes from.  They do not include, in their theoretical models and equations, the interaction that provides it {[46]}.  This vast scientific "conspiracy of ignorance" is completely inexplicable, because the actual source of the EM energy powering the external circuits has been known (and rigorously proven) in particle physics for nearly half a century!  However, it has not yet been added into the fundamental electrical theory used in designing and building power systems.

We have a scientific mindset problem, scientific negligence, and electromagnetics dogma, all of epic proportions.   I sometimes refer to this as an unwitting "conspiracy of ignorance", where I use the word "ignorance" technically as meaning "unaware".  The phrase is certainly not intended to be pejorative.

So we do not have an energy problem per se.  We have an unwitting conspiracy of scientific ignorance problem.

Because of its bias, our electrical scientific community also strongly resists updating the 136-year-old electrodynamics foundations, even though much of that theory is known to be seriously flawed and even incorrect {[47]} {[48]}. Indeed, organized science has always fiercely resisted strong innovation.  As Max Planck {[49]} so eloquently put it,

"An important scientific innovation rarely makes its way by gradually winning over and converting its opponents: it rarely happens that Saul becomes Paul.  What does happen is that its opponents gradually die out, and that the growing generation is familiarized with the ideas from the beginning."

Arthur C. Clarke {[50]} expressed it succinctly for our more modern scientific community, as follows:

"If they [quantum fluctuations of vacuum] can be [tapped], the impact upon our civilization will be incalculable.  Oil, coal, nuclear, hydropower, would become obsolete—and so would many of our worries about environmental pollution."  "Don't sell your oil shares yet—but don't be surprised if the world again witnesses the four stages of response to any new and revolutionary development: 1. It's crazy! 2. It may be possible—so what?  3. I said it was a good idea all along.  4. I thought of it first."

With respect to extracting and using EM energy from the vacuum, our present scientific community is mostly in Clarke's phase 1.  A few scientists are in phase 2 but surmise that "it may perhaps be the science of the next century."

We do not have a century remaining.  We have two and a half years.

For nearly half a century (i) the active vacuum, (ii) the vacuum's energetic interaction with every dipole, and (iii) the broken symmetry of the dipole {[51]} in that energetic interaction {55} have been known and proven in particle physics. These proven COP>1.0 vacuum energy mechanisms have not been incorporated into the electrodynamic theory used to design and build electrical power and transportation systems {[52]}.  We are still waiting for the "old scientific opponents"—adamantly opposed to the very notion of electrical energy from the vacuum—to "die off and get out of the way."

Hence our universities, the National Science Foundation, the National Academy of Sciences, the National Laboratories, etc. have not taken advantage of the enormous EM energy so universally available from the active vacuum, and in fact universally and copiously extracted from the vacuum by every EM system today—and wasted.  Indeed, present organized science will not fund and will not tolerate research that would violate the presently decreed view of power systems and their functioning.

Hence, our present organized scientific community will strongly resist funding of a vigorous program to gather all this proven, known physics together and rapidly use it to change and update (modernize) the terribly flawed EM theory and the design of electrical power systems.  Most scientists attempting to do this research have had to proceed on their own.  They have undergone vicious and continual ad hominem attacks, lost research funds and tenure, been unable to get their papers published, and in fact risked being destroyed by the scientific community itself {21}.

The bottom line is this: left to sweet reason, and because of the depth of its present bias, the scientific community is totally incapable of reacting to the problem in time to prevent the destruction of civilization.  If we wish to survive, government will have to directly force the scientific community to do the job, over careers and "dead bodies" (so to speak) if necessary.

But first the government itself must be motivated to do so.

Only the environmental community has the clout, financial resources, and activists to motivate the government in the extremely short time in which it must be accomplished.  So it would seem that the most urgent task is to educate and wake up the environmental community.  It has been "had", and it has been "had" since the beginning.

Understanding What Powers Electrical Circuits

Let us cut through the scientific errors in how electrical power systems are presently viewed: Batteries and generators themselves do not power circuits.  They never have, and they never will.  They dissipate their available internal energy {[53]} to do one thing and one thing only: forcibly separate their own internal charges to form a "source dipole" {[54]}.  Once the dipole has been formed, the dipole directly extracts electromagnetic energy from the active vacuum {[55]}, pouring the extracted EM energy out from the terminals of the battery or generator.

            Batteries and generators make a dipole, nothing else.  All the fuel ever burned, all the nuclear fuel rods ever consumed, and all the chemical energy ever expended by batteries did nothing but make dipoles.  None of all that destructive activity, of itself, ever added a single watt to the power line.

Once made, the dipole then extracts EM energy from the seething vacuum, and pours it out down the circuit and through all surrounding space around the circuit {56}.  A little bit of that energy flow strikes the circuit and enters it by being deflected (diverged) into the wires {57}.  That tiny bit of intercepted energy flow that is diverged into the circuit, then powers the circuit (its loads and losses) {58}.

All the rest of that huge energy flow around the circuit just roars on off into deep space and is wasted.

The Dipole Extracts Enormous Energy from the Vacuum

The outflow of EM energy extracted from the vacuum by a small dipole is enormous.  It fills all space surrounding the attached external circuit (e.g., surrounding the power lines attached to a power plant generator) {[56]}.  In the attached circuits, the electrical charges on the surfaces of the wires are struck by the mere edge of the violent flow of EM energy passing along those surfaces.  The resulting tiny "intercepted" part {[57]} of the EM energy flow is deflected into the wires, very much like placing one's hand outside a moving automobile and diverting some of the wind into the car.  The deflected energy that enters the wires is the Poynting component of the energy flow.  It is not the entire EM energy flow by any means, but only a very, very tiny component of it {[58]}.

Only that tiny bit of the energy flow that is actually diverged into the wires is used to power the circuit and the loads.  All the rest of the enormous energy flow present and available outside the circuit is just ignored and wasted.
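
For readers who want the standard notation behind the terms used above: in conventional electrodynamics the energy flux is described by the Poynting vector, and the power actually delivered to the charges in the conductors is its inward flux through the wire surfaces (equivalently, the volume integral of J·E).  The identity below is the textbook steady-state statement; the contention of this paper is that it accounts only for the small "caught" component of a much larger total flow.

    \mathbf{S} = \mathbf{E} \times \mathbf{H}, \qquad
    P_{\text{delivered}} = \int_{V} \mathbf{J}\cdot\mathbf{E}\; dV = -\oint_{\partial V} \mathbf{S}\cdot\hat{\mathbf{n}}\; dA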

A nominal 1-watt generator, e.g., is actually one whose external circuit can "catch" only one watt of its output.  The generator's actual total output—in the great flow which fills all space around the external circuit and is not intercepted and used—is something on the order of 10 trillion watts!

Our Scientists and Engineers Design Dipole-Destroying Systems

Here is the most inane thing of all.  Precisely half of the small amount of energy that is actually caught by the circuit is used to destroy the dipole!  That half of the intercepted energy does not power the load, nor does it power losses in the external circuit.  Instead, it is used to directly scatter the dipole charges and destroy the dipole.

Our scientists and engineers have given us the ubiquitous closed current loop circuit {[59]}, which destroys the dipole faster than it powers the load.  In short, the scientists and engineers design and build only those electrical power systems that "continuously commit suicide" by continuously destroying the source dipole that is extracting the vacuum energy and emitting it out along the circuit to power everything in the first place.

So now, we have the real picture. 

Every electrical load ever powered, and every load powered today, has been and is powered by electromagnetic energy extracted directly from the seething vacuum by the source dipole in the generator or battery.

However, our scientists and engineers design and build electrical power systems that only intercept and use a tiny fraction of the vast EM energy flow available.  They also only design and build systems that destroy their source dipole faster than they power their loads.

If one does not destroy the dipole once it is made, it will continue to freely extract copious EM energy flow from the vacuum, indefinitely, pouring out a stupendous flow of EM energy.

As an example, dipoles in the original matter formed in the Big Bang at the beginning of the universe have been steadily extracting EM energy from the vacuum and pouring it out for about 15 billion years.

The energy problem is not due to the inability to produce copious EM energy flows at will—as much as one wishes, anywhere, anytime.  Every dipole already does this, including in every EM power system ever built.

The energy problem is due to the complete failure to (i) intercept and utilize more of the vast energy flows made available by the common dipole, and (ii) do so without using the present inanely designed circuits.  These circuits use half their collected energy to destroy the dipole that is extracting the energy flow from the vacuum in the first place!

This is part of the "conspiracy of scientific ignorance" earlier mentioned.

Ignoring the Vacuum as the Source of Electrical Energy in All Circuits

In their conventional theoretical models, our present electrical power system scientists and engineers do not even include the vacuum interaction or the dipole's extraction of EM energy from the vacuum.  They simply ignore—and do not model—what is really powering every electrical system they build.

Consequently, we reiterate that our electrical scientists have never even discovered how an EM circuit is powered—although it has been discovered and known for nearly 50 years in particle physics.

All the hydrocarbons ever burned, all the water over all the dams ever built, all the nuclear fuel rods ever expended in all the nuclear power plants, added not a single watt to the power line.

Instead, all that expense, effort, and pollution and destruction of the biosphere was and is necessary in order to keep adding internal energy to the generator—so that it can keep continually rebuilding its source dipole that is continually destroyed by the inane circuits that the power system scientists and engineers keep designing and building for us.

It takes as much energy input to the generator to restore the dipole, as it took the circuit to destroy the dipole.  Thus all the systems our scientists and engineers design and build, require that we continually input more energy to restore the dipole, than the circuit dissipates in the load.

Our technical folks thus happily design and give us systems which can and will only exhibit COP<1.0, continuing to require that we ourselves steadily provide more energy to the system (to continually rebuild its dipole) than the inane, masochistic system uses to power its load.

In short, we pay the power companies (and their scientists and engineers) to deliberately engage in a giant wrestling match inside their generators and lose.

That is not the way to run the railroad!  One is reminded of one of the classic comments by Churchill:

“Most men occasionally stumble over the truth, but most pick themselves up and continue on as if nothing had happened.”

It seems that not very many energy system scientists and engineers have "stumbled over the truth" as to what really powers their systems, and how inanely they are really designing them.

Electrical Energy Required from Hydrocarbon Burning Drives the Problem

            The heart of the present environmental pollution problem is the ever-increasing need for electrical energy obtained from burning of hydrocarbon fuels and/or nuclear power stations.

            The increasing production of electrical power to fill the rising needs increasingly pollutes the environment, including the populace itself (lungs, bodies, etc.).  Almost every species on earth is affected, and every year some species become extinct as a result.

Environmental pollution includes pollution of the soil, fresh and salt water, and the atmosphere by a variety of waste products.  Given global warming, it also includes excess heat pollution in addition to chemical and nuclear residues.

            Under present procedures, the electrical energy problem is exacerbated by decreasing available oil supplies, which are believed to have peaked this year, with a projected decline from now on.

            But really, the electrical energy problem is due to the scientific community's adamant defense and use of electrical power system models and theories that are 136 years old {[60]} in their very foundations.  These models and theories are riddled with errors and non sequiturs, and seriously flawed.

The scientific community has not even recognized the problem, much less the solution.  In fact, it does not even intend to recognize the problem, even though the basis for it has been known in particle physics for nearly 50 years.  As Bunge {[61]} put it some decades ago:

"...it is not usually acknowledged that electrodynamics, both classical and quantal, are in a sad state."

            The scientific community has done little to correct that fundamental problem since Bunge made his wry statement.

            Let us put it very simply: The most modern theory today is modern gauge field theory.  In that theory, freedom of gauge is assumed from the getgo.  Applied to electrodynamics, this means—as all electrodynamicists have assumed for the last century or longer—that the potential energy of an EM system can be freely changed at will.  In other words, in theory it costs nothing at all to increase the EM energy collected in a system; this is merely "changing the voltage", which does not require power.  In other words, we can "excite" the system with excess energy (actually taken from the vacuum), at will.  For free.  And the best science of the day agrees with that statement.
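
            The "freedom of gauge" invoked here is, in standard notation, the statement that the potentials may be changed by any smooth function Λ without altering the fields.  The transformation below is the conventional one; reading the resulting change of potential energy as freely available excess energy is the author's interpretation, not part of the standard result.

    \varphi \rightarrow \varphi - \frac{\partial \Lambda}{\partial t}, \qquad
    \mathbf{A} \rightarrow \mathbf{A} + \nabla \Lambda, \qquad
    \mathbf{E} \ \text{and} \ \mathbf{B} \ \text{unchanged}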

            It also follows that we can freely change the excitation energy again, at will.  In short, we can dissipate that excess energy freely and at will.  Without cost.

            Well, this means that we are free—by the laws of nature, physics, thermodynamics, and gauge field theory—to dissipate that free excess potential energy in an external load, thus doing "free work".

            Since none of the systems our energy scientists and engineers build for us are doing that, it follows a priori that the fault lies entirely in their own system design and building.  It does not lie in any prohibition by nature or the laws of physics.

            A priori, then, the present COP<1.0 performance of our electrical power systems is a monstrosity and the direct fault of our scientists and engineers.  We cannot blame the laws of nature or the laws of physics.

            The present energy crisis then is due totally to that "conspiracy of ignorance" we referred to.  It is maintained by the scientific community today, and it has been maintained by it for more than 100 years.

            This is the real situation that the environmentalists must become aware of, if they are to see the correct path into which their energies and efforts should be directed—to solve both the energy crisis and the problem of gigantic pollution of the biosphere.

Outside Intervention Must Forcibly Move Energy Science Forward

Unless outside intervention occurs forcibly, the scientific community's lock-up of research funds for "in the box" energy research may result in the economic collapse of the Western World in perhaps as little as eight years.

Let us examine the gist of the problem facing us.

Suppose we launch a crash program to develop, manufacture, deploy, and employ the new "vacuum powered" systems.  Once the new self-powering systems are developed and ready to roll off the production lines en masse, it will require a minimum of five years worldwide to sufficiently alter the "electrical energy from oil" demand curve, so that economic collapse can be averted.  In turn, this means that the new systems must be ready to roll off the manufacturing lines by the end of 2003.  While this is a very tight schedule, it can be done if we move rapidly.

            The necessary scientific corrections along the lines indicated in this paper can be quickly applied to solve the electrical energy problem permanently and economically, given a Manhattan type project under a Presidential Decision Directive together with a Presidential declaration of a National Energy Emergency.

            In a paper {[62]} to be published in Russia in July 2000, this researcher has proposed some 15 viable methods for developing new "self-powering" systems powering themselves and their loads with energy extracted from the vacuum.  Several of these systems can be developed very rapidly, and can be easily mass-produced. 

A second paper {[63]} will be published in the same proceedings, revealing the Bedini method for invoking a negative resistor inside a storage battery.  The negative resistor freely extracts vacuum energy and adds it to both the battery-recharging function and the load powering function. 

In Bedini's negative resistor method, the ion current inside the battery is decoupled (dephased) from the electron current between the outer circuit and the external surfaces of the battery plates.  This allows the battery to be charged (with increased charging energy) simultaneously as the load is powered with increased current and voltage.

At my specific request, both papers were thoroughly reviewed by qualified Russian scientists, and their premises passed review successfully.

A third paper {[64]} gives the exact giant negentropy mechanism by which the dipole extracts such enormous energy from the vacuum.  We will further explain that mechanism below.

Conventional Approaches: Too Little, Too Late

            It appears that the Environmental Community itself has finally realized that the present scientific approaches and research are simply too little and too late.  Further, the conventional approaches are largely "in the box" thinking applied to an "out of the box" problem.  We leave it to others such as Loder {[65]} to succinctly summarize the shortfalls of these present solutions.  Loder, for example, incisively explains how the automobile part of the problem breaks down.

            In fact, no single COP>1.0 approach will be all-sufficient.  Several solutions, each for a different application, must be developed and deployed simultaneously.

As an example, it is possible to create certain dipolar phenomena in plasmas produced in special burners, such that the dipoles extract substantial excess EM energy from the vacuum.  Output of the excess energy produces ordinary excess heat well beyond what the combustion process alone will yield.  Given a Manhattan type project, the inventor of that process (who already has working models and rigorous measurements) could rapidly be given the resources to develop a series of replacement burners (heaters).  They could be used in existing electrical power plants to heat the water to make the steam for the steam turbines turning the shafts of the generators.  The entire remainder of the power system, grid, etc. could be left intact.  Some fuel would still be burned, but far less would be consumed in order to furnish the same required heat output. 

In short, a rather dramatic reduction in power plant hydrocarbon combustion could be achieved—in the present electrical power plants with minimum modification, and in the necessary time frame—while maintaining or even increasing the electrical energy output of the power systems.  We believe the inventor would fully participate in a government-backed Manhattan type energy program where a National Emergency has been declared, given a U.S. government guarantee that his process, equipment, and inventions will not be confiscated {[66]}.

Another process capable of quick development and enormous application is the development of point contact transistors as true negative resistors {39}.

Two other processes that can be developed for massive production in less than two years are (i) the Kawai process {27}, and (ii) the magnetic Wankel process {28}.  In addition, the Johnson {29} process can be developed and readied for manufacture in the same time frame, given a full-bore sophisticated laboratory team.

There are other processes {[67]} {62} {63} which can also be developed rapidly, to provide major contributions in solving their parts of the present "electrical energy from hydrocarbon combustion" problem.

Giant Negentropy and a Great New Symmetry Principle

            We now summarize some recent technical discoveries by the present author that bear directly upon the problem of extracting and using copious EM energy flows from the vacuum.

            Any dipole has a scalar potential between its ends, as is well known.  Extending earlier work by Stoney {[68]}, in 1903 Whittaker  {[69]} showed that the scalar potential decomposes into—and identically is—a harmonic set of bidirectional longitudinal EM wavepairs.  Each wavepair is comprised of a longitudinal EM wave (LEMW) and its phase conjugate LEMW replica.  Hence, the formation of the dipole actually initiates the ongoing production of a harmonic set of such biwaves in 4-space {[70]}.
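
            As an elementary illustration only (not Whittaker's 1903 result itself, which is far more general), any standing oscillation already splits identically into a forward-travelling wave and its counter-propagating partner; this is the sense in which "bidirectional wavepairs" is used in the text.

    2\cos(kx)\cos(\omega t) = \cos(kx - \omega t) + \cos(kx + \omega t)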

We separate the Whittaker waves into two sets: (i) the convergent phase conjugate set, in the imaginary plane, and (ii) the divergent real wave set, in 3-space.  In 4-space, the 4th dimension may be taken as -ict.  The only variable in -ict is t.  Hence the phase conjugate waveset in the scalar potential's decomposition is a set of harmonic EM waves converging upon the dipole in the time dimension, as a time-reversed EM energy flow structure inside the structure of time {[71]}.  Or, one can just think of the waveset as converging upon the dipole in the imaginary plane {[72]}—a concept similar to the notion of "reactive power" in electrical engineering.

The divergent real EM waveset in the scalar potential's decomposition is then a harmonic set of EM waves radiating out from the dipole in all directions at the speed of light.  As can be seen, there is perfect 4-symmetry in the resulting EM energy flows, but there is broken 3-symmetry since there is no observable 3-flow EM energy input to the dipole.

Our professors have taught us that output energy flow in 3-space from a source or transducer must be accompanied by an input energy flow in 3-space.  That is not true.  It must be accompanied by an input energy flow, period.  That input can be an energy flow in the 4th dimension, time—or we can consider it as an inflow in the imaginary plane.  The flow of energy must be conserved, not the dimensions in which the flow exists.  There is no requirement by nature that the inflow of EM energy must be in the same dimension as the outflow of EM energy.

Indeed, nature prefers to do it the other way!  Simply untie nature's foot from the usually enforced extra condition of 3-space energy flow conservation.  Then nature joyfully and immediately sets up a giant 4-flow conservation, ongoing.  Enormous EM energy is inflowing from the imaginary plane into the source charge or dipole, and is flowing out of the source charge or dipole in 3-space, at the speed of light, and in all directions.

In other words, nature then gladly gives us as much EM energy flow as we need, indefinitely—just for paying a tiny little bit initially to "make the little dipole."  After that, we never have to pay anything again, and nature will happily keep on pouring out that 3-flow of EM energy for us.  This is the giant negentropy mechanism I uncovered, performed in the simplest way imaginable: just make an ordinary little dipole.

            We may interpret the giant negentropy mechanism in electrical engineering terms {[73]}. The EM energy flow in the imaginary plane is just incoming "pure reactive power" in the language of electrical engineering.  The outgoing EM energy flow in the real plane (3-space) is "real power" in the same language.  So the dipole is continuously receiving a steady stream of reactive power, transducing it into real power, and outputting it as a continuous outflow of real EM power.
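
            The electrical-engineering analogy can be written in the usual phasor notation.  The identity below is simply the standard definition of complex power, given here only to fix the vocabulary of "reactive" versus "real" power; it is not a derivation of the mechanism described above.

    S = V I^{*} = P + jQ, \qquad P = |V||I|\cos\phi \ \ \text{(real power)}, \qquad Q = |V||I|\sin\phi \ \ \text{(reactive power)}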

Further, there is perfect 1:1 correlation between the convergent waveset in the imaginary plane and the divergent waveset in 3-space.  This perfect correlation between the two sets of waves and their dynamics represents a deterministic re-ordering of a fraction of the 4-vacuum energy.  This re-ordering initiated by the formation of the dipole spreads radially outward at the speed of light, continuously.

            This clearly shows that (i) we can initiate reordering of a usable fraction of the vacuum's energy at any place, anytime, easily and cheaply (we need only to form a simple dipole), and (ii) the process continues indefinitely, so long as the dipole exists, without the operator inputting a single additional watt of power.

            This is a very great benefit.  So long as the dipole exists, this re-ordering continues and  a copious flow of observable, usable EM energy pours from the dipole in all directions at the speed of light.

            This is the full solution to the first half of the energy crisis, once and for all.

Ansatz of the Major Players

            To appreciate the difficulty in implementing the solution to the energy crisis, one must be aware of the characteristics of the major communities whose dynamics and interactions determine the outcome.  Accordingly, we summarize our personal assessment of the present "status" and "awareness" of the various communities involved.  We do that by attempting to express the overall "ansatz" of the specific community.

Scientific Community

            For the most part, the organized scientific community varies from highly resistant to openly hostile toward any mention of extracting copious EM energy from the active vacuum.  The "Big Nuclear" part of the community is particularly adamant in this respect, as witness its ferocious onslaught on the fledgling and struggling cold fusion researchers—a ferocity of scientific attack seldom seen in the annals of science {[74]} {[75]}. 

The scientific community also largely suppresses {[76]} or severely badgers scientists attempting to advance electrodynamics to a more modern model, suitable to the needs of the 21st century and the desperate need for cheap, clean, nonpolluting electrical power worldwide {21}.  The community still applies classical equilibrium thermodynamics to the electrical part of all its electrical power systems, even though every EM system is inherently a system far from equilibrium with the active vacuum environment, and a different thermodynamics applies.  Only if the system is specifically so designed—e.g., so that during the dissipation of its excitation energy it enforces the Lorentz symmetrical regauging condition—will the system behave as a classical equilibrium system.

The thermodynamics of open dissipative systems is well known {[77]}.  Such a system is permitted to (1) self-order, (2) self-oscillate or self-rotate, (3) output more energy than the operator inputs (the excess energy is freely received from the active environment), (4) power itself and its load simultaneously (all the energy is taken from the active environment, similar to a windmill's operation), and (5) exhibit negentropy.
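
A conventional and uncontroversial instance of behavior (3) in the sense used above is the ordinary heat pump, whose coefficient of performance exceeds unity because the ambient environment supplies most of the delivered heat; the Carnot bound below is the textbook limit.  The paper's argument is that a properly designed electrical system could stand in the same relation to the active vacuum.

    \mathrm{COP}_{\text{heat pump}} = \frac{Q_{\text{delivered}}}{W_{\text{input}}} \le \frac{T_{\text{hot}}}{T_{\text{hot}} - T_{\text{cold}}}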

            Our present electrical power systems do not do these five things, even though each is an open system in violent energy exchange with the vacuum.  A priori, that reveals that it is the scientific model and the engineering design that are at fault.

It is not any law of nature or principle of physics that prevents self-powering open electrical power systems.  Instead, it is the scientific community and its prevailing mindset against extracting and using EM energy from the vacuum.

 

Environmental Community

In the past, the environmental community has been overly naïve with respect to physics, and particularly with respect to electrical physics.  Its science advisors have come mostly from the conservative "in the box" scientific community.  Hence, the community has failed to realize that COP>1.0 electrical power systems are normal and permitted by the laws of nature and the laws of physics.  They have no inkling that Heaviside discovered—in the 1880s!—the enormous unaccounted EM energy pouring from the terminals of any battery or generator.  They are unaware that Poynting considered only the tiny component of the energy flow that enters the circuit.  They are also unaware that, completely unable to explain the astounding enormity of the EM energy flow if the nondiverged (nonintercepted) Heaviside component is accounted, Lorentz {18} just arbitrarily used a little procedure to discard that troublesome Heaviside "dark" (unaccounted) component. 

Lorentz reasoned that, since the huge dark energy flow component missed the circuit entirely, it "had no physical significance."  This is like arguing that none of the wind on the ocean has any physical significance, except for that small portion of the wind that strikes the sail of one's own sailboat.  It ignores the obvious fact that whole fleets of additional sailboats can also be powered by that "physically insignificant" wind component that misses one's own sailboat entirely.

Nonetheless, electrodynamicists continue to use Lorentz's little discard trick, and try to call the feeble Poynting energy flow component caught by the circuit the entire EM energy flow connected with it.  This is like arguing that the component of wind hitting the sails of one's own sailboat, is the entire great wind on the ocean.

            As a result, the environmental community has failed to grasp the technical reason for the energy crisis and the increasing pollution of the biosphere.  They have been deceived and manipulated into thinking that conventional organized science is giving them the very best technical advice possible on electrical power systems.  The environmentalists have been and are further deceived into believing that the conventional scientific community is advocating and performing the best possible scientific studies and developments for trying to solve the energy crisis.

            Of major importance, the environmental community itself has been deceived as to the exact nature of the energy flow in and around a circuit, the vastness of the unaccounted energy flow (or even that any of the energy flow is deliberately unaccounted), and the fact that this present but unaccounted EM energy flow can be intercepted and captured for use in powering loads and developing self-powering systems.

            Worst of all, the environmental community has been deceived as to what powers every electrical load and EM circuit.  They have been deceived into believing that burning all those hydrocarbons, using those nuclear fuel rods, building those dams and windmills, and putting out solar cell arrays are necessary and the best that can be done.  In short, they have been smoothly diverted from solving the very problem—the problem of the increasing pollution and destruction of the biosphere—they are striving to rectify.

            However, their continued street demonstrations show that many environmentalists now suspect that much of the world's continued policy of "the rich get richer and the poor get poorer" in international trade agreements is deliberately planned and implemented {[78]}.  They perceive that implementation as favoring a privileged financial class and exploiting the poorer laboring classes in disadvantaged nations.

Electrical Power Community

The electrical power community:

(1)   ubiquitously uses equilibrium thermodynamics, believing that COP>1.0 is perpetual motion nonsense and against the laws of physics,

(2)   has no notion that the energy flowing down their power lines and filling all space around them is extracted directly from the active vacuum by the source dipole in the generator,

(3)   erroneously believes that the hydrocarbons they burn, or the water through the hydroturbines at the dam, or the nuclear fuel rods they consume, actually add the power to the transmission lines,

(4)   uses half of the tiny component of energy caught by the power lines, to destroy the source dipoles in their generators, thus requiring ever more shaft input energy via powering a steam turbine, hydroturbine, etc.,

(5)   believes that energy can be "used" only once, when in fact it can be used and re-used repeatedly since it cannot be created or destroyed,

(6)   allows only a single pass of the EM energy flow down the power lines, so that only one tiny interception of energy occurs from the energy flow and the rest (most) of the energy flow is wasted,

(7)   believes that the electrical energy problem translates into more hydrocarbon combustion or nuclear fuel rod consumption rather than a totally different way of doing business, and

(8)   believes that the theory they apply is correct, when in fact it is so seriously flawed as to be inane, and has been inane for a century.

 

Industries also acquire their own hidden agendas when serious threats arise. As an example, a potentially serious problem arose some decades ago when it became apparent that EM radiation from power lines might detrimentally affect people, or at least some people.  To put it gently, a great deal of fuss and fury resulted, and a great deal of money was and is spent by the power companies (or through organizations and foundations funded by them) on EM bioeffects research.  Not too surprisingly, just about the entire output of this industry-funded research "finds" that there is no problem with powerline radiation {[79]}.  Scientists such as Robert Becker {[80]} {[81]} who advocate or show otherwise usually wind up having all their funds cut off, being hounded from their jobs, and—in the case of Becker—being forced to retire early.

It is no different in the electrical energy science field {21}.

Storage Battery Companies

Battery companies are primarily of much the same outlook and ansatz as are the power companies.  They have gone to pulse charging of batteries and improved battery chemistry and materials {[82]}.  They have no notion that batteries do not power circuits, but only make source dipoles—and it is the source dipole that then extracts EM energy from the vacuum and pours it out into the external circuit. 

Consequently, they erroneously believe that chemical energy in the battery is expended in order to provide power to the external circuit.  Instead, it is expended only to continuously remake the source dipole, which the closed current loop circuit fiendishly keeps destroying faster than the load is powered.

They also have not investigated deliberately dephasing and decoupling the major ion current within the battery and between the plates, from the electron current between the outside of the plates and the external circuit.  Consequently they have no concept of permissible Maxwellian COP>1.0 battery-powered systems. Instead, battery companies, scientists, and engineers still believe—along with the power companies and most electrodynamicists, and the environmental community—that applying the Lorentz symmetrical regauging to the Heaviside-Maxwell equations retains all the Maxwellian systems.  It does not.  Instead, it arbitrarily discards all Maxwellian systems which are permitted by the laws of nature and the laws of physics to produce COP>1.0!
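
For reference, the "Lorentz symmetrical regauging" referred to here is, in standard notation, the imposition of the Lorenz condition on the potentials, shown below.  The condition itself is entirely conventional; the claim that imposing it discards all Maxwellian systems capable of COP>1.0 is the author's contention, not part of the standard theory.

    \nabla \cdot \mathbf{A} + \frac{1}{c^{2}}\frac{\partial \varphi}{\partial t} = 0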

University Community

The University community mostly supports the prevailing EM view.  It also suffers from the rise of common "greed" in the universities themselves.  The professor now must attract external funding (for his research and for his graduate students—and especially for the lucrative "overhead" part of the funding, which goes to the University itself).  The research funds available for "bidding" via submitted proposals are already cut into "packages" where the type of research to be accomplished in each package is rigorously specified and controlled.  Research on COP>1.0 systems is strictly excluded.  Dramatic revision of electrodynamics is excluded.

Unless the professor successfully bids and obtains packages and their accompanying funding, he is essentially ostracized and soon discharged or just "parked" by the wayside.  Also, if he tries to "go out of the box" in his papers submitted for publication, his peer reviewers will annihilate him and his papers will not be published.  Shortly he will effectively be blacklisted and it will be very difficult for him to have his submitted papers honestly reviewed, much less published.  Again, that means no tenure, no security, and eventual release or "dead-end parking" by the university. 

When one looks at the "innovative" packages so highly touted, they either (1) are focused upon some approved topic such as hot fusion—which has consumed billions and has yet to produce a single watt on the power line, and cannot do so in any reasonable time before the collapse of the Western economy—or (2) use clever buzzwords for things which are actually "more of the same" and "in the box" thinking, with just some new words or twists thrown in for spin control.

Meanwhile, all this makes for a self-policing system, which rewards conservatism—conservative publications, conservative research, conservative thinking, conservative teaching, etc.  In short, it selects and approves electrical power system research that is "too little, too late" to solve the world energy crisis in time, and ruthlessly rejects all the rest.  It also makes for a self-policing system which roots out and destroys (or parks on the sidelines) those professors, graduate students, and post-docs who—given a chance to be highly innovative and "out of the box" researchers—might upset the status quo.

In short, the scientific community is itself the greatest arch foe of high innovation, just as Planck indicated.  The university generally typifies and reflects that overall attitude because its outside research funds are controlled and managed by the upper echelons of the organized Big Science community and the governmental community.

Government Community—Technical

The technical part of the U.S. government research community is drawn from the universities, private industry, etc.  It mostly reflects an even more conservative group than the universities.  Again, papers published and funding are the major requirements, within given and largely accepted scientific constraints.  Further, the managerial government scientists must compete for funding, annual budgets, etc. and have their own "channel" constraints from on high.  At the top levels (such as NSF and NAS), cross-fertilization by the aims and perceptions of the conservative scientific community leaders is achieved. 

Hence the government technical community is largely constrained in two fashions: (1) by its own forced competition for funds, facilities, positions, programs, etc., and (2) by its strong cross-fertilization from the top scientific personnel in NSF, NAS, etc.  Individual scientists also face the need to publish or perish, and so are further constrained by the reviewers and editors of the journals.

Most managers within the government scientific community are striving to scamper up the managerial ladder, much as managers elsewhere.  One's power and prestige rise as one's position level rises—and particularly as the share of the government's research budget that one controls rises.  There is a fine tightrope to walk.   As one gains control of more of the government budget for research, one becomes a powerful influence on the large research corporations which will submit very complex and extensive proposals for the funds.

A sort of "common understanding" thus arises between industry leaders, higher government research leaders and managers, etc.  This can be so profound that the practical result is almost a sort of "collusion by common understanding" between the government and industrial complexes and a fusion into one consortium—essentially the "military-industrial complex" which President Eisenhower warned Americans against.

The result is that the government managers in their Requests for Quotation (RFQs) use words such as "out of the box" and "highly innovative".  However, they rarely fund such proposals because they simply cannot obtain approval for such budgets and programs from "higher up the chain".  As witnessed by the ultrawideband (UWB) radar controversy, the government technical community is even more resistant to innovation and change than is the civilian technical community. 

As an example, the early UWB radar pioneers (Harmuth, Barrett, etc.) were attacked by entrenched government scientists and government scientific organizations with a viciousness rarely seen in the annals of government science.  The objection raised was that sinusoidal EM waves could not do such things—even though the UWB radar used nonsinusoidal EM waves.  Further, small UWB radar sets were commercially available and used to detect voids in concrete structures, the ground, etc.  The real reasons for the violent attacks were the prestige and power of the Stealth community at the time—and because UWB radar had the implication of tracking Stealth vehicles readily.

Interestingly, the arch foes of UWB at the time, today would have us believe they are "staunch experts" in the UWB field.  To understand their remarkable metamorphosis, one need only recall Arthur C. Clarke's words, quoted earlier.

In the COP>1.0 EM energy field, we are still rather much at the stage where the UWB researchers started.  We are still in the "violent attack, personal insults, character assassination, slander, libel, etc." stage.  Sadly, such ad hominem savagery comes from scientists who themselves have no notion of how electromagnetic circuits are actually powered, and who—like ostriches—still have their heads buried in the sand back there in the 1880s, when Lorentz discarded the enormous Heaviside energy flow component.

Government Community—Non-Technical

Here we have a rather mixed situation.  The nontechnical person—e.g., a Senator or a Congressperson—is operating under a distinct disadvantage.   In taking the stance that much better electrical power systems can readily be achieved, he or she is in fact opposing almost the entire set of University, Government Technical, Power Company, Battery Company, and Organized Science communities.  Further, in most cases his or her technical advisors are themselves from one or another of those communities, and are likely to return to those communities when the Senator or Congressperson leaves office, or even before.  So the Congress and the non-technical government community at large operate at a great disadvantage. 

As an example, admittedly there are some very misguided unorthodox energy system inventors and scientists out there, who in the guise of furthering COP>1.0 systems actually contribute to the problem rather than to the solution.  A few do not even realize that they cannot properly measure a "spiky" output with an RMS meter!  Some are also more interested in selling "dealerships" and "stock" than in furthering the science of COP>1.0 systems.  Few have submitted their purported COP>1 devices to rigorous testing by an independent, Government-certified test laboratory {[83]}. 
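
On the measurement point just made, a brief numerical sketch may help.  The snippet below is illustrative only (the waveform, sampling rate, and names are invented for the example); it compares the true RMS of a narrow pulse train with the reading of an average-responding meter that is calibrated to display 1.11 times the rectified mean, a scaling that is correct only for sine waves.

    import numpy as np

    # Illustrative waveform: a narrow rectangular pulse train (high crest factor).
    fs = 1_000_000                      # samples per second (assumed)
    t = np.arange(0, 0.1, 1 / fs)       # 100 ms of signal
    period = 0.001                      # 1 kHz repetition rate
    duty = 0.02                         # 2% duty cycle -> a "spiky" output
    amplitude = 10.0                    # volts
    v = amplitude * ((t % period) < duty * period)

    # True RMS: square, average over time, take the square root.
    true_rms = np.sqrt(np.mean(v ** 2))

    # Average-responding meter: rectified mean scaled by the sine form factor (~1.11).
    avg_responding_reading = 1.11 * np.mean(np.abs(v))

    print(f"true RMS             : {true_rms:.3f} V")
    print(f"avg-responding meter : {avg_responding_reading:.3f} V")

For the assumed 2% duty cycle the true RMS is about 1.41 V while the sine-calibrated reading is about 0.22 V, an error of more than a factor of six; the discrepancy grows as the pulses become narrower, which is why bench measurements of pulsed outputs with ordinary meters are unreliable.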

This "noise" seriously dilutes the unconventional scientific community's legitimate efforts in COP>1.0 systems.  By playing up such "dilution" and accenting "the crazies", the orthodox scientific community often convinces government nontechnical managers and personnel that the unorthodox scientific COP>1.0 community is comprised only of lunatics, charlatans, stock-scam artists and misguided crank inventors.

Such of course is not the case.  A goodly number of reputable, skilled scientists are seriously struggling with the problems of developing COP>1.0 EM power systems and devices.  A few are also struggling to develop an adequate theory of such systems.  Progress is slowly being made and has been made, in spite of the harassment {[84]}.

The independent assessments that Congress once enjoyed from the Office of Technology Assessment (OTA) are no more, because the OTA was abolished.  Now the committees, subcommittees, and individual Congresspersons and Senators are largely on their own, with their own staffs and their own technical advisors.

            Nonetheless, savvy Senators and Congresspersons can see that the U.S. Ship of State is headed for a great economic bust, probably the greatest one of all time.

The Government Non-Technical community (the Senate and the Congress, in particular) is in far better shape than the Government Technical community, to appreciate the world implications of the pending economic disaster.  I am hopeful that both the environmentalists and the Government Non-Technical community will rapidly unite in a common goal to get this vacuum energy program launched, under a National Emergency declaration.  If so, then they can solve the energy crisis and the pending economic crisis, in fairly short order, and permanently.

In Conclusion

There is an even more ominous specter looming behind the shadow of the coming great economic collapse.  When national economies get strained to the breaking point—with some of them failing, etc. worldwide as the price of oil escalates—the conflicts among nations will increase in number and grow in intensity.  About a year or so ahead of the "Great Collapse" of the world economies, the intensity and desperation of the resulting national conflicts will have increased to the breaking point.

Some 25 nations already have weapons of mass destruction (WMD)—including nuclear warheads; missile, aircraft, boat, and terrorist delivery systems; biological warfare weaponry; and other advanced weapons {9} {10}, etc. {[85]} {[86]}

Any knowledgeable person knows that hostile terrorist agents are already on site here in the U.S. {[87]}, and some will have smuggled in their WMDs.  It is not too difficult to surmise that some of those missing Russian "suitcase nukes" probably wound up right here in the U.S., hidden in our population centers {[88]}.  Or that some of Saddam Hussein's large stock of anthrax has been spirited into the U.S. as well.  As is well known, the threat from weapons of mass destruction is now officially recognized as the greatest strategic threat facing the U.S.  It is not a matter of if such weapons will be unleashed, but when.

            If one transposes that recognized escalating WMD threat onto the escalating economic pressures worldwide, then another factor comes into play—the dark side of the Mutual Assured Destruction (MAD) concept.   We have opted (at least to date) not to defend our populace.  The U.S. government has deliberately placed U.S. population centers in a defenseless situation so that their destruction is "assured" once the WMD balloon really goes up.

The insanity of the MAD concept is revealed when war preparations by many nations start to be perceived—as they will be, when the conflicts intensify sufficiently and the looming economic collapse tightens the cinch on the nations of the world.

Without any protection of its populace, a defending nation has to fire on perception of nuclear preparations by its adversaries, if that nation is to have even the slightest chance of surviving. 

At about that 2007 date when a nation sees its adversaries preparing WMD and nuclear assets for launch or use in ongoing intense conflicts, at some point that nation must pre-empt and fire massively, or accept its own "assured destruction". 

The only question in MAD is whether the assured destruction shall be mutual or solitary.

            So one or more nations will fire, immediately moving all the rest into the "fire on perception" mode.  Very rapidly, the situation then escalates to the all-out worldwide exchange so long dreaded.  This massive exchange means the destruction of civilization itself, and probably much of the entire biosphere for decades or centuries.  Such escalation from one or more initial nuclear firings has been shown for decades by all the old strategic nuclear studies.  It is common knowledge to strategic analysts unless one engages in wishful thinking.

            Eerily, this very threat now looms in our not too distant future, due in large part to the increasing and unbearable stresses that escalating oil prices will elicit.

            So about seven years or so from now, we will enter the period of the threat of the Final Armageddon, unless we do something very, very quickly now, to totally and permanently solve the present "electrical energy from oil" crisis.

            This is really why we must have a National Emergency proclamation, and a Manhattan Project. Mass manufacturing, deployment, and employment of replacement electrical power systems must begin in earnest in early 2004.

            In my estimate, the point of no return for developing the self-powering replacement systems is about the end of 2003.  If by early 2004 we do not have multiple types of vacuum-energy powered systems rolling off the assembly lines en masse, then we shall overshoot the point of no return.  In that case, it matters not whether the systems then become available or not.  They will then be too late to prevent the great Armageddon and the destruction of civilization.

            Personally, the present author regards the increasing energy crisis as the greatest strategic threat to the United States in its entire history.  I will do anything within my power to help prevent what I perceive to be the looming economic collapse of the Western world, preceded or accompanied by a sudden, explosive, all-out and continuing exchange of the WMD arsenals of most of the world.

            We can still meet this early 2004 production deadline.  It is difficult, but it is definitely doable at this time.

            We must do it, and we must do it now.  Else the technology for electrical energy from the vacuum will also be "too little, too late."  In that case, not only the world economy but civilization itself will likely be destroyed—not 100 years from now, not 50 years from now, but in less than one decade from now.

In the name of all humanity, let us begin!  Else by the time this first decade of the new millennium ends, much of humanity may not remain to see the second decade.

References and Notes



[1] Tesla, Nikola, “The Problem of Increasing Human Energy,” Century, June, 1900

[2] “The World Bank and the G-7: Changing the Earth’s Climate for Business,” Ver. 1.1, Aug. 1997, IPS

[3] Keeling et al., “Seasonal and interannual variation in atmospheric oxygen and implication for the global carbon cycle,” Nature, Vol. 358, Aug. 27, 1992, p. 354

[4] Vinnikov, Science, Dec. 3, 1999, p. 1934

[5] Linden, Eugene, “The Big Meltdown,” TIME, Sept. 4, 2000, p. 53

[6] Brown, Lester, et al., State of the World, Worldwatch Institute, 1999, p. 25, citing U.N. 1997 report

[7] Epstein, Paul, “Is Global Warming Harmful to Health?” Scientific American, August 2000, p. 50

[8] ibid., p. 57

[9] Brown, p. 26

[10] ibid., p. 25

[11] Annual Energy Outlook, DOE Energy Information Administration. EIA-X035

[12] Brown, p. 25

[13] Valone, Thomas, “Future Energy Technologies,” Proceedings of the Annual Conference of the World Future Society, 2000.

[14] US DOE Energy Information Administration, Energy INFOcard, 1999

[15] Future Energy: Proceedings of the First International Conference on Future Energy, Integrity Research Institute, 1999, CD-ROM



[1].         And of course it is said to be accidental that all the manipulative measures and profit-taking happen to coincide with the large increase in demand in the U.S. during the summer vacation and tourist months.

[2].         E.g., see F. Gregory Gause III, "Saudi Arabia Over a Barrel," Foreign Affairs, 79(3), May/June 2000, p. 80-94.  Quoting, p. 82:  "Saudi oil policy is now driven primarily by the immediate revenue needs of a government struggling to maintain a welfare state designed in the 1970s—when money seemed limitless and the population was small—for a society with one of the world's fastest-growing populations."  Our comment is that the financial disarray of the Saudis is seen by Gause as a need to get Saudi Arabia into the World Trade Organization—in other words, into the clutches of globalization. For a resounding exposé of the WTO, see Lori Wallach and Michelle Sforza, Whose Trade Organization? Corporate Globalization and the Erosion of Democracy, published by Public Citizen Foundation  and available by order from the web at http://www.globaltradewatch.org.  Wallach and Sforza reveal and document the machinations of the World Trade Organization as an instrument of globalization and usurpation of national rights.  The WTO is only one of many organizations prepared by the High Cabal (Winston Churchill's term) to establish the return for much of the world to a version of the old feudal capitalism where national governments posed no checks and balances and workers had no rights or benefits.

[3].         NAFTA stands for North American Free Trade Agreement, passed by Congress in 1993, creating a trade and investment region consisting of Canada, the United States, and Mexico.  GATT stands for the General Agreement on Tariffs and Trade, whose Uruguay Round, concluded in 1994, created the World Trade Organization (WTO).  Other such agreements set in place to extend globalized financial control over nations include or have included the MAI (Multilateral Agreement on Investment), negotiated within the OECD (Organization for Economic Co-operation and Development), the forum in which many of the "secret" agreements are prepared and then hurried through passage by "fast track" means, whereby the Congress allows the President to negotiate trade agreements that are then voted on by the Congress without amendment.  Quoting Moisés Naím, "Lori's War," Foreign Policy, Vol. 118, Spring 2000, p. 35,  "…'fast track' is the legislative legerdemain under which Congress allows the president to negotiate trade agreements that are then voted on without amendments.  Without it, the White House has no guarantee that lawmakers will not seek to change the terms of trade agreements reached after lengthy trade talks."  Our comment is that there should be no such guarantee to the White House, since the Congress consists of our duly elected representatives—elected precisely for the purpose of representing the U.S. public rather than the administration.  The "fast track" ploy is one way of bypassing full Congressional discussion, examination, etc. so that the desired globalization control measures can be "sneaked through" without a rigorous examination of their provisions.  In this way, national authority and constitutional provisions can gradually be undermined by a continuing series of such sneak actions.

[4].         According to the International Labour Organization, some 250 million boys and girls between the ages of five and 14 are exploited in hazardous work conditions.  Most of these children live in the developing world—although in industrialized countries such as the United States, hundreds of thousands of underage boys and girls are at work in sweatshops, farm fields, brothels, and on the street. E.g., see Sandy Hobbs, Michael Lavalette, and Jim McKechnie, Child Labor, ABC-CLIO, Inc., 1999.  For a poignant visual and verbal tour through the problem, see Russell Freedman and Lewis Hine, Kids at Work: Lewis Hine and the Crusade Against Child Labor, Houghton Mifflin, Aug. 1994.  The United Nations also has several publications on the problem and its extent.

[5].         As one example, the Russian mafia, the GRU, and the KGB under its new name are the dominant factors in Russia, Russian business, and the Russian side of relations between the U.S. and Russia.  See particularly Stanislav Lunev and Ira Winkler, Through the Eyes of the Enemy: Russia's Highest Ranking Military Defector Reveals Why Russia Is More Dangerous Than Ever, Regnery, Washington, D.C., 1998.  Quoting p. 12: "When the Soviet Union collapsed and its industries were privatized, there was only one group within Russia with the money to buy the new industries, and that was the Russian mafia.  But the mafia did more than buy the industries—it bought the government."  Quoting p. 13: "The Cold War is not over; the new Cold War is between the Russian mafia and the United States."  Quoting p. 14: "The Soviet Union did not collapse because of 'reform minded leaders' or because of the Reagan administration's brilliantly aggressive strategy (though that strategy played a part).  The truth is that the Russian mafia caused the collapse.  Soviet 'reform' was nothing more than a criminal revolution."

[6].         As another example, the Japanese Yakuza has penetrated most large Japanese corporations, including Japanese banking and even the national bank of Japan.  E.g., see Michael Hirsh and Hideko Takayama, "Big Bang or Bust?", Newsweek, Sept. 1, 1997, p. 44-45. Some $300 billion or more was extracted by the Yakuza from the Japanese taxpayers in a great land scandal. Japan's banks loaned billions to Yakuza-affiliated real-estate speculators, and the Yakuza would not repay the funds.  The banks were literally too terrified to collect on the $300-600 billion in bad debt that ensnared the banking system.  E.g., when Sumitomo Bank got a little aggressive in collecting loans in Nagoya, its branch manager was killed.  For a summary of this scandal, see Brian Bremner, "How the Mob burned the Banks: The Yakuza is at the center of the $350 billion bad-loan scandal," Business Week, Jan. 29, 1996, p. 42-43, 46-47.  The Japanese government—i.e., the taxpayers—had to absorb this enormous loss.

The Yakuza have achieved the power and status of a hostile nation, operating within U.S.-Japanese corporate relations, within other nations' relations with Japan, and within the oriental communities of foreign states.  Great influence upon the ability or inability of the U.S. government to continue its deficit financing now rests in the hands of the Yakuza.  Effectively, the Yakuza can trigger a U.S. stock market crash at will, by simply shutting off all further Japanese purchase of U.S. government deficit financing bonds.

The Yakuza regard themselves as the last Samurai, still follow the old Bushido concept, and are intensely hostile to the United States for the humiliating defeat of Japan in WW II and for dropping the atomic bomb on Japan.  At the critical time in the coming economic crisis, cessation of Japanese purchase of U.S. Government bonds can and will initiate the financial coup de grace which generates the final and sudden collapse of the U.S. economy, dragging down other economies with it.  It appears that the Yakuza tested the response of the U.S. stock market to this tactic on two occasions, by simply slowing the rate of Japanese purchases of U.S. government bonds.  The immediate drops in the stock market on both occasions showed the efficacy of this financial weapon, whenever the Yakuza wish to employ it.

In the U.S., the Yakuza constitute an important and growing hostile terrorist group, an intense subculture increasing in numbers, and a group biding its time prior to engaging in mass terrorism strikes.  Together with the Aum Shinrikyo, in 1990 the Yakuza leased the operational use of clandestine strategic longitudinal EM wave interferometer weapons in Russia.  They now possess some of the most powerful strategic weapons on earth (see notes 9 and 10, below).

[7].         The recent historic meetings of North and South Korean leaders, with proclamations of cooperation etc., are a healthy sign of change for the better.  With the former implacable North Korean dictator now dead, the new and younger leader may have a less hostile outlook.  However, progress can be made only very slowly, since the Communist apparatus is still in power in the armed forces and the nation.  Only as more of the old die-hard Communist leaders die off will real progress start to be made in materially lessening the threat posed by North Korea.  That is a process requiring a generation, but at least a start has been made.  For our purposes, that progress is likely to be slow enough that, while it damps the stress curves a little, it has no appreciable effect on the overall thesis that a great conflagration involving weapons of mass destruction will erupt within the decade.

[8].         Particularly see Lunev and Winkler, ibid., 1998 for the fact that Spetsnaz assassination and terror teams are already deployed on site in the United States, as are their WMD caches, including nuclear weapons.  A number of nations of the world have secretly deployed nuclear and biological weapons throughout the interior of their perceived enemy nations, often using diplomatic pouch privilege to bring them directly into the targeted nation.  It is called "dead man fuzing".  The notion was an extension of the MAD concept: with weapons and teams secreted throughout a targeted nation, the potent threat that, even if one's own nation is destroyed, one can still destroy the foe who did it supposedly acts as a deterrent.

[9].         Also involved are clandestine weapons of far greater power than nuclear weapons, but most of that subject is beyond the scope of this presentation.  For some time we have informed the U.S. government of these developments, the evidence, the events, etc.  An example—current at its time of preparation—is T. E. Bearden, Energetics: Extensions to Physics and Advanced Technology for Medical and Military Applications, CTEC Proprietary, May 1, 1998, 200+ page enclosure to CTEC Letter, “Saving the Lives of Mass BW Casualties from Terrorist BW Strikes on U.S. Population Centers,” to Major General Thomas H. Neary, Director of Nuclear and Counterproliferation, Office of the Deputy Chief of Staff, Air and Space Operations, HQ USAF, May 4, 1998.  Copies of a similar presentation were furnished to the DoD, to Senator Shelby as head of the Senate's Intelligence subcommittee, and to Congressman Weldon as head of the House's Intelligence subcommittee efforts, as well as to other U.S. government agencies and high-ranking officials.

[10].       The earlier clandestine asymmetrical strategic weapons were developed by the former USSR under rigid KGB and GRU control.  The first of these weapons were longitudinal EM wave interferometers; see Lunev and Winkler, ibid. 1998, p. 30: "Other instruments of destruction the Russians have had success with are seismic weapons.  Spitac and other small towns in the Transcaucasus Mountains were almost destroyed during a seismic weapons test that set off an earthquake.  This would have obvious applications on America's west coast and other areas of the world prone to earthquakes."

These are also the weapons obliquely referred to by Defense Secretary Cohen in this statement: "Others [terrorists] are engaging even in an eco-type of terrorism whereby they can alter the climate, set off earthquakes, volcanoes remotely through the use of electromagnetic waves… So there are plenty of ingenious minds out there that are at work finding ways in which they can wreak terror upon other nations…It's real, and that's the reason why we have to intensify our [counterterrorism] efforts."  Secretary of Defense William Cohen at an April 1997 counterterrorism conference sponsored by former Senator Sam Nunn.  Quoted from DoD News Briefing, Secretary of Defense William S. Cohen, Q&A at the Conference on Terrorism, Weapons of Mass Destruction, and U.S. Strategy, University of Georgia, Athens, Apr. 28, 1997.  The present author has been briefing these weapons to DoD and other government agencies for many years.  Most major weapons laboratories in various nations—including China—have now discovered longitudinal EM waves and either have such weapons or are furiously developing them.  As an example of a test by a giant strategic longitudinal EM wave interferometer, see Daniel A. Walker, Charles S. McCreery, and Fermin J. Oliveira, “Kaitoku Seamount and the Mystery Cloud of 9 April 1984,” Science, Vol. 227, Feb. 8, 1985, p. 607-611;  Daniel L. McKenna  and Daniel Walker, “Mystery Cloud: Additional Observations,” Science, Vol. 234, Oct. 24, 1986, p. 412-413.  This was a test in two modes: (a) in a cold explosion mode above the surface of the sea, creating a sudden low pressure zone above the water and accounting for the suction of water from the ocean to form the cloud, and (b) formation of a glowing spherical shell of light in the top of the cloud, and expanding that shell to some 400 miles diameter. The cold explosion can destroy a naval task force at sea or an armored element on the ground, as an example, or take out the personnel in fixed installations and fortified positions.  The intense shell of EM energy duds the electronics of any vehicle (aircraft, missile, satellite) passing through it, by inducing an extremely sharp pulse of electromagnetic energy arising inside the electronics, from local spacetime itself.  Hundreds of tests of these weapons have been observed.

The great advantage of using longitudinal EM waves is that they readily pass right through intervening mass such as the ocean or the earth, with little attenuation.  Hence an underwater nuclear submarine can be destroyed deep beneath the ocean—as witnessed by precisely that test of the first deployed Russian LW weapon to kill the U.S.S. Thresher in April 1963 off the East Coast of the United States.  The totally anomalous jamming signatures on the Thresher's surface companion, the U.S.S. Skylark, positively reveal the nature of the weapon employed.  The kill of the Arrow Air DC-8 in Gander, Newfoundland was by one of these weapons, with abundant decisive signatures.  The present author published a photograph of the strike of the weapon two weeks earlier, offset from a night shuttle launch at Cape Canaveral, Florida.  This was the same weapon, being used for crew training, which destroyed the Arrow some two weeks later.  The TWA-800 crash off the East Coast of the U.S. was also such a shoot-down, as have been numerous others over the years, documented by the present author.  At least seven nations now possess such longitudinal EM wave interferometer weapons.  Others are working furiously to develop them.  Also, even more powerful weapons of a novel kind have been developed and deployed by three nations—none of which is the United States.

[11].       Proceeding conventionally, it will be 50 years before the organized scientific community permits these emerging solutions to actually be developed and produced.  This is senseless; as the Manhattan Project in WW II showed, a newly emerging technology can go to production in four years.  Given only that neutron fission of the proper uranium isotope produced more neutrons than were input, the Manhattan Project developed operational atomic bombs of two major types in four years.  An appreciable number of other such "waiting areas for development" exist in the scientific literature.  However, they are not usually pushed forward into development for decades, due to the continuing resistance of the scientific community to all innovations which threaten the favored projects (such as hot fusion) and favored theories.  Any "scientist in the trenches" is well aware that the progress of science is by means of a continuing massive cat and dog fight, not at all by sweet scientific reason and logic.

[12].       A perhaps excessively harsh characterization of these "in the box" efforts is that they represent "psychological displacement activities" for the scientific community, the government decision makers, and perhaps even a part of the environmental community.  At best these programs say, "Look at all the good things we are doing!"  They must further be assessed in light of what they will not do, and of what the result of expending all our efforts on them will be: catastrophic economic collapse in a decade or less.

[13].       We strongly point out that Maxwell's equations are purely hydrodynamic equations.  There is thus a 100% correspondence between hydrodynamics and electromagnetic power systems.  Anything that can be done mechanically, or hydrodynamically with fluid flow, can be done with electromagnetic field energy flow, a priori.  It is thus a serious fault of the scientific community to proclaim that electrical power systems with COP>1.0 are prohibited on the grounds that closed systems cannot exhibit COP>1.0.  All such arguments are evanescent, since all they state is that an open EM system far from thermodynamic equilibrium with the active vacuum is what is required.  But the classical electrodynamics (136 years old) used to design and build electrical power systems does not even model the energy exchange between the active vacuum and the system.  To put it mildly, this is a completely inexplicable aberration of the scientific mindset, and it has been such for over a century.

[14].       Open EM systems far from thermodynamic equilibrium with their electrically active vacuum environment are indeed permitted by the Maxwell-Heaviside equations, prior to the arbitrary symmetrical regauging of the equations to yield simpler equations more mathematically amenable (done by Lorenz in 1867 and later by H.A. Lorentz).  The Lorentz condition requires that the system be symmetrical in its discharge of its free excitation energy.  The present closed current loop circuit ubiquitously used in power systems is designed specifically such that the system itself enforces the Lorentz symmetrical discharge of its excitation energy.  Thus one-half of the energy is discharged in the external losses and load, while one-half is discharged to destroy the source dipole actually extracting the EM energy from the active vacuum.  Such design guarantees a system which destroys its intake of free electrical energy from the vacuum faster than it can use part of that energy to power the load.  I.e., it guarantees suicidal systems which can only exhibit COP<1.0.  Every electrical system ever built has been and is powered by electrical energy extracted directly from the seething vacuum, as we explain in the present paper.

[15].       Such open systems far from thermodynamic equilibrium in the active vacuum exchange are rigorously permitted to exhibit COP>1.0 and to power themselves and their loads simultaneously.  By building only that subset of Maxwellian systems that forces Lorentz symmetrical regauging during discharge of the system's excitation energy, our scientists and engineers have in fact simply discarded all those Maxwellian systems not in equilibrium with the vacuum during their excitation discharge.  In short, they simply do not build any such systems, or even design them.  The scientific and engineering communities themselves have directly produced and maintained the present horrible energy crisis and pollution of the biosphere.

[16].       Ludvig Valentin Lorenz, "On the identity of the vibrations of light with electrical currents," Philosophical Magazine, Vol. 34, 1867, p. 287-301. In this paper Lorenz gave essentially what today is called the "Lorentz symmetrical regauging".  Not much attention was paid to the earlier Lorenz work.  Later, H.A. Lorentz introduced the symmetrical regauging of the Maxwell-Heaviside equations in its present modern form.  Lorentz's influence was so great that symmetrical regauging—which reduced the theory to a subset and discarded all Maxwell-Heaviside systems of COP>1.0 and capable of powering themselves and a load simultaneously—was adopted and utilized.  It remains in ubiquitous use; see note [17] for examples.

[17].       Lorentz symmetrical regauging is still utilized ubiquitously, so that no self-powering systems are designed and developed by our energy scientists and engineers.  E.g., see J. D. Jackson, Classical Electrodynamics, Second Edition, Wiley, New York, 1975, p. 219-221; 811-812.  In symmetrically regauging the Heaviside-Maxwell equations, electrodynamicists assume that the potential energy of a system can be freely changed at will (i.e., that the system can be asymmetrically regauged at will).  They do it twice in succession, but carefully select two such “paired simultaneous asymmetrical regaugings” such that the two new free force fields that emerge are equal and opposite and there is thus no net force which can be used to dissipate the free excess system energy from regauging and perform work in a load.  In short, they retain only those Maxwellian systems that foolishly oppose and strangle their own ability to freely discharge and use the free energy they first acquire (from the vacuum, by the first asymmetrical regauging).  Thereby the energy scientists arbitrarily discard all those Maxwellian systems which net asymmetrically regauge by changing their own potential energy and also producing a net nonzero force that can be used to discharge the excess free energy in a load without reservation.  Net asymmetrically regauged systems are open dissipative EM systems, freely receiving energy from their active external environment and thus permitted to dissipate the excess regauging energy in loads because they do not strangle that latter ability.  Hence the performance of the arbitrarily-excluded Maxwellian systems is not confined to classical thermodynamics, but is described by the thermodynamics of an open dissipative system.  Such systems can (i) self-organize, (ii) self-oscillate, (iii) output more energy than the operator himself inputs (the excess is freely received from the external active environment), (iv) “power” their own losses and an external load simultaneously (all the energy to operate the system and the load is received freely from the external active environment), and (v) exhibit negentropy.
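
For reference only, the gauge freedom and the Lorenz ("Lorentz") condition discussed in notes [14]-[17] have a standard textbook form (SI units; see, e.g., the Jackson reference above).  The relations below state only the received view, not the interpretation argued in these notes:

    \[
    \mathbf{A} \rightarrow \mathbf{A} + \nabla \chi, \qquad
    \Phi \rightarrow \Phi - \frac{\partial \chi}{\partial t}
    \qquad \text{(gauge change; } \mathbf{E}, \mathbf{B} \text{ unchanged)}
    \]
    \[
    \nabla \cdot \mathbf{A} + \frac{1}{c^{2}} \frac{\partial \Phi}{\partial t} = 0
    \qquad \text{(Lorenz condition)}
    \]

Imposing the second relation decouples the coupled potential equations into separate wave equations for \Phi and \mathbf{A}; that symmetric simplification is what these notes call "symmetrical regauging."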

[18].       We can now show that enormous EM energy flow can be easily and cheaply initiated from the active vacuum, anywhere, at any time.  The basis for this was in fact discovered by Heaviside in the 1880s.  Lorentz knew of this huge energy flow component but discarded it arbitrarily, apparently to avoid being attacked and accused of being a perpetual motion advocate. See H.A. Lorentz, Vorlesungen über Theoretische Physik an der Universität Leiden, Vol. V, Die Maxwellsche Theorie (1900-1902), Akademische Verlagsgesellschaft M.B.H., Leipzig, 1931, "Die Energie im elektromagnetischen Feld," p. 179-186.  Figure 25 on p. 185 shows the Lorentz concept of integrating the Poynting vector around a closed cylindrical surface surrounding a volumetric element.  This is the procedure which arbitrarily selects only a small component of the energy flow associated with a circuit—specifically, the small Poynting component striking the surface charges and being diverged into the circuit to power it—and then treats that tiny component as the "entire" Poynting energy flow.
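
For reference, the textbook Poynting quantities at issue here and in notes [56]-[58] below have the standard SI form; again, these relations state only the received view:

    \[
    \mathbf{S} = \mathbf{E} \times \mathbf{H} \qquad \text{(Poynting flux of field energy)}
    \]
    \[
    u = \tfrac{1}{2}\left( \varepsilon_{0} E^{2} + \frac{B^{2}}{\mu_{0}} \right) \qquad \text{(field energy density)}
    \]
    \[
    \frac{\partial u}{\partial t} + \nabla \cdot \mathbf{S} = -\,\mathbf{J} \cdot \mathbf{E} \qquad \text{(Poynting's theorem)}
    \]

The closed-surface integral Lorentz evaluated, \oint \mathbf{S} \cdot d\mathbf{a}, gives the net field energy per unit time passing out through the surface of the enclosed volume.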

[19].       The mathematical "trick" used by Lorentz to get rid of this easily and universally evoked giant negentropy is still employed by electrical scientists and engineers without realizing what is actually being discarded.  For a full explanation, see T.E. Bearden, "Giant Negentropy from the Common Dipole," Proc. IC-2000, St. Petersburg, Russia, July 2000 (in press).  A series of excellent papers by the Alpha Foundation's Institute for Advanced Study (AIAS) have also been published, approved for publication, or submitted for consideration, in leading journals.  An example is M.W. Evans, T.E. Bearden et al., "Classical Electrodynamics without the Lorentz Condition: Extracting Energy from the Vacuum," Physica Scripta, Vol. 61, 2000, p. 513-517.  A most formidable new AIAS paper, "Electromagnetic Energy from Curved Spacetime," has been submitted to Optik and is in the referee process.  Two related papers giving a very solid basis for vacuum energy are M.W. Evans et al., "The Most General Form of Electrodynamics," and "Energy Inherent in the Pure Gauge Vacuum," both submitted to Physica Scripta and in the referee process.  The theoretical basis for extracting copious EM energy from the vacuum is now unequivocal and either has been published or is rapidly being published in leading journals.

[20].       For example, see Myron W. Evans et al., AIAS group paper by 15 authors, "Classical Electrodynamics Without the Lorentz Condition: Extracting Energy from the Vacuum," 2000, ibid.; "Runaway Solutions of the Lehnert Equations: The Possibility of Extracting Energy from the Vacuum," Optik, 2000 (in press); "Vacuum Energy Flow and Poynting Theorem from Topology and Gauge Theory," submitted to Physica Scripta; "Energy Inherent in the Pure Gauge Vacuum," submitted to Physica Scripta; "The Most General Form of Electrodynamics," submitted to Physica Scripta; "The Aharonov-Bohm Effect as the Basis of Electromagnetic Energy Inherent in the Vacuum," submitted to Optik; "Electromagnetic Energy from Curved Spacetime," submitted to Optik.

[21].       As an example: The most critical scientist in the Western world, working on the "energy from the vacuum" approach, is Dr. Myron Evans, Founder and Director of the Alpha Foundation's Institute for Advanced Study (AIAS).  Dr. Evans was hounded from his professorial position, has had his life threatened, has been without salary for several years, and fled to the United States for his very life.  He has some 600 papers in the hard literature, and is presently producing—in accord with Dr. Mendel Sachs' epochal union of general relativity and electrodynamics—the world's first engineerable unified field theory, and an advanced electrodynamics fully capable of dealing with and modeling EM energy from the vacuum.  Yet, Dr. Evans lives in the United States (where he recently became a naturalized citizen) at the poverty level.  He can afford only one meal a day, has no automobile, no air conditioning, and continues epochal work under a medical condition that would stop any ordinary person less scientifically dedicated.  He continues to be vilified and viciously attacked by elements of the scientific community, even though other elements are of much assistance in publishing and reviewing his papers, etc.  It is a remarkable commentary upon the sad state of our scientific community that such a scientist and such epochal work, of tremendous importance to both the United States and all humanity, must continue in such circumstances.  Meanwhile, the scientific community spends billions on vast projects of little significance in general, and of no significance at all in avoiding the coming world economic collapse and the destruction of civilization.  If this paper should fall into sympathetic hands which can obtain funding for Dr. Evans, then this author most fervently urges that such be accomplished at all speed.  The fate of most of the civilized world may well hinge upon such a simple thing, and upon such an insignificant expenditure.

[22].       These are listed in M.W. Evans et al., "Classical Electrodynamics Without the Lorentz Condition: Extracting Energy from the Vacuum," 2000, ibid.

[23].       This system exists in small working prototype already, but I am under a nondisclosure agreement and cannot reveal the details of the process or the identity and location of the inventor.  The system is capable of being rapidly scaled up to meet the 2003 critical milestone of "ready for mass production".  One can expect up to a COP = 4 from this process.

[24].       In an electrical power system, the Coefficient of Performance (COP) may be taken as the average energy dissipated in the load divided by the average energy furnished to the system by the operator.  Or, it may be taken as the average power dissipated in the load divided by the average power dissipated in the input process.  COP can be taken across any component, several components, or the entire system.  The COP of a normal generator itself may be 0.9, for example, whereas, when the entire system, including the heater, etc., is taken into account, the system COP may be only 0.3.  For COP>1.0, excess energy must be furnished to the system by the external environment, while only part of the energy (or none of it) is input by the operator.
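
As a purely arithmetical illustration of the bookkeeping just described, the following is a minimal sketch in Python.  The numbers are the note's own illustrative figures (plus the COP = 4 figure of note [23]), not measurements:

    def cop(energy_to_load: float, energy_from_operator: float) -> float:
        """Coefficient of performance: energy delivered to the load per unit
        of energy the operator supplies over the same measurement interval."""
        return energy_to_load / energy_from_operator

    # A conventional generator considered by itself (the note's example figure):
    print(cop(energy_to_load=90.0, energy_from_operator=100.0))   # 0.9

    # The same plant measured from fuel input to delivered load (example figure):
    print(cop(energy_to_load=30.0, energy_from_operator=100.0))   # 0.3

    # COP > 1.0, as defined in the note, requires that the environment supply the
    # balance of the energy; efficiency (output divided by total input from all
    # sources, operator plus environment) still cannot exceed 1.0.
    print(cop(energy_to_load=400.0, energy_from_operator=100.0))  # 4.0 (cf. note [23])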

[25].       The Kawai process, Johnson process, and the magnetic Wankel engine are ideal for this purpose.

[26].       T.E. Bearden, "Bedini's Method For Forming Negative Resistors In Batteries," Proceedings of the IC-2000, St. Petersburg, Russia, July 2000 (in press).

[27].       Teruo Kawai, "Motive Power Generating Device," U.S. Patent No. 5,436,518.  Jul. 25, 1995.  Applying the Kawai process to a magnetic motor essentially doubles the motor's efficiency.  If one starts with high efficiency magnetic motors of, say, COP = 0.7 or 0.8, then the new COPs will be 1.4 and 1.6.  Two Kawai-modified high efficiency Hitachi motors were in fact independently tested by Hitachi and yielded COP 1.4 and 1.6 respectively.

[28].       See T.E. Bearden, “The Master Principle of EM Overunity and the Japanese Overunity Engines,”  Infinite Energy, 1(5&6), Nov. 1995-Feb. 1996, p. 38-55; “The Master Principle of Overunity and the Japanese Overunity Engines: A New Pearl Harbor?”, The Virtual Times, Internet Node www.hsv.com, Jan. 1996.  The principle of the magnetic Wankel engine is self-evident from the drawings alone.

[29].       Johnson, Howard R., "Permanent Magnet Motor."  U.S. Patent No. 4,151,431,  Apr. 24, 1979; "Magnetic Force Generating Method and Apparatus," U.S. Patent No. 4,877,983, Oct. 31, 1989; "Magnetic Propulsion System," U.S. Patent No. 5,402,021,  Mar. 28, 1995.

[30].       In magnetic materials, the presence of two electrons near each other and having parallel spins results in the presence of a very strong force tending to flip the spin so that they are antiparallel.  The forces between the electrons due to spin geometry are exchange forces of quantum mechanical nature.  In complex assemblies of different magnetic materials comprising a single stator or rotor magnet, the shapes and structures can be produced so that, as the rotor moves by the attracting stator and enters the usual back mmf zone, the powerful spin force is suddenly unleashed by the geometry, relative field strengths, and movement.  This triggers the release of a violent pulse of magnetic field that greatly overrides the back mmf and strongly repels the rotor on out of this "gate" region where the exchange force is triggered.  Exchange force pulses may momentarily be 1,000 times as strong as the magnetic field H, or in some cases even stronger.  Evoking these responses automatically by the materials themselves, at controlled times and directions, produces the open system freely adding rotary energy from its vacuum exchanges inside the nonlinear materials.  Johnson has been able to achieve this effect consistently, opening the way for a legitimate self-powering permanent magnet motor.  We accent that the electrons involved are in direct energy exchange with the vacuum, and the exchange force energy comes from the violently broken symmetry in that vacuum exchange.  Multivalued magnetic potentials and hence nonconservative magnetic fields arise naturally in magnetic theory anyway.  However, conventional scientists exert enormous effort to eliminate such effects or minimize them—when in fact what is needed is to deliberately evoke and use them to produce systems with COP>1.0.

[31].       Surrounding every dipolar EM circuit there exists a vast flow of nondiverged EM energy which misses the circuit entirely and is not presently accounted for (thus "dark") in electrical power systems and circuit theory.  Heaviside discovered it, Poynting never realized it, and Lorentz discarded it.  He discarded it because (a) he reasoned it was physically insignificant since it did nothing in the circuit, and (b) no one had the foggiest notion where such an enormous flow of EM energy—pouring from the terminals of every battery and generator—could possibly be coming from.  The trick Lorentz used to arbitrarily discard it is still used by electrodynamicists ubiquitously.  For a full background, see T.E. Bearden, "Giant Negentropy from the Common Dipole," Proc. IC-2000 (ibid.); "On Extracting Electromagnetic Energy from the Vacuum," Proceedings of the IC-2000, St. Petersburg, Russia, July 2000 (in press); "Dark Matter or Dark Energy?", Journal of New Energy, 2000 (in press).

[32].       Energy cannot be created or destroyed, but only changed in form.  Changing the form of energy is called "work".  When one joule of collected energy is "dissipated" to perform one joule of work, one still has one joule of energy remaining after that joule of work has been done.  The energy is now just in a different form.  Scattering of energy in a resistor, e.g., is perhaps the simplest way of performing work, and is known as "joule heating".  However, for a thought experiment: If the resistor is surrounded by a phase conjugate reflective mirror surface, much of the scattered energy will be precisely returned to the resistor as re-ordered energy.  It can indeed be "reused" by again being scattered in the resistor to do work.  There is no conservation of work law in physics or thermodynamics!  If there is no re-ordering at all, then one can get only one joule of work from one joule of energy changed in form.  The remaining joule of energy in different form (as in heat) is just "wasted" from the system.  But if we deliberately use re-ordering (such as simple passive retroreflection), we can reuse the same joule of energy to do joule after joule of work, changing the form of the energy in each interaction.  Eerily, most of our scientists and engineers are aware that energy can be changed in form indefinitely without loss, but will then argue that energy cannot be recycled and reused.  The scientific prejudice against "COP>1.0" processes and systems is so deep that many scientists are incapable of dealing with the real law of conservation of energy—which is simply that you can never get rid of any energy at all, but can only change its form.  Every joule of energy in the universe, e.g., was present not long after the Big Bang.  Since then, most of those joules of energy have each been doing joule after joule of work, for some 15 billion years.

[33].       Kenneth R. Shoulders, "Energy Conversion Using High Charge Density," U.S. Patent # 5,018,180, May 21, 1991.  See also Shoulders' patents 5,054,046 (1991); 5,054,047 (1991); 5,123,039 (1992), and 5,148,461 (1992).  See also Ken Shoulders and Steve Shoulders, "Observations on the Role of Charge Clusters in Nuclear Cluster Reactions," Journal of New Energy, 1(3), Fall 1996, p. 111-121.

[34].       For a summary of this rapidly developing field, see Diederik Wiersma and Ad Lagendijk, "Laser Action in Very White Paint," Physics World, Jan. 1997, p. 33-37.

[35].       For the early discovery, see V.S. Letokhov, “Generation of light by a scattering medium with negative resonance absorption,” Zh. Eksp. Teor. Fiz., Vol. 53, 1967, p. 1442; Soviet Physics JETP, Vol. 26, 1968, p. 835-839; “Laser Maxwell’s Demon,” Contemp. Phys., 36(4), 1995, p. 235-243.  For initiating experiments although with external excitation of the medium, see N.M. Lawandy et al., "Laser action in strongly scattering media," Nature, 368(6470), Mar. 31, 1994, p. 436-438.  See also D.S. Wiersma, M.P. van Albada, and A. Lagendijk, Nature, Vol. 373, 1995, p. 103.

[36].       For new effects, see D.S. Wiersma and Ad. Lagendijk, "Light diffusion with gain and random lasers," Phys. Rev. E, 54(4), 1996, p. 4256-4265; D.S. Wiersma, Meint. P. van Albada, Bart A. van Tiggelen, and Ad Lagendijk, "Experimental Evidence for Recurring Multiple Scattering Events of Light in Disordered Media," Phys. Rev. Lett., 74(21), 1995, p. 4193-4196; D.S. Wiersma, M.P. Van Albada, and A. Lagendijk, Phys. Rev. Lett., Vol. 75, 1995, p. 1739; D.S. Wiersma et al., Nature, Vol. 390, 1997, p. 671-673; F. Sheffold et al., Nature, Vol. 398, 1999, p. 206; J. Gomez Rivas et al., Europhys. Lett., 48(1), 1999, p. 22-28; Gijs van Soest, Makoto Tomita, and Ad Lagendijk, "Amplifying volume in scattering media," Opt. Lett., 24(5), 1999, p. 306-308; A. Kirchner, K. Busch and C. M. Soukoulis, Phys. Rev. B, Vol. 57, 1998, p. 277.

[37].       A true negative resistor appears to have been developed by the renowned Gabriel Kron, who was never permitted to reveal its construction or specifically reveal its development.  For an oblique statement of his negative resistor success, see Gabriel Kron, "Numerical solution of ordinary and partial differential equations by means of equivalent circuits," J. Appl. Phys., Vol. 16, Mar. 1945, p. 173.  Quoting: "When only positive and negative real numbers exist, it is customary to replace a positive resistance by an inductance and a negative resistance by a capacitor (since none or only a few negative resistances exist on practical network analyzers)."  Apparently Kron was required to insert the words "none or" in that statement.  See also Gabriel Kron, “Electric circuit models of the Schrödinger equation,” Phys. Rev. 67(1-2), Jan. 1 and 15, 1945, p. 39.  We quote: "Although negative resistances are available for use with a network analyzer,…".  Here the introductory clause states in rather certain terms that negative resistors were available for use on the network analyzer, and Kron slipped this one through the censors.  It may be of interest that Floyd Sweet was Kron's protégé.  Sweet worked for the same company, but not on the Network Analyzer project.  However, he almost certainly knew the secret of Kron's "open path" discovery and his negative resistor.  The present author worked for several years with Sweet, who produced a solid state device (the magnetic Vacuum Triode Amplifier) with no moving parts which produced 500 watts of output power for some 33 microwatts of input power.  See Floyd Sweet and T.E. Bearden, "Utilizing Scalar Electromagnetics to Tap Vacuum Energy," Proc. 26th Intersoc. Energy Conversion Engineering Conf. (IECEC '91), Boston, Massachusetts, p. 370-375.

[38].       Shoukai Wang and D.D.L. Chung, "Apparent negative electrical resistance in carbon fiber composites," Composites, Part B, Vol. 30, 1999, p. 579-590.  Negative electrical resistance was observed, quantified, and controlled through composite engineering by Chung and her team.  Electrons were caused to flow backwards against the voltage, with backflow across a composite interface.  The team was able to control the manufacturing process to produce either positive or negative resistance as desired.  The University at Buffalo filed a patent application.  It first placed a solicitation to industry for developments, and offered a technical package to interested companies signing nondisclosure agreements, then suddenly withdrew the offer.  It appears to this author that a "fix" may be in place on the development.

[39].       It is common knowledge that the point-contact transistor could be manufactured to produce a true negative resistor where the output current moved against the voltage. E.g., see William B. Burford III and H. Grey Verner. Semiconductor Junctions and Devices: Theory to Practice, McGraw-Hill, New York, 1965.  Chapter 18: Point-Contact Devices.  Quoting from p. 281: "First, the theory underlying their function is imperfectly understood even after almost a century…, and second, they involve active metal-semiconductor contacts of a highly specialized nature.  …The manufacturing process is deceptively simple, but since much of it involves the empirical know-how of the fabricator, the true variables are almost impossible to isolate or study.   … although the very nature of these units limits them to small power capabilities, the concept of small-signal behavior, in the sense of the term when applied to junction devices, is meaningless, since there is no region of operation wherein equilibrium or theoretical performance is observed.  Point-contact devices may therefore be described as sharply nonlinear under all operating conditions."  We point out that the power limitation can be overcome by arrays of multiple point contacts placed closely together.

[40].       It is the back coupling of the magnetic field from the secondary to the primary windings that forces as much energy to be dissipated in the primary of the transformer as is dissipated in the secondary.  If part of the return current in the secondary circuit bypasses the secondary of the transformer, the back field coupling to the primary is reduced accordingly.  If a negative resistor is used as the bypass, the bypassed current is "for free" (powered by the vacuum and a negentropic process).  Hence the result is a transformer/bypass system with COP>1.0.  In that case, such a system can have clamped positive feedback from the output of the secondary circuit into the primary to power it, while still having energy remaining to power a load.  No laws of physics or thermodynamics are violated, once one understands how an EM circuit is actually powered.  E.g., see Bearden, "On Extracting Electromagnetic Energy from the Vacuum," 2000 (ibid.).

[41].       The Kawai process was seized in the personal presence of the present author and his CTEC, Inc. Board of Directors.  We had reached a full agreement with Kawai to manufacture and sell his units worldwide, at great speed.  Control of his company, his invention, and Kawai himself was taken over in our presence the next morning, and the Japanese contingent was in fear and trembling.

[42].       The magnetic Wankel engine was developed and actually placed in a Mazda automobile.  The back mmf of the rotary permanent magnet motor is confined to a very small angle of the rotation.  As the rotor enters that region, a sudden cutoff of a small trickle current in a coil generates a momentary large Lenz law effect which overrides the back mmf and produces a forward mmf in that region.  The result is that one furnishes a small bit of energy to convert the engine to a rotary permanent magnet motor with no back mmf, but with a nonconservative net magnetic field.  For details, see T.E. Bearden, “The Master Principle of EM Overunity and the Japanese Overunity Engines,” Infinite Energy, 1(5&6), Nov. 1995-Feb. 1996, p. 38-55; “The Master Principle of Overunity and the Japanese Overunity Engines: A New Pearl Harbor?”, The Virtual Times, Internet Node www.hsv.com, Jan. 1996.

[43].       For a history and present status of Japanese organized crime, see Adam Johnston, "Yakuza: Past and Present," Committee for a Safe Society, Organized Crime Page: Japan (available on the Internet).  Michael Hirsh and Hideko Takayama, "Big Bang or Bust?"  Newsweek, Sept. 1, 1997, p. 44-45.

[44].       As a ball-park figure for illustration, a nominal electrical circuit or power system actually extracts from the vacuum and pours out into space some 10 trillion times as much energy flow as the poorly designed "single pass" circuits intercept and utilize.

[45].       However, the orthodox scientists do not know it, because they follow blindly the method introduced by Lorentz a century ago.  Lorentz arbitrarily discarded all that astounding energy flow that pours from the source dipole and misses the circuit, and retained only the tiny, tiny bit of it that strikes the circuit and enters it to power it.  Nothing at all has been done since then to capture more of that huge available energy and use it.  As a result of the ubiquitous Lorentz procedure, most electrical power system scientists and engineers are no longer aware that the huge unaccounted energy flow not striking the circuit even exists.

[46].       The active vacuum interacts profusely with every electrodynamic system, but this is not modeled at all by the scientists and engineers designing and building electrical power systems.  They unwittingly design every system to enforce Lorentz symmetrical regauging during excitation energy discharge, which in effect forces equilibrium in the vacuum-system energy exchange during that dissipation.  Hence, classical equilibrium thermodynamics rigorously applies during use of the collected energy.  Such systems are limited to COP<1.0 a priori.

[47].       In Nobelist Feynman's words: "We…wish to emphasize … the following points: (1) the electromagnetic theory predicts the existence of an electromagnetic mass, but it also falls on its face in doing so, because it does not produce a consistent theory – and the same is true with the quantum modifications; (2) there is experimental evidence for the existence of electromagnetic mass, and (3) all these masses are roughly the same as the mass of an electron.  So we come back again to the original idea of Lorentz – maybe all the mass of an electron is purely electromagnetic, maybe the whole 0.511 Mev is due to electrodynamics.  Is it or isn’t it? We haven’t got a theory, so we cannot say."  Richard P. Feynman, Robert B. Leighton, and Matthew Sands, Lectures on Physics, Vol. 2, 1964, p. 28-12.  Also: "We do not know how to make a consistent theory – including the quantum mechanics – which does not produce an infinity for the self-energy of an electron, or any point charge.  And at the same time, there is no satisfactory theory that describes a non-point charge.  It’s an unsolved problem." Ibid., Vol. 2, 1964, p. 28-10.  In fact, "energy" itself is actually a very nebulous and inexact concept.  Again quoting: "It is important to realize that in physics today, we have no knowledge of what energy is."  Ibid., Vol. 1, 1964, p. 4-2.

[48].       E.g., a very recent AIAS paper, M.W. Evans et al., "The Most General Form of Electrodynamics," submitted to Physica Scripta, rigorously shows just how wrong the present limited EM theory is.  Quoting: "…there can be no electro-magnetic field [as such]  in the vacuum.  In other words there can be no electromagnetic field propagating in a source-free region as in the Maxwell-Heaviside theory, which is written in flat space-time using ordinary derivatives instead of covariant derivatives."  The reason is quite simple: spacetime is active and curved.  The great John Wheeler and Nobelist Feynman, e.g., realized that EM force fields cannot exist in space.  They pointed out that only the potential for such fields existed in space, should some charges be made available so that the fields could be developed on them.  See Richard P. Feynman, Robert B. Leighton and Matthew Sands, The Feynman Lectures on Physics, Addison-Wesley, New York, Vol. I, 1963, p. 2-4.

[49].       Max Planck, as quoted in G. Holton, Thematic Origins of Scientific Thought, Harvard University Press, Cambridge, MA, 1973.

[50].       Arthur C. Clarke, in "Space Drive: A Fantasy That Could Become Reality" NSS ... AD ASTRA, Nov/Dec 1994, p. 38.

[51].       E.g., quoting Nobelist Lee:  "...the discoveries made in 1957 established not only right-left asymmetry, but also the asymmetry between the positive and negative signs of electric charge. … Since non-observables imply symmetry, these discoveries of asymmetry must imply observables." [T. D. Lee, Particle Physics and Introduction to Field Theory, Harwood, New York, 1981, p. 184.] On p. 383, Lee points out that the microstructure of the scalar vacuum field (i.e., of vacuum charge) is not utilized.  Particularly see Lee’s own attempt to indicate the possibility of using vacuum engineering, in his “Chapter 25: Outlook: Possibility of Vacuum Engineering,” p. 824-828.  Unfortunately Lee was unaware of Whittaker's profound 1903 decomposition of the scalar potential, as between the ends of a dipole, which gives a much more practical and easily evoked method for re-ordering some of the vacuum's energy, extracting copious EM energy flows from it, and setting the stage for self-powering electrical power systems worldwide.

[52].       The present author has taken the necessary first major step, by using Whittaker decomposition of the scalar potential between the poles of a dipole to reveal a simple, direct, cheap method for extracting and sustaining enormous EM energy flows from the dipole's asymmetry in its energetic exchange with the active vacuum.

[53].       The internal energy available to a generator is the shaft energy we input to it.  In large power plants this is usually by a steam turbine, and heat (from a nuclear reactor, burning hydrocarbons, etc.) is used merely to heat the water in the boiler to make steam to run the steam turbine.  Every bit of all that is just so the generator will have some internal energy made available with which it can then forcibly make the dipole.  That is all that generators (and batteries) do: Use their available internal energy to continually make the source dipole—which our engineers design the circuit to keep destroying faster than the load is powered.

[54].       By "dipole" we mean the positive charges are forced to one side, and the negative charges forced to the other.  This internal "source dipole" formed by the generator or battery is electrically connected to the terminals.

[55].       This has been known in particle physics for nearly 50 years.  It stems from the discovery of broken symmetry by C.S. Wu et al.  in 1957.  A dipole is known to be a broken symmetry in its violent energy exchange with the active vacuum.  Rigorously this means that some of the "disordered" EM energy received by the dipole from the vacuum, is re-ordered and re-radiated as usable, observable EM energy.  Conventional electrodynamics and power system engineering do not model the vacuum's interaction, much less the broken symmetry of the generator or battery dipole in that continuous energy exchange.

[56].       A pictorial illustration of the enormity of the energy flow through the surrounding space, and missing the external circuit entirely, is given by John D. Kraus, Electromagnetics, Fourth Edn., McGraw-Hill, New York, 1992—a standard university text.  Figure 12-60, a and b, p. 578 shows a good drawing of the huge energy flow filling all space around the conductors, with almost all of that energy flow not intercepted by the circuit at all, and thus not diverged into the circuit to power it, but just "wasted" by passing it on out into space.

[57].       That is, the interception of the little "boundary layer" or "sheath" of the flow, right on the surface of the wires.

[58].       Poynting never considered anything but this small "intercepted" component of the energy flow that actually entered the circuit.  E.g., see J.H. Poynting, “On the connexion between electric current and the electric and magnetic inductions in the surrounding field,” Proc. Roy. Soc. Lond., Vol. 38, 1885, p. 168.

[59].       In technical terms, the closed current loop circuit forces the Lorentz symmetrical regauging condition during the discharge of the excitation energy collected by the circuit.  By definition, half the energy is thus used to oppose the system function (i.e., to destroy the source dipole) while the other half of the excitation energy is used to power the external losses and the load.  With half the collected energy used to destroy the free extraction of energy from the vacuum, and less than half used to power the load, these ubiquitous circuits destroy their source of free vacuum energy faster than they power their loads.  Hence, we ourselves have to steadily input shaft energy to the generators so that they can continue to reform the dipole.  In the vernacular, that is not the way to run the railroad!

[60].       Maxwell's seminal paper was published in 1864, as a purely material fluid flow (hydrodynamic) theory.  At the time, the electron and the atom had not been discovered, hence the reactions of the two opposite charges in the wire (positive nuclei, negative Drude electrons) were not both modeled; only one was modeled, etc.  Maxwell omitted half the EM wave in the vacuum and half the energy, resulting in the omission of the EM cause and generatrix of Newton's third law reaction from electrodynamics.  This omission is still present in electrodynamics, where the third law reaction appears as a mystical effect without a known cause.  The cause and mechanism is the omitted reaction of the observed effect back upon the non-observed cause.  General relativity, e.g., does include this reaction mechanism from the effect back upon the cause.  However, electrodynamicists still omit half the electromagnetics, half the wave, and half the energy, as is easily shown.  E.g., it is demonstrated in every EM signal reception in a simple wire antenna, when the resulting perturbations of both the positive nuclei and the Drude electrons are correctly attributed to their interactions with the incoming EM fields (waves) from the vacuum.

[61].       Mario Bunge, Foundations of Physics, Springer-Verlag, New York, 1967, p. 176.

[62].       T.E. Bearden, "On Extracting Electromagnetic Energy from the Vacuum, " Proc. IC-2000, St. Petersburg, Russia, July 2000 (in press).

[63].       T.E. Bearden, "Bedini's Method For Forming Negative Resistors In Batteries," Proc. IC-2000, St. Petersburg, Russia, July 2000 (in press).

[64].       T.E. Bearden, "Giant Negentropy from the Common Dipole," Proc. IC-2000, St. Petersburg, Russia, July 2000 (in press).

[65].       E.g., a good short summary is given by Dr. Theodore Loder, Institute for the Study of Earth, Oceans, and Space (EOS), University of New Hampshire, Durham, NH in his short paper, "'Comparative Risk Issues' Regarding Present and Future Environmental Trends: Why We Need to be Looking Ahead Now!", prepared for the Senate Committee on the Environment and Public Works, June 1, 2000.  Certainly Dr. Loder and EOS can fully expound on the details of the biospheric pollution from the various contributing factors and processes.

[66].       One need only regard the vehement attacks by the scientific community (and much of the government including national laboratories) upon cold fusion researchers, to understand why many inventors and scientists in the COP>1.0 open dissipative energy field are openly distrustful of the government and government scientists.  Further, the U.S. Patent Office is known to be under rather explicit instructions not to issue patents on COP>1.0 electrical processes and systems.

[67].       E.g., the well-known Bohren experiment produces 18 times as much energy output as the operator must input.  The excess energy is extracted directly from the vacuum.  There has been no program, to my knowledge, seeking to exploit this well-proven COP>1.0 mechanism that has been in the hard science literature for some time.  See Craig F. Bohren, "How can a particle absorb more than the light incident on it?"  Am. J. Phys., 51(4), Apr. 1983, p. 323-327. Under nonlinear conditions, a particle can absorb more energy than is in the light incident on it.  Metallic particles at ultraviolet frequencies are one class of such particles and insulating particles at infrared frequencies are another. For independent validation of the Bohren phenomenon, see H. Paul and R. Fischer, “Comment on ‘How can a particle absorb more than the light incident on it?’,” Am. J. Phys., 51(4), Apr. 1983, p. 327.

[68].       G. Johnstone Stoney, “Microscopic Vision,” Phil. Mag., Vol. 42, Oct. 1896, p. 332; “On the Generality of a New Theorem,” Phil. Mag., Vol. 43, 1897, p. 139-142; “Discussion of a New Theorem in Wave Propagation,” Phil. Mag., Vol. 43, 1897, p. 273-280; “On a Supposed Proof of a Theorem in Wave-motion,” Phil. Mag., Vol. 43, 1897, p. 368-373.

[69].       E. T. Whittaker, “On the Partial Differential Equations of Mathematical Physics,” Math. Ann., Vol. 57, 1903, p. 333-355.

[70].       Evans in a private communication has pointed out that Whittaker's method depends upon the Lorentz gauge being assumed.  If the latter is not used, the Whittaker method is inadequate, because the scalar potential becomes even more richly structured.  My restudy of the problem with this in mind concluded that, for the negentropic vacuum-reordering mechanism involving only the dipole and the charge as a composite dipole, it appears that the Whittaker method can be applied without problem, at least to generate the minimum negentropic process itself.  However, this still leaves open the possibility of additional structuring.  The actual negentropic reordering of the vacuum energy (and the structure of the outpouring of the EM energy 3-flow from the charge or dipole) may permissibly be much richer than given by the simple Whittaker structure alone.  In other words, the Whittaker structure used in this paper should be regarded as the simplest structuring of the negentropic process that can be produced, and hence as a lower boundary condition on the process.

[71].       Time-like currents and flows do appear in the vacuum energy, if extended electrodynamic theory is utilized.  E.g., in the received view the Gupta-Bleuler method removes time-like photons and longitudinal photons.  For disproof of the Gupta-Bleuler method, proof of the independent existence of such photons, and a short description of their characteristics, see Myron W. Evans et al., AIAS group paper, "On Whittaker's F and G Fluxes, Part III: The Existence of Physical Longitudinal and Time-Like Photons," J. New Energy, 4(3), Winter 1999, p. 68-71; "On Whittaker's Analysis of the Electromagnetic Entity, Part IV: Longitudinal Magnetic Flux and Time-Like Potential without Vector Potential and without Electric and Magnetic Fields," ibid., p. 72-75.  To see how such entities produce ordinary EM fields and energy in vacuo, see Myron W. Evans et al., AIAS group paper, "On Whittaker's Representation of the Electromagnetic Entity in Vacuo, Part V: The Production of Transverse Fields and Energy by Scalar Interferometry," ibid., p. 76-78.  See also Myron W. Evans et al., AIAS group paper, "Representation of the Vacuum Electromagnetic Field in Terms of Longitudinal and Time-like Potentials: Canonical Quantization," ibid., p. 82-88.

[72].       For a short treatise on the complex Poynting vector, see D.S. Jones, The Theory of Electromagnetism, Pergamon Press, Oxford, 1964, p. 57-58.  In a sense our present use is similar to the complex Poynting energy flow vector, but in our usage the  absolute value of the imaginary energy flow is equal to the absolute value of the real energy flow, and there is a transformation process in between.  This usage is possible because the imaginary flow is into a transducer, which takes care of transforming the received imaginary EM energy into the output real EM energy.  We stress that the word "imaginary" is not at all synonymous with fictitious, but merely refers to what "dimension" or state the EM energy exists in.
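
            For comparison, the textbook object is the complex Poynting vector for time-harmonic fields,

                        S_c  =  (1/2) E × H*,        ⟨S⟩  =  Re(S_c),

            where E and H are the field phasors; the real part gives the time-averaged energy flow and the imaginary part is associated with reactive (stored-energy) exchange.  The usage in this note differs, as stated above, in taking the magnitude of the imaginary flow equal to that of the real flow, with a transduction step between them.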

[73].       Unfortunately, electrical engineers also use the term "power" to mean the rate of energy flow, when rigorously "power" means the rate at which work is done.  We stress that we fully understand the difference, but are using the terminology common to the profession.
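
            In symbols, the distinction is between power in the strict sense, P = dW/dt (the rate at which work is done), and the rate of EM energy flow through a surface A, ∮_A (E × H) · n dA, which engineering usage also calls "power."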

[74].       Nobel laureate Prigogine experienced something very similar when he proposed his open dissipative systems, whose operation does not lead to the conventional ever-increasing disorder.  To say that he was subjected to the Inquisition is not an exaggeration.  Other scientists have repeatedly been subjected to intense scientific attack and suppression, including Mayer (conservation of energy), Einstein (relativity), Wegener (continental drift), and Ovshinsky (amorphous semiconductors), to name just a few of the hundreds who have been attacked in similar fashion.  Science does not proceed by sweet reason, but by a vicious dogfight with no holds barred.  It delights in "wolf pack" attacks upon the scientist with a new idea or discovery.

[75].       And the scientific community is certainly not prepared for the notion of using time as energy, freely and anywhere.  In a sense, one can "burn time as fuel".  Consider this: in physics, the choice of fundamental units in one's model is completely arbitrary.  E.g., one can make a quite legitimate physics model having only a single fundamental unit (as is already done in certain areas of physics).  Suppose we make the joule (energy) the only fundamental unit.  It then follows that everything else, including the second and therefore time, is a function of energy.  One can utilize the second as c^2 joules of energy.  Hence, the flow of time would have the same energy density as mass.  After Einstein, the atom bomb, and the nuclear reactor, of course, we are all comfortable with the fact that mass is just spatial energy compressed by the factor c^2.  So we really should not be too uncomfortable at the notion that time itself is energy compressed by the factor c^2.  In this case, if during every second of the passage of time we were to convert one microsecond of it into ordinary EM spatial energy, we would produce some 9×10^10 joules of EM energy.  Since that is done each second, this would give us the equivalent of the output of ninety 1000-megawatt power plants.  If only 1.11% efficient, the conversion process would still yield the equivalent of one 1000-megawatt power plant.
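
            Restating that arithmetic under the note's own premise (one second of time taken as c^2, i.e., roughly 9×10^16 joules):

                        Energy in one microsecond:  10^-6 s × 9×10^16 J/s  =  9×10^10 J.
                        Converted every second:  9×10^10 J/s  =  9×10^10 W  =  90,000 MW  =  90 × (1000 MW).
                        At 1.11% efficiency:  0.0111 × 90,000 MW  ≈  1,000 MW, i.e., one 1000-megawatt plant.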

            In fact, it is in theory possible to do such a conversion, and we have previously indicated the various mechanisms involved.  There are also some rough experimental results that are at least consistent with the thesis.  The interested reader is referred to T.E. Bearden, "EM Corrections Enabling a Practical Unified Field Theory with Emphasis on Time-Charging Interactions of Longitudinal EM Waves," J. New Energy, 3(2/3), 1998, p. 12-28.  See also the author's similar paper with the same title in Explore, 8(6), 1998, p. 7-16.  We believe that the real energy technology for the second half of this century is based on the use of time as fuel.  The fundamental reactions and principles also enable a totally new form of high-energy physics reactions, in which very low spatial-energy photons are the carriers (their time components carry canonical time-energy, so that, given time-energy conversion, the highest-energy photons of all are low-frequency photons).  These new reactions (given in the references cited) are indeed consistent with the startling nuclear transformation reactions met at low (spatial) photon energies in hundreds of successful cold fusion experiments worldwide.

[76].       A classic example is given by Paul Nahin in his Oliver Heaviside: Sage in Solitude, IEEE Press, New York, 1988, p. 225.  Quoting: "J.J. Waterston's paper on the kinetic theory of gases, in 1845, was rejected by the Royal Society of London.  One of the referees declared it to be 'nothing but nonsense, unfit even for reading before the Society.' ... Waterston's dusty manuscript was finally exhumed from its archival tomb forty years later, because of the efforts of Lord Rayleigh..."  Our comment is that the same scientific attitude and resistance to innovative change prevails today.  As the French say, "Plus ça change, plus c'est la même chose!" ("The more things change, the more they stay the same.")

[77].       E.g., see G. Nicolis and I. Prigogine, Exploring Complexity, Piper, Munich, 1987 (an English version is Exploring Complexity: An Introduction, Freeman, New York, 1989); Ilya Prigogine, From Being to Becoming: Time and Complexity in the Physical Sciences, W.H. Freeman and Company, San Francisco, 1980.  In 1977, Prigogine received the Nobel Prize in Chemistry for his contributions to nonequilibrium thermodynamics, especially the theory of dissipative structures.

[78].       E.g., see Moisés Naím, "Lori's War," Foreign Policy, Vol. 118, Spring 2000, p. 28-55.  See particularly Lori Wallach and Michelle Sforza, Whose Trade Organization? Corporate Globalization and the Erosion of Democracy, published by the Public Citizen Foundation and available by order from http://www.globaltradewatch.org.  Perusal of the leading environmental activist web sites now shows a significant and rising awareness that globalization is merely the surface façade of an older, imperial, feudalistic capitalism in which the checks and balances established by nation-states are being slowly and methodically bypassed.

[79].       The interested reader is referred to Andrew A. Marino, Powerline Electromagnetic Fields and Human Health, at http://www.ortho.lsumc.edu/Faculty/Marino/Marino.html.  See particularly "Chapter 5: Blue-Ribbon Committees and Powerline EMF Health Hazards" and "Chapter 6: Power-Industry Science and Powerline EMF Health Hazards."  Biophysicist Marino is one of the leaders in the field and has been personally involved in many skirmishes over power-industry-dominated studies and findings.  As an example, quoting from Chapter 6: "Neither scientists nor the public can rely on power-industry research or analysis to help decide whether powerline electromagnetic fields affect human health because power-industry research and analysis are radically misleading."  There are many other reports in the literature that also show effects of nonionizing EM radiation on cells, including detrimental effects.

[80].       Becker studied not just the immune system—which "heals" nothing at all, not even its own damaged cells—but also the cellular regenerative system.  He and others found, e.g., that tiny trickle currents and potentials—either steady or pulsed—placed across otherwise intractable bone fractures would result in a rather astounding set of cellular changes that led to healing of the fracture by deposit of new bone.  Eerily, Becker showed that the red blood cells coming into the area and under the EM influence would shuck their hemoglobin and grow cellular nuclei (i.e., dedifferentiate back to an earlier cellular state).  These cells would then redifferentiate into the type of cells that make cartilage, and those cells in turn would differentiate into the type of cells that make bone and be deposited in the fracture to "grow bone" and heal the fracture.  Incredibly, this is the only true "healing" modality in all Western medical science, which is otherwise built upon the theory of intervention rather than healing.  After the intervention (which may be quite necessary!), the body's cellular regenerative system—or what is left of it after damage by such interventions as chemotherapy, etc.—is left entirely on its own to restore the damage (heal the damaged cells and tissues).  Becker was twice nominated for a Nobel Prize.  However, because he also testified in court against power companies, giving expert-witness testimony that EM radiation from power lines could indeed induce harmful conditions in some exposed people, he was suppressed and eventually forced to retire.

[81].       See Robert O. Becker and Andrew A. Marino, Electromagnetism and Life, State University of New York Press, Albany, 1982.  This reference gives a nice summary of EM bioeffects from the orthodox view, current as of the publication date.  For Becker's work with the cellular regenerative system, see particularly R.O. Becker, "The neural semiconduction control system and its interaction with applied electrical current and magnetic fields," Proc. XI Internat. Congr. Radiol., Vol. 105, 1966, p. 1753-1759, Excerpta Medica Foundation, Amsterdam.  See also R.O. Becker, "The direct current field: A primitive control and communication system related to growth processes," Proc. XVI Internat. Congr. Zool., Washington, D.C., Vol. 3, 1963, p. 179-183.

[82].       For an overview of present battery technology, see David Linden, Editor in Chief, Handbook of Batteries, Second Edition, McGraw-Hill, New York, 1995; Colin A. Vincent and Bruno Scrosati, Modern Batteries: An Introduction to Electrochemical Power Sources, Second Edition, Wiley, New York, 1997.  For a process to make a battery include a negative resistor and exhibit COP>1.0, see Bearden, "Bedini's Method For Forming Negative Resistors In Batteries," Proc. IC-2000, St. Petersburg, Russia (in press).

[83].       Such laboratories are private, professional testing companies whose expertise and qualifications, testing to NIST, IEEE, and U.S. government standards, use of calibrated instruments, and experienced professional test engineers and scientists have been certified by the U.S. government.  Such labs are routinely and widely used by aerospace firms.  A Test Certificate from such a lab is acceptable to the courts, the U.S. Patent and Trademark Office, the U.S. government (which requires it on many contracts), and the U.S. scientific community.  A goodly number of these laboratories are available throughout the U.S.

[84].       A few struggling publications in the "new energy" field are crucial to continued progress.  The major ones are Journal of New Energy (Dr. Hal Fox, publisher), Infinite Energy (Dr. Eugene Mallove, publisher), and Explore (Chrystyne Jackson, publisher).  Independent sustaining funding for these publications is urgently needed.  We also highly commend the Department of Energy's Transportation group for maintaining a DOE website carrying the advanced electrodynamics papers of the Alpha Foundation's Institute for Advanced Study (AIAS).  Funding for the AIAS is also urgently needed, to continue this absolutely essential theoretical work that is placing a solid physics foundation under the program of extracting and using EM energy from the vacuum.

[85].       Some recommended publications of interest are: Joshua Lederberg, Editor, Biological Weapons: Limiting the Threat, MIT Press, Cambridge, MA, 1999, with a foreword by Defense Secretary William S. Cohen; Richard A. Falkenrath, Robert D. Newman, and Bradley A. Thayer, America's Achilles' Heel: Nuclear, Biological, and Chemical Terrorism and Covert Attack, MIT Press, Cambridge, MA, 1998; Wendy Barnaby, The Plague Makers: The Secret World of Biological Warfare, Vision Paperbacks, Satin Publications Ltd., London, 1999 (a most readable and educational book for the nonspecialist); U.S. Congress, Office of Technology Assessment, Proliferation of Weapons of Mass Destruction: Assessing the Risks, Government Printing Office, Washington, D.C., 1993 (a major study on WMD and the risks to the U.S., including to the U.S. civilian population); Global Proliferation of Weapons of Mass Destruction, Part I, Senate Hearing 104-422, Hearings Before the Permanent Subcommittee on Investigations of the Committee on Governmental Affairs, U.S. Senate, Oct. 31 and Nov. 1, 1995.

[86].       Unfortunately, the extant unclassified references on longitudinal EM and more advanced EM weapons seem to be the publications by the present author, e.g., T.E. Bearden, "Mind Control and EM Wave Polarization Transductions, Part I," Explore, 9(2), 1999, p. 59; Part II, Explore, 9(3), 1999, p. 61; Part III, Explore, 9(4,5), 1999, p. 100-108;—"EM Corrections Enabling a Practical Unified Field Theory with Emphasis on Time-Charging Interactions of Longitudinal EM Waves," Journal of New Energy, 3(2/3), 1998, p. 12-28;—Energetics of Free Energy Systems and Vacuum Engine Therapies, Tara Publishing, Internet node www.tarapublishing.com/books, July 1997;—Gravitobiology: A New Biophysics, Tesla Book Co., P.O. Box 121873, Chula Vista, CA 91912, 1991;—Fer-de-Lance, Tesla Book Co., 1986;—AIDS: Biological Warfare, Tesla Book Co., 1988;—Soviet Weather Engineering Over North America, 1-hour videotape, 1985;—Energetics: Extensions to Physics and Advanced Technology for Medical and Military Applications, CTEC Proprietary, May 1, 1998, 200+ page enclosure to CTEC Letter, "Saving the Lives of Mass BW Casualties from Terrorist BW Strikes on U.S. Population Centers," to Major General Thomas H. Neary, Director of Nuclear and Counterproliferation, Office of the Deputy Chief of Staff, Air and Space Operations, HQ USAF, May 4, 1998;—"Overview and Background of KGB Energetics Weapons Threat to the U.S.," updated Jan. 3, 1999, furnished to selected Senators and Congresspersons.

[87].       As an example, for decades Castro ran guerrilla and agent training camps in southern Mexico.  Many of the graduates of those camps—trained terrorists all—have been infiltrated across the U.S. border and into the U.S., to bide their time and wait for instructions.  Some estimates are that several thousand such Castro agents alone are already on site and positioned for sabotage, poisoning of water supplies, destruction of transmission line towers, destruction of key bridges, etc.  Several other nations hostile to the U.S. are also known to have agent teams already on site within the U.S.  The new form of warfare/terrorism is to introduce the "troops," along with weapons caches, etc., into the adversary's nation and populace in advance.  Such preparations have definitely been accomplished within the United States, and undoubtedly some are still in progress.

[88].       E.g., see Stanislav Lunev and Ira Winkler, 1998, ibid.  Quoting, p. 22: "Though most Americans don't realize it, America is already penetrated by Russian military intelligence to the extent that arms caches lie in wait for use by Russian special forces—or Spetznatz."

Quoting, p. 26: "It is surprisingly easy to smuggle nuclear weapons into the United States.  A commonly used method is for a Russian airplane to fly across the ocean on a typical reconnaissance flight.  The planes will be tracked by U.S. radar, but that's not a problem.  When there are no other aircraft in visual range, the Russian airplane will launch a small, high-tech, stealth transport missile that can slip undetected into remote areas of the country.  The missiles are retrieved by GRU operatives.

            Another way to get a weapon into the country is to have an 'oceanographic research' submarine deliver the device—accompanied by GRU specialists—to a remote section of coastline.

            Nuclear devices can also be slipped across the Mexican or Canadian borders.  It is easy to get a bomb to Cuba and from there transport it to Mexico.  Usually the devices are carried by a Russian intelligence officer or a trusted agent."


The Author

Dr. Thomas Bearden (Lieutenant Colonel, U.S. Army, Retired) is presently President and Chief Executive Officer of CTEC, Inc., a Fellow Emeritus of the Alpha Foundation's Institute for Advanced Study (AIAS), and a Director of the Association of Distinguished American Scientists (ADAS).  He holds a PhD in Science, an MS in Nuclear Engineering, and a BS in Mathematics with a minor in Electronic Engineering, and is a graduate of the U.S. Army Command and General Staff College (C&GSC) and of the U.S. Army Guided Missile Staff Officer's Course (equivalent to an MS in Aerospace Engineering).  He has also completed graduate courses in statistics and electromagnetics, as well as numerous missile, radar, electronic warfare, and counter-countermeasures courses.  He had twenty years of active service in the U.S. Army.  His Field Artillery, Patriot, Hawk, Hercules, Nike Ajax, and technical research experience was followed by nineteen years of technical research in re-entry vehicles and heat shielding, computer systems, C4I, wargame analysis, simulation and analysis, EW, ARM countermeasures, and strategy and tactics.  He has spent more than twenty years in personal research on the foundations of electrodynamics, on open EM systems far from thermodynamic equilibrium with their active environment, and on the novel effects of longitudinal EM waves on living systems, and he has founded the beginnings of a legitimate theory of permissible COP>1.0 electrical power systems.  He is the author or co-author of approximately 200 papers and books and has been connected with four successful COP>1.0 laboratory prototype EM power systems.  He is one of the world's leading theorists dealing with the hard physics of over-unity energy systems and scalar weapons technology.