Monday, December 26, 2011

The March 29, 2011, E-Cat demonstration

On March 29, 2011, a demonstration of a small E-Cat device was given, with Andrea Rossi, Giuseppe Levi, Sergio Focardi, Sven Kullander and Hanno Essen present, and conducted by Rossi and presumably others.  According to calculations written up by Essen, 25 kWh of excess heat were produced by the device over a period of six hours.  Given the small amounts of hydrogen (0.11 g) and nickel (50 g) present at the start of the trial, Essen rules out a chemical reaction.

The demonstration was reported by NyTeknik on April 6.  According to the article, Giuseppe Levi is being paid by Rossi to carry out research on the E-Cat at the University of Bologna.  Sven Kullander is professor emeritus of physics at Uppsala University and chairman of the energy committee of the Royal Swedish Academy of Sciences.  Hanno Essen is an associate professor of physics at the Royal Institute of Technology in Sweden and one-time chairman of the Swedish Skeptics Society.  I remember reading somewhere that Kullander was on a fact-finding mission, but I don't recall the context.

An English transcript of a radio interview with Sergio Focardi provides additional historical information on the development of the E-Cat.  Focardi has not been privy to the undisclosed "catalyst" that is being added by Rossi to the system in order to facilitate the reaction, though he speculates that it has something to do with preventing the hydrogen ions from forming molecular hydrogen.  The interview confirms that gamma radiation is being generated, and for this reason there is lead shielding around the device.  Focardi suggests an interesting explanation for what is going on with the gamma rays:  a proton is somehow being added to the nickel nucleus by way of an unknown pathway.  This results in its transmutation into copper in a highly excited state and the eventual emission of a gamma ray photon.  When the gamma ray leaves, the new copper atom recoils like a cannon in the opposite direction.  If things were indeed happening like this, the account would go a long way toward explaining the heat that is generated.

Sunday, December 25, 2011

Beta decay, neutron activation analysis and the r-process

A Wikipedia article on neutron activation analysis (NAA) is helpful in understanding a little of what would be involved if inverse beta decay were causing a large number of neutrons to attach to nearby nuclei.  Not only would there be transmutations not unlike those found in the r-process, a phase of rapid nucleosynthesis hypothesized to take place within supernovae that is responsible for creating elements above iron; there would also be, following this activity, beta decay of unstable radioisotopes into more stable isotopes, releasing gamma rays at specific energy levels long after the experiment was dismantled, along the lines that Jacques Dufour described in a note on the Widom-Larsen work referred to in an earlier post.  This gamma emission is a property of beta decay that is used in NAA to detect the relative levels of different elements (and presumably isotopes) in a sample.

The process works by bombarding the sample with neutrons from a neutron source.  Following the bombardment, the transmuted nuclei decay into more stable isotopes according to their half-lives.  Apparently there is a body of expertise that has developed around analyzing the gamma ray spectra that result.  The neutron bombardment does not damage the sample, but the sample will remain radioactive at low and possibly medium levels after the analysis.

An important implication seems clear:  if there is a comparable process of electron capture that leads in turn to neutron capture and transmutations, then either
  1. The nickel or palladium used in the experiment will exhibit mild to medium levels of radioactivity afterwards; or
  2. There is some variable that has the effect of selecting transmutations of elements with very short half-lives.
I don't yet know much about the various decay chains that could be involved in such beta decay, but the second possibility could require new physics if it turns out to be true.  We should start with the assumption that the first case is the one probably happening.  If the first case turns out not to hold in some experiments, that is a good indication that some process other than inverse beta decay is taking place, although I find this scenario implausible, since it would appear to require that the measurements of the transmutations be in error.  (This conclusion assumes that possibility 2, above, is even less plausible.)

Up to now I haven't paid attention to the radioactivity of the samples that are mentioned in the LENR experiments, but I will start to keep tabs on this detail.  I remember something mentioned in connection with Piantelli, possibly, where a cathode was placed in a cloud chamber after excess heat was exhibited, and it was necessary to wait for two hours before attempting to look closely at the trajectories due to the high number of emissions coming off of the cathode initially.  In other experiments, it may be simply that the post-experiment radioactivity was overlooked.  Assuming that there is radioactivity (an assumption I will proceed with for now), I wonder if the cathodes become unsafe.

This excellent page at the University of Missouri, Columbia, Web site describes neutron activation analysis in further detail.  It mentions several parameters that have bearing on the results of a run:
  • Neutron flux
  • Irradiation time
  • Decay times
  • Measurement time
  • Detector efficiency
  • Isotope abundance
  • Neutron cross-section
  • Half-life
  • Gamma-ray abundance
The page includes a bibliography of books on activation analysis at the bottom.
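Most of the parameters listed above enter the standard activation equation, A = N·σ·φ·(1 − e^(−λ·t_irr))·e^(−λ·t_decay).  As a quick sketch of how they combine (in Python, with purely illustrative numbers not taken from any experiment mentioned here):

```python
import math

def induced_activity(n_atoms, sigma_barns, flux, t_irr, t_decay, half_life):
    """Induced activity (decays/s) after neutron irradiation.

    Standard activation formula:
    A = N * sigma * phi * (1 - e^(-lambda*t_irr)) * e^(-lambda*t_decay)
    """
    sigma_cm2 = sigma_barns * 1e-24          # 1 barn = 1e-24 cm^2
    lam = math.log(2) / half_life            # decay constant (1/s)
    saturation = 1 - math.exp(-lam * t_irr)  # build-up during irradiation
    decayed = math.exp(-lam * t_decay)       # decay before counting
    return n_atoms * sigma_cm2 * flux * saturation * decayed

# Illustrative numbers only: 1e20 target atoms, 5-barn cross-section,
# flux of 1e12 n/cm^2/s, 1-hour irradiation, 10-minute wait before
# counting, 2.6-hour half-life of the activation product.
A = induced_activity(1e20, 5.0, 1e12, 3600, 600, 2.6 * 3600)
print(f"{A:.3e} decays/s")
```

The detector efficiency and gamma-ray abundance from the list would then scale this activity down to an observed count rate.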

The r-process is a kind of nucleosynthesis that is hypothesized to take place during supernovae and to have taken place during the Big Bang.  If I understand what I have read, it involves a neutron flux high enough to push atomic nuclei toward the neutron drip line faster than they can beta decay into higher elements.  As the neutrons pile on, unstable isotopes are formed which eventually decay into other elements.  The r-process was set out in a landmark 1957 review paper by Burbidge, Burbidge, Fowler and Hoyle.

The r-process seems to be very similar to what is going on in the nickel and palladium hydride cathodes in the LENR experiments.  This connection was noted by Widom and Larsen in their February 20, 2006, paper.  I'm beginning to think that the spectroscopy results of the LENR experiments are like fingerprints that are unique or almost unique to each experiment and that, for a given graph, one can work backwards to deduce all or most of the important details that went into the reaction:
  • the original composition of the cathode;
  • the contents of the gas or electrolyte;
  • the energy released by the experiment;
  • the range of isotopes that were somehow given preference by the conditions of the experiment by an unknown variable; and
  • the approximate time the experiment ran.
The idea here is that the process would be similar to analyzing the radiation given off by a star, which I assume enables one to infer a number of details about the star.  I wonder if there are computational models on the Internet that can be leveraged to pursue this line of investigation further.

Discussion of Widom-Larsen in New Energy Times, issue 26

"The Widom-Larsen Not-Fusion Theory," New Energy Times, 26, January 11, 2008.  Available at

When Widom and Larsen came out with several papers between 2005 and 2008 setting out their theory that inverse beta decay best explains the evidence for LENR thus far reported, the theory turned out to be relatively controversial within the small group of researchers.  Steven Krivit, author of The Rebirth of Cold Fusion and editor of the New Energy Times Web site, asked a number of people to respond to a set of twenty questions probing various details relating to Widom and Larsen's work.  This discussion was published in New Energy Times issue 26, which is available at the link above.  Some of the people who are central to the LENR research took part in the discussion, and Krivit was able to obtain some of the views of Richard Garwin, a renowned physicist who worked with Edward Teller on the first hydrogen bomb.

Most of the critiques made for interesting reading, though a handful of them were emotional and lacked objectivity.  My own favorite line of investigation at this point is that inverse beta decay is going on under the control of several important variables, and the criticisms discussed in this issue provide a range of details that will be helpful in better understanding the implications of this category of explanation.

Richard Garwin was an early critic of the cold fusion research, and he subsequently visited both McKubre's lab at SRI International and a lab in France.  Steven Krivit has published Garwin's report on the visit to SRI International on his Web site, which should make for interesting reading.

There was a discussion between Krivit and Scott Chubb related in the article, and one of the issues raised by Chubb was that there were a number of prior articles that Widom and Larsen failed to credit.  Following are some of the authors mentioned in this connection:
  • Laili Chatterjee, in a 1998 paper published in Transactions of the American Nuclear Society
  • Mizuno
  • Kozima and his neutron band theory
  • John Fisher and his ideas relating to polyneutrons
  • Li (Xingzhong Li?)
  • George Anderman, in an ICCF1 paper
Lastly, Jacques Dufour, in a note critical of the Widom-Larsen work, mentioned that the theory did not agree with NAA analysis in terms of gamma ray emission, in that emissions should be detected after the experiment has been dismantled (which, presumably, is not something that has been seen).  By "NAA" I think Dufour is talking about neutron activation analysis.

Saturday, December 24, 2011

Widom and Larsen, "Absorption of Nuclear Gamma Radiation" (2005)

A. Widom and L. Larsen, "Absorption of Nuclear Gamma Radiation by Heavy Electrons on Metallic Hydride Surfaces," preprint (September 10, 2005), available at

Widom and Larsen seek to provide an explanation for the lack of hard gamma photons observed in LENR experiments.  They assume that hard photons are a necessary result of the reactions.  They look at the mean free path of the gamma photons and conclude that they are absorbed by the heavy electrons near the surface produced as a result of upward mass renormalization.  After a heavy electron has absorbed a hard photon, a large number of soft photons are emitted in the x-ray and infrared wavelengths.

Widom and Larsen, "Neutron catalyzed nuclear reactions" (2006)

A. Widom and L. Larsen, "Ultra low momentum neutron catalyzed nuclear reactions on metallic hydride surfaces," European Physical Journal C (2006).

This appears to be the first of an important series of papers in the field of cold fusion.  It suggests that what is taking place in these systems is not fusion, per se, in which two positively charged nuclei overcome Coulomb repulsion to fuse into a new nucleus.  Instead, inverse beta decay is taking place, in which neutrons are generated from free electrons and protons.  The neutrons then meander about at low energies and are absorbed, one after another, into the nuclei of nearby atoms.  As they load into these atoms, the atomic mass changes and eventually the atoms decay into other elements.  There are different chains of decay, some of which release significant amounts of energy, which manifests itself in part as heat given off by the system.
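As a toy illustration of this capture-then-decay picture (not the paper's actual calculation), one can walk successive neutron captures on nickel until an unstable isotope beta-decays into copper.  The half-lives and decay products in the comments are real nuclear data; the "one capture, then decay" logic is a deliberate oversimplification:

```python
# isotope -> (stable?, beta-decay daughter or None)
NUCLIDES = {
    "Ni-62": (True, None),
    "Ni-63": (False, "Cu-63"),   # beta-minus, half-life ~100 y
    "Ni-64": (True, None),
    "Ni-65": (False, "Cu-65"),   # beta-minus, half-life ~2.5 h
    "Cu-63": (True, None),
    "Cu-65": (True, None),
}

def capture_then_decay(start, n_captures):
    """Add neutrons one at a time; beta-decay whenever the product is unstable."""
    element, mass = start.split("-")
    path = [start]
    for _ in range(n_captures):
        mass = int(mass) + 1                  # neutron capture: A -> A+1
        current = f"{element}-{mass}"
        path.append(current)
        stable, daughter = NUCLIDES[current]
        if not stable:                        # beta decay: Z -> Z+1
            path.append(daughter)
            element, mass = daughter.split("-")
    return path

print(capture_then_decay("Ni-62", 1))   # ['Ni-62', 'Ni-63', 'Cu-63']
```

A real model would weight capture cross-sections against decay rates rather than decaying immediately, but even this skeleton shows how copper can appear in a nickel sample without conventional fusion.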

The paper has two main parts.  The first attempts to provide support for inverse beta decay through an elaboration of ideas found in earlier literature, and the second illustrates one concrete radioactive decay path.  The first part gets into some math I'm unfamiliar with, whereas it is not difficult to imagine applying the ideas set out in the second part to the construction of a basic numerical model that attempts to predict, or at least explain, the ratios of residual isotopes found in some of the experiments.

Steven Krivit and Nadine Winocur, "The Rebirth of Cold Fusion" (2004)

Steven Krivit and Nadine Winocur, The Rebirth of Cold Fusion (Los Angeles:  Pacific Oak Press, 2004).

This book covers the developments from Pons and Fleischmann to the 2004 Department of Energy review, which was underway at the time of writing.  The book is less technical than Mallove's Fire From Ice and Beaudette's Excess Heat, and it appears to be written with both a technical and a nontechnical audience in mind.  The authors are clearly passionate about the topic and devote the first few chapters to exploring the various options open to humankind in the way of energy production.  In the middle of the book there is a series of chapters on various types of evidence that have turned up which go into some of the main scientists working in the field and some of the better known experiments.  In addition to the valuable technical information, the authors have been in touch with many of these scientists and are familiar enough with the history of the last twenty years to provide some interesting anecdotes.

Thursday, December 22, 2011

Experiment: Focardi et al., "Evidence of electromagnetic radiation" (2004)

Focardi, S., et al., "Evidence of electromagnetic radiation from Ni-H Systems," in Eleventh International Conference on Condensed Matter Nuclear Science (2004).

Summary: Three Ni-H systems emitted gamma radiation after hydrogen was introduced.  The first system showed excess heat, and the second showed none.  When the third system underwent thermal excitation, the rate of photon emission increased for a short period of time.


In three different experiments, nickel plates were interleaved with heating elements in a closed chamber.  The chamber was first evacuated and then hydrogen gas was introduced.  Emissions in one experiment lasted forty-five days after degassing.  Gamma emission did not always depend on temperature.  In the second experiment, which showed marked gamma activity above background, samples were kept for fifty-two days in a vacuum while measurements were taken of photon emission, before hydrogen was introduced.  They obtained photon emission but not excess heat.  The spectrum lasted for twenty-six days after hydrogen was added.  In the third experiment there was no difference in spectra during the degassing period and the introduction of H2.  In all three experiments, the peak energy in the spectra was the same.  The first system showed excess heat at one point, and Cr and Mn turned up in the nickel samples.  The second system showed no excess heat or neutron emission and nothing unusual was found in the surface analysis.

There are two databases that can help in determining what is happening in a spectrum at a given energy range: GAMQUEST (Lawrence Berkeley National Laboratory) and NUDAT (Brookhaven National Laboratory).
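A minimal sketch of the kind of lookup these databases support: matching an observed gamma peak against a library of known line energies within a tolerance.  The three entries below are well-known calibration lines; a real library holds many thousands:

```python
# Known gamma line energies in keV (well-established calibration lines).
GAMMA_LINES_KEV = {
    "Cs-137": [661.7],
    "Co-60": [1173.2, 1332.5],
    "K-40": [1460.8],
}

def candidates(peak_kev, tolerance_kev=2.0):
    """Return (isotope, line energy) pairs lying within the tolerance."""
    return [
        (iso, line)
        for iso, lines in GAMMA_LINES_KEV.items()
        for line in lines
        if abs(line - peak_kev) <= tolerance_kev
    ]

print(candidates(1332.0))   # a peak near the upper Co-60 line
```

The real databases also let one filter by half-life and parent reaction, which is what makes it possible to ask whether a spectrum taken days after an experiment is consistent with a proposed activation product.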

An important possibility here is that what is emitted by the system depends upon how the system is set up.  Questions:  The difference curve in figure 4 is small; is it statistically significant?   What is degassing?  Were there transmutations in the third experiment?  Why were there no transmutations in the second experiment?

Wednesday, December 21, 2011

Experiment: Iwamura et al., "Observation of Nuclear Transmutation" (2004)

Iwamura, Y. et al., "Observation of Nuclear Transmutation Reactions induced by D2 Gas Permeation through Pd Complexes," in Eleventh International Conference on Condensed Matter Nuclear Science (2004), Marseille, France.

Summary:  Transmutations of Ba -> Sm, Cs -> Pr and possibly Sr -> Mo seen in deuterium-loaded Pd/CaO/Pd complexes.  When MgO was substituted for CaO, no positive results were obtained.


A thin layer of Pd, beneath it a layer of CaO and beneath that a substrate of Pd were used.  At 70 C, deuterium was loaded into the Pd complex by subjecting one side to 1 atm D2 and the other side to a vacuum.  The deuterium entered the Pd complex, separated into individual deuterons in the complex and then recombined into D2 on the other side.  Target elements were deposited on the Pd complex using different means.  In more than 60 trials, a Cs -> Pr transmutation was seen, with nearly 100 percent reproducibility.  In three cases, a Sr -> Mo transmutation was seen, with ratios of isotopes of Mo different than found in nature.  The existence of Pr was checked using several methods.  The Cs -> Pr reaction appears to have occurred in a thin surface region.  A positive correlation was seen between the rate of conversion of Cs into Pr and the deuterium flux through the Pd complex.

It was necessary to infer the Sm from the presence of elements of weight 150, and two other possibilities were ruled out.  In the Ba -> Sm transmutations, there was an increase of atomic mass of 12 and atomic number of 6.
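A quick bookkeeping check on these transmutations: each amounts to the addition of some whole number of deuterons, with each deuteron adding 2 to the mass number and 1 to the atomic number.  The specific starting and ending isotopes below (Cs-133 -> Pr-141, Ba-138 -> Sm-150) are my assumption of which isotopes were involved:

```python
# Standard atomic numbers.
Z = {"Cs": 55, "Pr": 59, "Ba": 56, "Sm": 62, "Sr": 38, "Mo": 42}

def deuterons_needed(elem_a, mass_a, elem_b, mass_b):
    """Return the deuteron count if the (A, Z) changes match whole deuterons."""
    dZ = Z[elem_b] - Z[elem_a]
    dA = mass_b - mass_a
    return dZ if dA == 2 * dZ else None   # each deuteron adds (A=2, Z=1)

print(deuterons_needed("Cs", 133, "Pr", 141))  # 4
print(deuterons_needed("Ba", 138, "Sm", 150))  # 6
```

Both reported transmutations are consistent with the addition of whole deuterons (four and six respectively), which matches the mass-12, charge-6 increase noted above for Ba -> Sm.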

How important are unnatural ratios of isotopes? What is a Mössbauer isotope?  Assuming the data are accurate, how impressive would these results be to a specialist in the field?

, thread 1

The context of this thread was one of Andrea Rossi's experiments.  There was a great deal of skepticism.  A recurring theme was that without a paper to be found in a respected journal, it's likely that cold fusion is no more than a fantasy.  There was an interesting discussion of muon-catalyzed fusion, but the discussion seemed to conflate muons with heavy electrons (although I may have misunderstood).

One of the participants was a physicist who had sat in on a 2003 colloquium of mostly Russian scientists on the topic of cold fusion.  They were describing some interesting results, but he took it all as an indication that they were essentially incompetent and that the sciences were suffering for want of capable people.

There were few calculations concerning energy and none concerning transmutations.  Overall the thread dealt more with the scientific process in general (in an indirect way) than the specific claims relating to LENR.

Tuesday, December 20, 2011

The NASA slides

  1. Dennis Bushnell, "NASA and LENR,"
  2. Gustave Fralick et al., "LENR at GRC,"
The first set of slides was prepared by Dennis Bushnell, chief scientist at NASA Langley Research Center, and provided to Steven Krivit on November 25, 2011, by way of a FOIA request.  Bushnell's slides are the work of a man who is convinced that LENR is real.  His list of applications is fairly fanciful and raises the question of whether he might be a little credulous.  He suggests that since 2006 the LENR theories have begun to favor weak force interactions over fusion proper.  He mentions in passing a work by Zawodny et al.

The second set of slides are by three individuals at NASA Glenn Research Center.  The slides describe several experiments.
  • One that was carried out in 1989, Fralick, Decker and Blue (1989) NASA TM-102430, presumably at NASA, used a Johnson Matthey HP Series palladium membrane hydrogen purifier.  They saw no neutrons, and a 15 C temperature increase when deuterium was used and no increase when hydrogen was used.
  • Another, J. Niedra, I. Myers, G. Fralick and R. Baldwin (1996), NASA TM-107167, looked at an H2O-Ni-K2CO3 system, using an inactive cell as a control.  This experiment was negative.
  • A third experiment looked at thin palladium films.  Craters were found in D2O and none were seen in H2O.  John Wrbanek, Gustave Fralick, Susan Wrbanek, & Nancy Hall, “Investigating Sonoluminescence as a Means of Energy Harvesting,” Chapter 19, Frontiers of Propulsion Science, Millis & Davis (eds), AIAA, pp. 605-637, 2009.
The authors of the second set of slides were involved in some of the experiments the slides describe.  After 1989, LENR was studied primarily at Navy, DARPA and various university labs (not NASA).  Work at NASA started back up in 2009.  There was apparently a positive finding in a 2009 experiment, although this is not apparent from the graphs in the slides.  Publications mentioned include ones by Parmenter and Lamb; Chubb and Chubb; Maly, Vavra and Mills; Widom and Larsen; Hora and Miley; and Kim.  There may be a proof of concept by Mounir Ibrahim, a professor at Ohio State University, using a Stirling engine.  There is a set of full references at the end of the slides.

Sunday, December 18, 2011

Interesting links (2)

Following are some interesting links found in the Oil Drum thread examined in a previous post, included here for later reference.

  • Y.A. Baurov et al., Experimental investigation of changes in beta-decay count rate of radioactive elements. Phys. At. Nucl. 70(11), 1825–1835 (2007)
  • Kim, Y., "Generalized Theory of Bose-Einstein Condensation Nuclear Fusion for Hydrogen-Metal System," Purdue Nuclear and Many Body Theory Group (PNMBTG) Preprint PNMBTG-6-2011 (June 2011).  Available at
  • J.S. Brown, "H-H dipole interactions in fcc metals," arXiv:cond-mat/0703715v4 [cond-mat.mes-hall].  Available here: and (the latter is a cold fusion source). 
  • Campari et al., "Surface Analysis of hydrogen loaded nickel alloys," on
  • Vargas, P. and N.E. Christiansen, "Band-structure calculations for Ni, Ni4H, Ni4H2, Ni4H3, and NiH," Physical Review B, February 1, 1987.  Available at
  • Sargoytchev, S. "Theoretical Feasibility of Cold Fusion According to the BSM Supergravitation Unified Theory," December 14, 2011,
  • Krivit, on McKubre's results,
  • Robert Mockan,
  • Aleklett, "Rossi energy catalyst – a big hoax or new physics?"  Blog post with comments.  Aleklett is a professor at Uppsala University, Sweden.
  • NyTeknik article on Rossi's device:
  •, including a discussion of beta decay
  •, concerning surface plasmons.
  • Rossi's Italian patent:
  •, concerning the Italian patent.
  •, one man's attempt to investigate the LENR claims and counterclaims.  See also
  •, a bibliographic reference for a paper on laser-driven fusion.
  • and, NASA slides concerning a positive LENR experiment (graphs look funny, though).
  • NASA patent,
  • Francesco Piantelli's patent,  Dietmar's description of Piantelli,
  • The NASA slides (1),
  • The NASA slides (2),
Objections that were raised
  • Widom-Larsen: inverse beta decay is endothermic, and there is a 780 keV threshold that must be overcome.
  • Widom-Larsen: inverse beta decay at 780 keV lasts on the order of ys, where nuclear phenomena occur on the order of fs.
  • Widom-Larsen: its prediction of He-3 runs counter to the He-4 ash that is actually observed.
  • Widom-Larsen: it depends on a surface oscillation mode that cannot exist.
  • LENR: very noisy data.
  • E-Cat: calorimetry results based on outgoing steam.
  • E-Cat: fraudulent demonstration, carried out with a fission rather than a fusion (or chemical) reaction.
  • E-Cat: the reported energy density would destroy the device.  (The reports violate conservation of energy.)  One poster says this objection is due to a mistaken assumption of perfect efficiency.
Links to industry
  • Leonardo Corporation, Rossi's company:
  • Ampenergo, which will receive royalties for E-Cat sales in the Americas,
  • Defkalion Green Technologies, Rossi's rivals in Greece,
  • One poster reports that nickel ore is found in mainland Canada, Australia, New Caledonia, Cuba, Indonesia and Greenland, with the largest deposits in mainland Canada.

The Oil Drum, thread 1

Thread at The Oil Drum,

This thread is a fairly lengthy discussion, and it covers a wide range of topics, including specific calculations of isotope ratios, various scams that have been perpetrated in the past, the energy required or released by different reactions, the relative merits of some of the explanations that have been offered for LENR (including Widom-Larsen), and the economic implications if Rossi's device were to turn out to be legitimate.  One of the participants was a contemporary of Fleischmann and knew him personally, and many of them had attempted to carry out their own Pd/D replications after the 1989 announcement.

There was an interesting discussion of whether H+Ni -> Cu-63 is endothermic or exothermic.  The conclusion was that it is probably significantly exothermic.  An important point related to the atomic mass of copper-63, which was subsequently the topic of a question on physics.SE.  Two different numbers have been given for the mass of copper-63, one of which would make the reaction endothermic and the other exothermic.  The heavier mass appears to be in error.  The participants calculated the mass decrease to be 0.0061 GeV/c^2, or about 6 MeV, which would be a significant amount of energy released into the system.  A poster mentioned that the ratio of nickel-62 to nickel-64 is important, and that Rossi claimed to be using enriched nickel.  A point mentioned in passing was that the rates of decay of some elements are not always constant.  In the course of the discussion someone made use of Wolfram Alpha to carry out a computation of the resulting mass of a reaction.
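To check the thread's ~6 MeV figure myself, here is the Q-value of p + Ni-62 -> Cu-63 computed from standard atomic mass tables:

```python
# Atomic masses in unified mass units (u), from standard tables.
U_TO_MEV = 931.494            # 1 u in MeV/c^2
M_H1 = 1.007825               # hydrogen-1 atom (proton plus electron)
M_NI62 = 61.928345
M_CU63 = 62.929597

mass_defect_u = (M_H1 + M_NI62) - M_CU63
q_value_mev = mass_defect_u * U_TO_MEV
print(f"Q = {q_value_mev:.2f} MeV")   # positive, so exothermic
```

This comes out to about 6.1 MeV released, in line with the thread's conclusion; using the erroneous heavier copper-63 mass instead is what flips the sign and makes the reaction look endothermic.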

One of the participants researched and taught quantum physics at Oxford for two decades.  He had met Focardi and believed that the chances that Focardi and Rossi were perpetrating a scam were small, whatever else could be said of the E-Cat.  He believed that there might be something actually happening in the device.  This is his summary of the key detail of a paper by J.S. Brown that he cites in support of a possible fusion reaction:
The author doesn't actually make the key point very clear, but the reason why he predicts fusion is because in the near zone, the dipolar attraction pretty much cancels the monopole repulsion.  You are correct that this is a negligible effect and you still have one H in each interstitial cell.  There is nothing to see from a classical chemical perspective.  But, crucially the (classically non-existent) tails of the quantum gaussians will run into each other without the rapid attenuation you get normally due to the e^2/r monople repulsion.  Geddit ? 
Various examples of bad science were discussed at different points:  Blacklight Power, Terawatt Research, Steorn's Orbo Technology, Scalar Waves, Deflagration Guns.  One poster compared the consistency of Focardi's explanation since 1994 to that of someone offering a consistent prediction over time for the coming of the Rapture.  A link was given to a 2009 interview of Gary Taubes, who has written a book on the premise that cold fusion is an example of bad science.

There were some interesting back-of-the-envelope calculations on the economics of Ni-H power production if one proceeded from the figures reported by Rossi and Focardi.  One participant said that 2.5 g of nickel would yield 250 kWh, enough to cover the daily needs of someone living a Western lifestyle, and that providing such power in mass quantities would not significantly diminish the supply of nickel over time.

The overall tone of the thread was balanced.  Some people were clearly skeptical, while others were less so.  One of the more skeptical participants had this to say about the 1989 Pons and Fleischmann experiments early on in the thread:
As it turned out, there was no such effect and no actual fusion.  Fleischmann and Pons had just discovered some kind of weird chemical reaction that made it looked like fusion was going on, and their continuing attempts to promote it just amounted to self-delusion.
I can appreciate that people will differ on the 1989 results, but I find this level of confidence that Fleischmann and Pons were wrong a little difficult to understand.

Kowalski, "Rossi's reactors—fiction or reality?"

L. Kowalski,  "Rossi's reactors—fiction or reality?," March 18, 2011 (draft note, to be submitted to a publication).

Summary: The levels of isotopes reported in connection with Rossi's 12 kW E-Cat do not agree with those that would be expected to result from the fusion of hydrogen and nickel.


Kowalski comments on the reported results of two demonstrations of the 12 kW E-Cat device, one on January 14, 2011, at the University of Bologna, and another on February 10-11 (also at Bologna?).  He raises these issues:
  1. The energy of the protons is too low to fuse with nickel.
  2. In a thought experiment where there was no Coulomb repulsion, the reactions that would occur between protons and nickel would result in unstable isotopes of copper, and after six months they would have decayed into two isotopes of nickel.  There would be no abundance of copper.
  3. Also, the isotopic composition of nickel would change drastically in order to generate 12 kW over six months (and presumably it has not in Rossi's device).
  4. There was no radiation above the cosmic background levels, but it would be expected for the types of fusion reaction considered.
Kowalski sees a path to stable copper isotopes via intermediate decays but finds it inconsistent with the natural ratios of the relevant nickel isotopes.
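A rough sense of scale for point 3: assuming ~6 MeV released per transmuted nickel atom (the figure discussed elsewhere in these notes, and an assumption here), how much nickel would have to transmute to sustain 12 kW for six months?

```python
AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13
NI_MOLAR_MASS_G = 58.7        # natural nickel, g/mol
MEV_PER_ATOM = 6.0            # assumed energy release per transmutation

power_w = 12e3
seconds = 6 * 30 * 86400      # six 30-day months
energy_j = power_w * seconds

atoms = energy_j / (MEV_PER_ATOM * MEV_TO_J)
grams = atoms / AVOGADRO * NI_MOLAR_MASS_G
print(f"~{grams:.0f} g of nickel transmuted")
```

On these assumptions the answer is on the order of 20 g, a substantial fraction of a small fuel charge, which is why Kowalski expects the isotopic composition of the nickel to shift noticeably.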

Kowalski's reasoning is largely from theory to empirical evidence rather than being about the specific procedures that were used to obtain the evidence.  His note essentially says, "Rossi's device is not doing what we would expect it to do."  The reader is left to conclude that some huge error is taking place in the report on the isotopes, or possibly worse, although Kowalski does not offer this suggestion.  Kowalski allows in an addendum that only independently performed experiments can confirm or refute the question of whether Rossi has built a new kind of reactor.

I like Kowalski's analysis because, even though I'm not yet able to form an opinion on some of the critical assumptions he is making, such as the precise reactions that would need to be taking place in Rossi's device, Kowalski is dealing with concrete numbers concerning nickel and copper isotopes.  He also raises some interesting questions in the first part of his note.

Additional details that were mentioned:  The energy of the protons in Rossi's demonstrations is thought to be around 0.04 eV (how was this number derived, and what assumptions go into it?).  Others who have studied fusion of protons with nickel were working with protons at much higher energies, such as 14.3 MeV.
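As a sanity check on the 0.04 eV figure: thermal kinetic energies scale with k_B·T, so 0.04 eV corresponds to a modest temperature, which may be the assumption behind the number:

```python
K_B_EV = 8.617e-5             # Boltzmann constant, eV/K

print(f"kT at 300 K = {K_B_EV * 300:.3f} eV")       # roughly 0.026 eV
print(f"0.04 eV is kT at T = {0.04 / K_B_EV:.0f} K")
```

So 0.04 eV is roughly the thermal energy at a few hundred kelvin, some twenty orders of magnitude in cross-section away from the MeV-scale beams used in conventional proton-nickel fusion studies.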

Interesting links

Following are some links relating to LENR which I'm taking down for later reference.

Longer threads and refutations
  • "Rossi's reactors—reality or fiction?",
  • Comments to
  • Iwamura et al., "Observation of Nuclear Transmutation Reactions Induced by D2 Gas Permeation Through Pd Complexes," in Eleventh International Conference on Condensed Matter Nuclear Science, 2004.  Available at  Provides evidence of transmutations.
  • Gai et al., "Upper limits on neutron and γ-ray emission from cold fusion," Nature 340 (1989) 29–34. Available at
  • "Forbidden Science" (book).
  • Storms, "Status of cold fusion (2010)", Naturwissenschaften (online) 97 (10): 861–881.
  •, mentioned by Steven Krivit in connection with the video below.
  • Szpak, Mossier-Boss, and Gordon, "Polarized D+/Pd–D2O system: Hot spots and mini–explosions," in Tenth International Conference on Cold Fusion, 2003.
  • Physics ArXiv Blog, "How to Transmute Elements with Laser Light," February 24, 2011.  Discusses this paper:
  • Dufour et al., "Synthesis Of A Copper Like Compound From Nickel And Hydrogen And Of A Chromium Like Compound From Calcium And Deuterium," in Proceedings of 8th International Workshop on Anomalies in Hydrogen Loaded Metals, 2007.
  • "Band Structure, Spin Splitting, and Spin-Wave Effective Mass in Nickel," Phys. Rev. B 1, 305–314 (1970).  Talks about heavy electrons?
  • Focardi and Rossi, "A new energy source from nuclear fusion," March 22, 2010.  Available here:
  • Lochak and Urutskoev, "Low-Energy Nuclear Reactions and the Leptonic Monopole," Condensed Matter Nuclear Science (book).  Paper available here:
Evidence for
  • Transmutations.
  • Electrodes in a cell hotter than the electrolyte, contrary to the usual situation. (Steven Krivit, video.)
Evidence against
  • LENR: Isotopes in purported transmutations (e.g., in Rossi's device) are at natural levels.  (Ron Maimon.)
  • LENR: Fusion of H+Ni -> Cu is endothermic. (Maimon says this is incorrect.)
  • Rossi's 1 MW demonstration: faulty calculations.
  • Video, "2005 - U.S. Navy SPAWAR San Diego LENR (Cold Fusion) Research Lab: Infrared Measurements", (Steven Krivit.)
  • Rossi's US patent application:
  • Radio interview with Focardi:
  • What a different 12 kW generator looks like in action:

Saturday, December 17, 2011

Some basics of nuclear physics

A nice primer on the basics of nuclear science can be found here, on Lawrence Berkeley National Laboratory's Web site.  Wikibooks also has a helpful article on nuclear structure. The following discussion is an attempt to summarize as best I can some of the important points I picked up from reading these and related articles.

The electromagnetic force governs the attraction and repulsion of charged particles, e.g., between electrons and protons and among protons themselves, but also interactions involving more exotic particles, such as a muon with a proton.  The strong force governs interactions between nucleons—neutrons and protons.  The carrier of the electromagnetic force can be thought of as a virtual photon.  The carrier of the strong force between nucleons is a pion.  Because pions are massive, the strong force is effective only over short distances, and because photons are massless, the electromagnetic force operates over long distances.

The strong force holds nucleons together in a nucleus, but the electromagnetic force repels protons from one another.  In a stable nucleus with small atomic mass, the ratio of neutrons to protons is around 1.  In a stable nucleus with larger atomic mass, the ratio moves towards 1.5.  The nuclear stability curve plots out this ratio for the stable isotopes.  The larger the nucleus, the more neutrons are needed to counteract the electromagnetic force and keep the nucleus from fissioning into smaller nuclei.  Elements up to iron can be generated by ordinary stellar fusion, which is exothermic in these instances.  Elements of atomic number greater than that of iron (26) are created in supernovae, where there is sufficient energy to drive the fusion of heavier elements.  At some point past iron along the curve of binding energies for the various elements, fission becomes exothermic rather than endothermic, and fusion becomes endothermic rather than exothermic.  (This is relevant to the discussion of the creation of copper from the fusion of hydrogen and nickel.)
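The shape of that binding energy curve can be sketched numerically.  The following is my own back-of-the-envelope script, using standard textbook coefficients for the semi-empirical (Weizsäcker) mass formula, not anything taken from the articles above; the formula is a rough fit and is known to be poor for very light nuclei:

```python
# Semi-empirical (Weizsacker) mass formula: approximate binding energy in
# MeV for a nucleus with mass number A and atomic number Z.  Coefficients
# are common textbook values.
def binding_energy(a, z):
    n = a - z
    volume    = 15.8 * a
    surface   = 18.3 * a ** (2 / 3)
    coulomb   = 0.714 * z * (z - 1) / a ** (1 / 3)
    asymmetry = 23.2 * (n - z) ** 2 / a
    if a % 2 == 1:
        pairing = 0.0
    elif z % 2 == 0:
        pairing = 12.0 / a ** 0.5   # even-even nuclei are extra bound
    else:
        pairing = -12.0 / a ** 0.5  # odd-odd nuclei are less bound
    return volume - surface - coulomb - asymmetry + pairing

# Binding energy per nucleon peaks near iron (close to 8.8 MeV for Fe-56)
# and falls off for heavy nuclei, which is why fission of heavy elements
# and fusion of light elements both release energy.
for a, z, label in [(4, 2, "He-4"), (56, 26, "Fe-56"), (238, 92, "U-238")]:
    print(label, round(binding_energy(a, z) / a, 2))
```

Running this shows the per-nucleon binding energy rising from helium to iron and then declining toward uranium, which is the whole content of the curve described above.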

There are three main types of radioactive decay, arising from different transitions that can happen within a nucleus.  Alpha decay involves the emission of a helium nucleus.  During alpha decay, transmutation occurs: both the atomic mass and the atomic number (number of protons) decrease.  Beta decay is caused by the weak force and involves the decay of a neutron into a proton, an electron and an electron antineutrino.  Beta decay also results in transmutation, yielding an increase in the atomic number.  For example, carbon-14, an isotope of carbon with a half-life of thousands of years, beta decays to become nitrogen-14; the newly created proton remains in the nucleus.  A third type of radioactive decay is gamma decay, which involves the emission of a gamma ray photon.  It occurs when a nucleus transitions from a higher energy state to a lower energy state.  One explanation I read is that the neutrons and protons move into a more stable configuration.  Importantly, radioactive decay pertains to the nucleus and not the orbiting electrons.  If I understand what is going on, in beta decay, for example, the electron comes from the nucleus itself and not from one of the orbitals.

The mass of a neutron is slightly greater than that of a proton.  So when beta decay occurs and a neutron changes into a proton, there is a release of energy, which is carried off by the light electron and the nearly massless electron antineutrino.  Energy at these levels is measured in electron volts (eV) and mega-electron volts (MeV).  Energies in the realm of electron volts are typical of chemical reactions, and energies in the realm of MeV are typical of nuclear reactions.  An electron volt is the energy gained by an electron moving through a potential difference of one volt.  In many contexts energy is used almost interchangeably with mass.  Einstein's famous relation E = mc^2 governs the translation between energy and mass when the actual unit of measure is of interest.
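As a worked example of treating mass differences as energy: the energy released in the beta decay of a free neutron follows directly from the rest masses just discussed.  The mass values below are standard published figures, quoted in MeV/c^2 so that differences read off directly as energies:

```python
# Q-value of free-neutron beta decay, n -> p + e + antineutrino, computed
# from rest masses expressed in MeV/c^2 (standard published values).
M_NEUTRON  = 939.565
M_PROTON   = 938.272
M_ELECTRON = 0.511

q_value = M_NEUTRON - M_PROTON - M_ELECTRON
print(round(q_value, 3))  # about 0.782 MeV, shared by electron and antineutrino
```

The result, just under 0.8 MeV, sits squarely in the nuclear (MeV) regime rather than the chemical (eV) one.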

Inverse beta decay is the "decay" of a proton into a neutron, with the emission of a positron and an electron neutrino.  In contrast to beta decay, inverse beta decay requires energy rather than releasing it.  Inverse beta decay occurs through the electron capture process, or K-capture.

Binding energy is the energy that would be required to pull a nucleus apart into its constituent nucleons.  In a decay where the daughter nuclei have greater total binding energy than the mother nucleus (i.e., they are more tightly bound), the difference is released as kinetic energy.  Where the daughter nuclei would have less total binding energy than the mother, the transition cannot happen spontaneously; energy would have to be supplied from outside.
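The same mass-difference bookkeeping bears on the nickel-to-copper question raised earlier.  As a rough check, here is the Q-value for a proton fusing with Ni-58; the choice of Ni-58 (the most abundant nickel isotope) is my assumption, since Rossi's fuel composition is not disclosed, and the masses are standard published atomic masses:

```python
# Q-value of Ni-58 + p -> Cu-59, from atomic masses in unified mass
# units u; 1 u = 931.494 MeV/c^2.  Isotope choice (Ni-58) is my assumption.
U_TO_MEV = 931.494
M_NI58 = 57.935343
M_H1   = 1.007825   # atomic mass of hydrogen (proton plus electron)
M_CU59 = 58.939498

q_value = (M_NI58 + M_H1 - M_CU59) * U_TO_MEV
print(round(q_value, 2))  # about 3.42 MeV: positive, i.e. exothermic
```

If this arithmetic is right, the reaction releases a few MeV, which is consistent with Maimon's remark, noted in the links above, that the claim of endothermicity for H + Ni -> Cu is incorrect.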

The physics.SE posts

I just came across this post on a site I hadn't noticed up to now:
I'm quite familiar with StackOverflow, a predecessor to this site and an essential resource for software developers, and I suspect that this physics site will be similarly helpful here.

The replies to the above post make for interesting reading.  The original poster asked what went wrong in the experiments that underlay claims of cold fusion. Following are some of the main points that people responded with:
  1. Any sort of fusion will require that nuclei overcome Coulomb repulsion.
  2. Calorimetry of the kind Pons and Fleischmann were attempting is difficult to do right, and they were subtracting a large number from a large number.
  3. The results could not be reproduced by other scientists.
One of the longer replies was from Ron Maimon, a member of the site who is more sympathetic to the validity of the LENR experimental results.  He does not have a PhD in physics and is self-taught, and his reply only got one vote.  But his reputation is in the top 0.3 percent, and he seems to have some knowledge of the topic (a lot of which mirrors details in Mallove's and Beaudette's books).  He referred to the process described in Widom-Larsen as "inverse beta decay."  He thought this particular explanation was erroneous for several reasons:
Weak Force Neutron production:  The Widom Larson theory claims that it is possible for a proton and an electron to do inverse beta decay on the surface of a metal, where there are large local electric fields.  This is preposterous, because of the MeV difference in proton and neutron mass.  It requires millions of volts to accelerate an electron to enough energy to be able to do an inverse beta-decay, and such energies are not available on the surface of a metal.  Further, this theory will predict transmutations of plus/minus one mass unit predominanatly, which is not observed, and does not explain how a deuteron can absorb an electron.
While Maimon's explanation was interesting, his review of the existing theories, including Widom-Larsen, seems a little cavalier.  This article about work at Brookhaven National Laboratory talks about the formation of heavy electrons under extreme conditions, which act as though they were hundreds or thousands of times heavier than normal electrons.  My question is whether this detail is relevant to the point Maimon was making; what if, for example, the "coherence domains" Widom and Larsen talk about provide the necessary energy to boost the electrons to an effective mass that will trigger inverse beta decay?  And what does Maimon mean, exactly, when he says that "such energies are not available on the surface of a metal"?  I would also like to know more about the objection concerning transmutations.  Presumably the objection about deuterons would not apply to an Ni-H system.

The most popular reply (nineteen votes) included a reference to a 1989 article in Nature by Koonin and Nauenberg discussing how much the electron mass would need to increase for cold fusion to occur at rates corresponding to claimed observations.  What is the connection to electron mass when we're talking not about inverse beta decay but, instead, normal fusion?  Or maybe they were referring to something similar to muon-catalyzed fusion?

Tuesday, December 13, 2011

Extraordinary claims

What are we looking for in seeking to come to some kind of verdict on the claims that have been made over the past twenty years relating to purported LENR phenomena?  What are the nature and implications of such a verdict? And what do we mean by LENR, exactly?

Anyone who takes even a little interest in the field will no doubt have his or her own criteria and standards for adjudicating the claims.  A physicist or chemist will have a demanding set of requirements and will set a very high bar before allowing something onto the record as evidence.  A child will ask his or her parents what they think about it.  And an adult who is not an expert in some relevant field will have yet another set of criteria, particular to him or her.  The requirements of the scientific community will obviously differ from those of the public.  As Charles Beaudette points out in his book Excess Heat, science is not a democratic endeavor, to be sorted out through a poll of various scientists, let alone of the general public.  Researchers try to remain aloof from fads, politics and public opinion and, to a certain extent, from the opinions of one another.  This kind of professional independence is crucial to the success of scientific activity.

But research is a profoundly human affair, as Kuhn and Feyerabend have shown.  It has its own fads and politics, and occasionally there is a shakeup, where something that was discounted or overlooked is seen in an entirely new light, and new luminaries arise.  Not only does the public have an interest in scientific activity, then; it also brings its own side to its relationship with the scientific community.  Ideally that relationship would not be allowed to devolve into a paternalistic one, like that of a parent to a child.  The public should not uncritically accept criteria for judging the merit of a claim handed down to it by experts.  In allocating public money, members of funding institutions must arrive at criteria of their own, suitable to the purpose at hand, for determining whether a new area of research is promising and should receive money.  And while academics must remain free to pursue knowledge for its own sake, whether or not it is encouraged by public institutions or shows much promise of yielding practical results in the near or medium term, public institutions in their turn should give the general welfare a central place in arriving at decisions.

A refrain sometimes heard among skeptics of the LENR research is that "extraordinary claims require extraordinary evidence," a phrase that goes back to Carl Sagan, and, before him, Marcello Truzzi.  What this seems to mean in such a context is that it doesn't matter how much incremental evidence has been gathered in support of the purported effects; in order for such individuals to be convinced, there must be evidence that is an order of magnitude or more above statistical error, something seen that is undeniable.  I take no position at this point as to whether such evidence has already been provided and is simply being ignored.  And perhaps we can allow that such a standard of evidence would be a proper one for researchers in many cases.  Although I must admit that the "extraordinary claims" phrase begins to take on the sound of a kind of dogma and its implications become less and less clear after hearing it several times, I can appreciate where the LENR skeptics are coming from and why they would be inclined to adopt such a standard for themselves.  There are many instances in the history of science where there has been a report of some remarkable breakthrough that attracted a lot of attention but which later turned out to be unfounded.

Be that as it may, the standard of evidence that I propose for the public and for funding agencies is a very different one.  Rather than requiring extraordinary, unimpeachable evidence, I propose that what is needed instead for evaluation of the LENR claims by the general public is a simple, straightforward, prima facie case for their validity in some form or another.  If, after looking at the evidence, one decides that there is good reason to believe that something unusual and promising is going on, then this should be quite sufficient as far as funding and related decisions are concerned.  And in the calculations that are used to build or undermine such a prima facie case, it will not be necessary to bring to bear the full arsenal of statistical analysis; well-conceived back-of-the-envelope arithmetic will be fine.  Those among the public who did well in their college physics and math classes will be more than adequately prepared to assess whether such a prima facie case exists.

And what is it that we are looking for, precisely, when we talk about "low energy nuclear reactions"? How we answer this question is important.  Following are some possibilities:
  1. Significant levels of fusion, in which two atomic nuclei overcome the Coulomb barrier to fuse together to form a new element and release gamma rays, neutrons and helium -- sometimes called "nuclear ash."
  2. The formation of so-called heavy electrons, which give rise to inverse beta decay and then a cascade of unstable isotopes followed by their decay into stable isotopes.
  3. Energy emerging from the system above and beyond that put into it at levels that cannot be accounted for by a chemical reaction of some kind.
  4. A process that can be repeated and verified in other labs.
For which kind of evidence should we be looking in determining whether there is a prima facie case?  Items 1 through 4 are not necessarily mutually exclusive, and I think we should be looking for all four of them.  If for some reason we must choose between them, however, following Beaudette we should not hesitate to adopt item 3 as our fixed point.  In this regard, I argue that for our purposes it matters not in the slightest whether sufficient evidence is found in support of items 1, 2 or 4.  If we find enough promising evidence for item 3, we should be confident at that point that we will have established a prima facie case for LENR, against all contrary claims not directly relating to item 3, even those made by people with far greater qualifications in physics or chemistry than we have.  For even the most qualified of individuals can become fixated on a detail of derivative importance which he or she has mistaken for the primary touchstone when a phenomenon is poorly understood.  And we should be like the honey badger in approaching vocal criticisms of this methodological decision.  In insisting on item 3 like this, we are not doing anything particularly controversial or radical; we are simply keeping attention focused on the principle of conservation of energy.

Regardless of what we find in connection with item 3, it is possible that we will come across solid evidence of items 1, 2 and 4 as well.  Clear results supporting 1 and 2 would also be sufficient to establish a prima facie case.  (Item 4 isn't really of the same type as the others, but it seemed like a good detail to mention.)

An initial reading list

Following are some notable articles and books on LENR mentioned in Charles Beaudette's Excess Heat that I hope to read:
  • Fleischmann, Martin, Stanley Pons, Marvin Hawkins, and R.J. Hoffman, "Measurement of Gamma-Rays from Cold Fusion: [two items]" (Nature, 339, June 29, 1989), p. 667.
  • Fleischmann, Martin, and Stanley Pons, "Our Calorimetric Measurements of the Pd/D System: Fact and Fiction" (Fusion Technology, 17, 669, July 1990).
  • Wilson, R.H., et al., "Analysis of Experiments on the Calorimetry of LiOD-D2O Electrochemical Cells" (Journal of Electroanalytical Chemistry, vol. 332, 1992), pp. 1-31.
  • Fleischmann, Martin, and Stanley Pons, "Some Comments on the Paper Analysis of Experiments on Calorimetry of LiOD/D2O Electrochemical Cells, R.H. Wilson, et al." (Journal of Electroanalytical Chemistry, vol. 332, 1992), p. 276.
  • Huizenga, John R., Cold Fusion: Scientific Fiasco of the Century, 2nd ed. (New York: Oxford University Press, 1993). First published in 1992.
  • Nature, Editorial, "Farewell (Not Fond) to Cold Fusion" (Nature, March 29, 1990), p. 365.
  • Taubes, Gary, Bad Science: The Short Life and Weird Times of Cold Fusion (New York: Random House, 1993).
I recently read these publications, which are also good to mention here:
  • Mallove, Eugene F., Fire From Ice: Searching for the Truth Behind the Cold Fusion Furor (Infinite Energy Press, 1999).  First published in 1991.
  • Hagelstein, Peter L. et al., "New Physical Effects in Metal Deuterides" (Washington: US Department of Energy, 2004).  Available here:
Eugene Mallove's Fire From Ice and Charles Beaudette's Excess Heat, which are great resources, surely fall under the category of alternative literature.  It is suggestive of how much the field of cold fusion has fallen into disrepute that these books are self-published (as in Beaudette's case) or put out by an alternative energy publisher (as in Mallove's).  I'm just starting Steven Krivit and Nadine Winocur's The Rebirth of Cold Fusion, a third history in the genre.

Monday, December 12, 2011

My own hunch

In the previous post I wrote about my great surprise at finding that claims of cold fusion, or "low energy nuclear reactions," as the field is now generally called, live on.  Since then I've done a little reading and have begun to become acquainted with some of the history of the last twenty years.  What I have learned in the past few weeks is far outweighed by the work that remains in order for us to properly assess whether the LENR claims amount to anything.  But already I have begun to get a sense of where my own sympathies lie.  In this regard I am with Arthur C. Clarke, who, in the preface to Krivit and Winocur's Rebirth of Cold Fusion, wrote, "I cannot quite believe that hundreds of highly credentialed scientists working at laboratories around the world can all be deluding themselves for years."

My sense is that the skeptics overreacted in 1989 to some errors that were made and failed to give due consideration to some of the other evidence that had been presented, and that, since then, they have continued to speak against the LENR research with a confidence that is not justified by the evidentiary standards that the physical sciences impose on other types of claim that are made from time to time.  I am neither an electrochemist nor a physicist—I work with software.  But I am starting to be led by a purely formal analysis of their reasoning to the conclusion that they have been systematically talking past the important issues.  In any other field of knowledge one would probably do well to defer to the experts.  On this particular topic, however, one becomes quite reluctant to give committed critics among physicists (and they might be in the majority) much deference.  For reasons that are unclear, they do not appear to have approached the matter with the attention, care and objectivity that it requires.

In the breach, an option that is available is to make a best effort at becoming acquainted with the details and reasoning of the experiments.  It's not an approach that is assured to lead to greater clarity, but we can at least give it a try.

Sunday, December 11, 2011

Cold fusion debunked?

Up until October of this year, I thought the matter of cold fusion was settled.  There had been a big controversy over it in 1989, and then the whole thing died out.  Martin Fleischmann and Stanley Pons, two electrochemists at the University of Utah, had been precipitate in their announcement at a press conference that they had discovered a type of fusion that took place at room temperature.  Their claims were debunked, and science moved on to other things.

So I was surprised to see a tweet go over Twitter not too long ago about a demonstration that took place on October 28, 2011, of an E-Cat, or Energy Catalyzer, a "cold fusion" device built by Andrea Rossi, an inventor from Italy.  There was a link to a video that purported to show a 1 MW version of the device that he had set up to demonstrate the technology to a potential buyer:

I was fascinated that cold fusion would be in the news again.  Rossi was claiming that his E-Cat was delivering more power than was being put into it.  And, irritatingly, he was not cooperating with observers who wanted to take more careful measurements of the device than had been previously carried out in order to verify his claims.  He seemed to be willfully thwarting their efforts.

There was a group of people commenting on the demonstration, all of whom appeared to be dilettantes or associated with alternative energy Web sites.  Apparently the demonstration was one of several that had been carried out during 2011.  Rough calculations by one of those present indicated that the device had put out not 1 MW but 479 kW of power beyond that used to keep the device going.

Initially I assumed that this was all a relatively insular affair.   There was the cagey inventor with his cold fusion device and several fringe Web sites that were pursuing the story with a mixture of hope and skepticism.  I probably would have set the matter aside entirely but for a 2011 Wired article on the E-Cat.  After reading the article I started looking into the subject of cold fusion and found another Wired article, from 1998, that went into some detail on the topic.  The 1998 article described a small group of people who had continued work on cold fusion long after it had been abandoned by mainstream science.  These people were not all on the margins of their professions.  There was an electrochemist at SRI International, a research company associated with Stanford; a professor at MIT who had done ground-breaking work on x-ray lasers; and a professor at the University of Illinois who had been editor of a mainstream academic journal called Fusion Technology, among others.  Then there were the various groups in Japan, Italy, Israel, China and France that were working on cold fusion, or "low energy nuclear reactions" (LENR), as it was now being called, and hints that the US military and NASA might be monitoring developments in the field (which later turned out to be the case, at least with respect to NASA).  What was behind all this interest?  Had cold fusion in fact been debunked, as I had understood up to that point, or was it possible that there might be something to it after all?

Things got weirder when I learned that two of the observers at the October 28, 2011, E-Cat demonstration were with respected organizations.  One was Mats Lewan, editor of NyTeknik, a Swedish technology weekly, and another was later said to have been Peter Svensson, a technology reporter with the Associated Press.  While Lewan was reporting frequently on the E-Cat, Svensson has not yet, as of this writing, filed a report.  When the AP article was not forthcoming, I started feeling antsy, perhaps the feeling a journalist gets when he or she is not getting a full account.  There were missing pieces that I was having difficulty filling in.  Why was the AP withholding the story?  Was it not possible to make a minimal statement of fact about what was seen at the demonstration, even to the effect that nothing could be verified?  (If the AP connection is true, I now suspect they were reluctant to be associated with the story.)

Around this time I started feeling that the topic warranted further investigation.  The question of the legitimacy of Andrea Rossi's E-Cat remains open.  A related but different question is whether the LENR researchers have been on to something during the last twenty years.  Perhaps they have been mistaken all this time, or perhaps they have been conducting real science.  What seems clear is that either some of them are competent and have been gradually uncovering something new (it's not clear exactly what at this point), or they are not unlike proponents of theories about UFOs and are all engaged in what has been called "pathological science."  Given the fairly stark difference between the two situations, I'm hopeful that the question is one that can be resolved, despite my lack of expertise in the relevant science.