A Universe in the very small

The past few years have seen the news peppered with articles about the sub-subatomic: elementary particles, more elementary than protons or neutrons, that make up everything.  These particles can only be studied under extreme conditions, conditions in which energies equivalent to a million billion degrees are dismissed as too weak.

So where do those plain old protons and neutrons that nuclear engineers work with fit in?  What about that big table of isotopes ordered by neutron and proton number?  Are they so well understood and completely figured out that there’s nothing left to learn from these old-fashioned friends?  Unglamorous as protons, neutrons and electrons may seem, it is they that determine chemistry, nuclear energy, and nuclear medicine, at least on earth.

Chart of all nuclei organized by number of protons (y-axis) and number of neutrons (x-axis).  The colors represent decay modes, i.e., the particle emitted when the nucleus decays.

In fact, in these relatively low energy domains (at least 1000 times less energetic than the energies required to investigate sub-atomic particles), there are still many standing paradoxes, some almost a century old.  Many of these are discussed in detail in Norman Cook’s book Models of the Atomic Nucleus (second edition, Springer, 2010).  I want to point out just a few.

Finding a Model

Perhaps the clearest sign that the domain of nuclear physics is not all figured out is the fact that not just several but more than thirty different models are applied to explain different aspects of the nucleus.  They can be broadly classified into three or four families, and all of them have survived to some extent because each explains some part of the experimental data, while not one of them accounts for it all.  Is the nucleus like a liquid, a solid, or a gas?  Or is it akin to the quantum mechanical electron cloud?

For example, the nucleus modeled as a liquid drop has been used, with some corrections, to calculate the binding energy, which determines how much energy a nuclear reaction will release.  The actual values were gathered from experiments from the 1930s through the 1950s, but the reason behind them has never been entirely clear.  The model that most accurately reproduces the binding-energy curve is the liquid drop model.  This model, however, does not account for another very well known characteristic of nuclear reactions.
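To make the liquid drop picture concrete: it leads to the semi-empirical mass formula, in which the binding energy is a sum of volume, surface, Coulomb, asymmetry and pairing terms.  Here is a minimal sketch in Python, using one commonly quoted set of fitted coefficients (exact values vary slightly between published fits):

```python
# Semi-empirical (liquid drop) mass formula -- a sketch, not a precision tool.
# Coefficients in MeV, one common fit; published fits differ in the decimals.
a_V, a_S, a_C, a_A, a_P = 15.8, 18.3, 0.714, 23.2, 12.0

def binding_energy(Z, A):
    """Total binding energy in MeV for a nucleus with Z protons, A nucleons."""
    N = A - Z
    B = (a_V * A                             # volume: each nucleon binds to neighbors
         - a_S * A ** (2 / 3)                # surface: outer nucleons bind less
         - a_C * Z * (Z - 1) / A ** (1 / 3)  # Coulomb repulsion among protons
         - a_A * (A - 2 * Z) ** 2 / A)       # asymmetry: N close to Z is preferred
    if Z % 2 == 0 and N % 2 == 0:            # pairing: even-even nuclei most bound
        B += a_P / A ** 0.5
    elif Z % 2 == 1 and N % 2 == 1:
        B -= a_P / A ** 0.5
    return B

# Iron-56 sits near the peak of the binding-energy curve,
# uranium-238 well past it -- which is why fission releases energy:
print(round(binding_energy(26, 56) / 56, 2))    # roughly 8.8 MeV per nucleon
print(round(binding_energy(92, 238) / 238, 2))  # roughly 7.6 MeV per nucleon
```

The handful of fitted terms reproduces the measured curve remarkably well, which is exactly why the model has survived despite its failures elsewhere.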

Nuclear fission, the splitting of one atom into two or more, almost always occurs asymmetrically.  That is, a large nucleus like uranium almost always breaks into one large and one small piece, not two nearly equal pieces.  The liquid drop and many other models can be massaged into giving asymmetric pieces, but they do not account for the fact that fission occurs primarily asymmetrically.  For this, Cook proposes his crystalline lattice model.  Another curious aspect of the distribution of fission fragments across various nuclei is that the larger fragment is always centered around 132-140 nucleons, while the smaller seems to adjust to maintain an approximately 2:3 proportion.  What is so preferable about this nucleon number?  That remains unexplained.

Fission products (before decay) of uranium-235.  If the fragments were of equal size, this curve would be a single hump centered around 117.  The x-axis is atomic mass units (number of protons plus neutrons). credit: wikipedia user JWB


Fission products (before decay) of uranium-235, uranium-233 and plutonium-239.  Notice that the larger piece centers around 140 atomic mass units for all three. credit: wikipedia user JWB


Here is another basic property, studied since the beginning of nuclear science.  Natural radioactive decay occurs by three main modes: alpha, beta+, and beta-.  Beta- radiation consists of electrons; beta+ of their positively charged counterparts, positrons.  Alpha radiation is composed of helium nuclei: two protons and two neutrons.  A chart of all nuclei and their decay modes makes the natural tendency of these transitions clear.  The black squares are stable nuclei.  Above the line of stable nuclides, atoms emit a positron, and thereby shed a positive charge, becoming the element one lower in the periodic table and moving closer toward stability.  Below the line of stable nuclides, nuclei tend toward the opposite route, losing a negative charge, thereby gaining a positive charge, and becoming the element one higher in the periodic table.  The next most common form of decay is alpha radiation (yellow); this is how uranium decays, for example.  Aside from these, there are a few nuclei which emit protons or neutrons directly, but almost all such nuclei live for less than a millisecond.
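The chart's bookkeeping can be captured in a few lines: each decay mode moves a nucleus by a fixed step in proton number Z and neutron number N.  A small sketch:

```python
# Each decay mode is a fixed (dZ, dN) step on the chart of nuclides.
DECAY = {
    "beta+": (-1, +1),   # proton becomes neutron: Z falls, N rises
    "beta-": (+1, -1),   # neutron becomes proton: Z rises, N falls
    "alpha": (-2, -2),   # helium nucleus carried away: lose 2 of each
}

def decay(Z, N, mode):
    dZ, dN = DECAY[mode]
    return Z + dZ, N + dN

# U-238 (Z=92, N=146) alpha-decays to Th-234 (Z=90, N=144):
print(decay(92, 146, "alpha"))
# Co-60 (Z=27, N=33) beta- decays to Ni-60 (Z=28, N=32), one element up:
print(decay(27, 33, "beta-"))
```

Note that beta decay leaves the total nucleon count A = Z + N unchanged, while alpha decay reduces it by four, which is why decay chains step diagonally down the chart.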

Why would an atom release an entire cluster of two protons and two neutrons rather than a single one of either?  Other related data include the fact that the helium nucleus appears to be smaller than expected for the number of particles it contains, and that many, but not all, nuclei whose mass numbers are multiples of four are especially abundant (He-4, C-12, O-16, Ca-40, etc.).  These anomalies have led to many models of nuclei composed partially or entirely of alpha particles.  Cook proposes a model in which both the core and the outliers are alpha particles.

Approximate abundance of elements in the Solar System. credit: wikipedia user 28bytes


A more recent body of evidence is even more difficult to explain: the processes broadly categorized as low energy nuclear reactions.  These are studied in experiments in which palladium (or nickel-palladium) electrodes are immersed in heavy water (water with deuterium replacing ordinary hydrogen).  An electric current is used to drive deuterium into the palladium lattice until the lattice is saturated, and the elements present are measured before and after.  The results are astonishing.  Beginning with a particular distribution of palladium isotopes, all the distributions change.  Beginning with just four elements (palladium, platinum, heavy hydrogen and oxygen), many more elements emerge: transmutation at low energies.

Changes in palladium isotope abundance due to electrolysis. credit: Mizuno, 1998


Nuclear reaction products reported by Miley and Patterson (1996) using a platinum anode and nickel-palladium cathode. credit: Miley and Patterson, 1996


Some of the elements are palladium or platinum plus a proton and neutron (heavy hydrogen), but others are about half the mass of either, alluding to fission.  No neutrons are detected, as there are in the fission of uranium or plutonium, but heat not accounted for by chemical reactions of the constituents is.  How could a hydrogen nucleus get from the metal lattice into the palladium nucleus?  Neutrons were originally used to transmute elements because, being neutral, they easily penetrate the positively charged nucleus.  But to cause a positively charged nucleus (like heavy hydrogen) to combine with another nucleus, it is generally accepted that a much larger amount of energy, usually provided by an accelerator or very high temperatures, is required.
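A rough, textbook-style estimate shows the scale of the problem.  Treating the two nuclei as touching charged spheres of radius r0·A^(1/3), with r0 ≈ 1.2 femtometers, gives the height of the Coulomb barrier a deuteron would have to overcome to reach a palladium nucleus; the numbers below are order-of-magnitude only:

```python
# Rough Coulomb-barrier estimate: two touching, uniformly charged spheres.
E2 = 1.44   # e^2 / (4 pi eps0), in MeV * fm
R0 = 1.2    # nuclear radius constant, in fm

def coulomb_barrier(Z1, A1, Z2, A2):
    r = R0 * (A1 ** (1 / 3) + A2 ** (1 / 3))  # separation at contact, fm
    return Z1 * Z2 * E2 / r                    # barrier height, MeV

# Deuteron (Z=1, A=2) approaching Pd-106 (Z=46):
print(round(coulomb_barrier(1, 2, 46, 106), 1))  # on the order of 9 MeV
```

Nine million electron volts, against chemical and lattice energies of a few electron volts: that factor of roughly a million is exactly why the reported low-energy transmutations are so hard to explain.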

Could some kind of resonance phenomenon be at play here?  Cook points out that low energy nuclear reactions could be the most fruitful field for exploring the structure of the nucleus.

Other Avenues - Harmony of the World

How would a Kepler or Mendeleev approach this problem were either alive today?

Why choose one model over another, beyond how well the facts fit?  There is at least one other well-known time in history when the models fit the data almost perfectly, and could be adjusted indefinitely to accommodate observational accuracy, and yet something fundamentally wrong about the universe was being assumed.  The case that comes to mind is that of Ptolemaic epicycles.  Claudius Ptolemy had the sun and planets orbiting a stationary Earth and accounted for planets sometimes appearing to move backwards by having each planet travel on a small circle which itself travelled along the larger orbit.  Finding that there were still differences to account for, he moved the planets’ orbits off-center from the Earth.  Since that still wasn’t good enough, he added yet another point, neither the Earth nor the orbital centers, to govern their motions.

When Nicolaus Copernicus developed his sun-centered system, he changed the physical description, allowing the Earth to move, but added even more circles than Ptolemy had.  His model matched the planets’ actual positions better than Ptolemy’s, but how much of that came from putting the Sun at the center, and how much from adding more circles and having more observations?  Kepler’s one-time employer, Tycho Brahe, developed a system in which the sun and moon circled a stationary Earth, with the planets going around the moving sun.  He had fantastic observations, and his model was even better at predicting where the planets would be seen than Copernicus’s.  It was numerically superior even though, by our standards, it was physically wrong.  Better observations, and more adjustments, make for better models.

In this way, any anomaly could have been accounted for, and more circles could always be added if necessary.  But what about reality?  Kepler introduced at least four fundamentally new, universal concepts into astronomy:

1) Kepler introduced physics.  He, not Copernicus, was the first to propose that the sun actually moves the planets, and that motion must therefore be accounted for relative to the sun, not just around it.

2) The universe is not based on uniform motion but instead upon constant change.

3) The parameters determining the orbits of the planets are not arbitrary, but depend on musical necessity.

4) Humans have a unique isomorphism with creation, such that they can come ever closer to knowing the cause of things and act on the basis of that knowledge.  In this way, Kepler is the father of science as we know it today.

An accountant or a pure mathematician might argue that, numerically, Kepler accounted for only a few more decimal places of accuracy.

Dr. Robert Moon of the Manhattan Project asked precisely the question posed above: how would Kepler have approached the paradoxes of the nucleus?  From this he developed what is now known as the Moon Model of the nucleus, which constructs the various nuclei from nested platonic solids.  Dr. Moon also hypothesized that nuclear decay, conventionally treated as a stochastic (random) process, may in fact be driven by forces we have not yet investigated, perhaps in the very large, such as cosmological processes.

Nesting of platonic solids used by Dr. Robert Moon to model the atomic nucleus.  The platonic solids are the only regular polyhedra that exist, an indication of the topology of space.  Kepler used a nesting of platonic solids to approximate the distances of the planets known to him.  The closures of the successive solids fall at oxygen, silicon, iron, and palladium.  Dr. Moon uses two of these structures to account for nuclei up to uranium.
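The arithmetic behind those closure points can be checked directly.  Taking the solids in the nesting order usually given for the model (cube, octahedron, icosahedron, dodecahedron) and counting cumulative vertices, each vertex standing for one proton:

```python
# Vertex counts of the four nested platonic solids in Moon's model.
SOLIDS = [
    ("cube", 8),
    ("octahedron", 6),
    ("icosahedron", 12),
    ("dodecahedron", 20),
]

total = 0
closures = []
for name, vertices in SOLIDS:
    total += vertices          # each new shell adds its vertices as protons
    closures.append(total)

print(closures)  # [8, 14, 26, 46]
```

The running totals 8, 14, 26, 46 are exactly the proton numbers of oxygen, silicon, iron, and palladium, the elements named in the caption above.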


This hypothesis was partially confirmed by the work of Simon Shnoll, who showed, with meticulous measurements, that the fine structure of atomic decay, previously considered to be totally random (stochastic), reveals periodicities corresponding to daily, lunar, solar and other cycles. (See Simon Shnoll, Cosmophysical Factors in Stochastic Processes, American Research Press, 2009.)

Shnoll also looked at a domain which is very lightly touched, namely life.  He first saw these variations in life processes as a chronobiologist.  There is some evidence, though sparse, that life discerns nuclear differences with a finer-toothed comb than non-living processes do, and not just on the basis of mass differences.  There is even sparser evidence that life might transmute elements.  Just as life processes are very picky about the handedness of their molecules, which are chemically indistinguishable, could there be a nuclear graininess that life recognizes but physics does not yet account for?

Surprise!  We don’t know everything!  What new physical principles await which, provoked by the paradoxes of the very small, will tell us something about how the universe is fundamentally organized?
