Ultrawave Theory

A New Understanding of the Quantum
This book is dedicated to the entire McCutcheon clan: the descendants of John “Pioneer” McCutcheon, who
settled in Virginia around 1730. He had the foresight to bring his family to America so that we, his
descendants, could have the best possible lives in one of the best countries in the world.
Acknowledgements
Many thanks to all of those who have read and commented on versions of this work while in progress.
Special thanks to the guys who are now with, or were formerly with, Teledyne Energy Systems in
Baltimore. They are: Dave Svrjcek for some useful criticism and helpful advice, Robin Franke and Tom
Hammel for math assistance, Rhett Ross for encouragement to persevere, and Mal McAlonan for significant
feedback and much appreciated advice concerning my treatment of E=mc², as well as discussions on other
esoteric subjects.
Special thanks to those whose input forced me to face the fact that I couldn’t dance around the issue
of what it means to replace Einstein’s E=mc². I had hoped not to offend everyone by saying Einstein was wrong immediately at the beginning of the book, but that is what I must do. Replacing E=mc² then forces
replacement of other quantities such as Planck’s constant with a new definition. This type of input forced
me to look harder at the coincidental reasons for correct answers being obtained when using the SI system.
This led to the realization that the definitions of the SI units make them special, and that is why they were able to be used to represent the different units that Einstein applied. Once the units were set, everyone who followed had to try to fit their work into those units, which is why some items now look simpler, since they no longer need to be force-fit into that unit system.
Enough cannot be said about institutions like NIST, WebElements.com, or the Physics arXiv for having so much information readily available for everyone to use. Thanks to all of the scientist-authors who have given their views of the current state of their own specialties. And a hearty thank you to
all those science writers who provided the nontechnical articles that kept me apprised of the current work in
particular fields of cosmology and particle physics during the past four decades. Many of the articles with
especially intriguing subjects led me to research those subjects further.
All of the information gathered to form the Standard Model (SM)—the most accepted current belief
of how the universe operates—was the invaluable resource from which all of my ideas evolved. Without all
the diligent scientists who contributed to the measurements and theories, there would have been nothing
from which to create ultrawave theory. It has been said many times and in many ways that everything that
has ever been discovered has come from, or was at least inspired by, the work of previous generations of
scientists. I know this to be true; and to build upon the work of others is to believe in what they have done,
bestowing upon them a well-deserved show of respect. Anyone trying to replace this body of work with
something completely different, or with ideas that are at odds with its conclusions, is almost certainly
wrong. Most of the conclusions of the current paradigm of physics are correct; it is just some of the ideas of
why they are correct that are misguided.
Although ultrawaves can be used to replace many of the stranger ideas used by relativity and
quantum theories, there are a lot of theories within the SM that work just as well within ultrawave theory
constraints. There are some theories that can only be called inspired, since they were derived from
misguided beliefs. I have much admiration for what Einstein accomplished with so little information when
developing his relativity theories. For those who developed workable theories like Quantum
Electrodynamics and Quantum Chromodynamics based on such an intrinsically flawed database of ideas, it
was truly an amazing feat. And finally, without the existence of string theories to pave the way, it would
have been next to impossible to explain ultrawave theory in a reasonably understandable manner.
Last, but certainly not least, the ring theory of the electron was discarded because a vital piece of information that makes it work was missing. Lord Kelvin and all who followed his original track, especially Arnold Sommerfeld in the 1920s, would not have believed that a velocity as great as 8.936E+16 m/s existed, or that it could explain all of the measured data about all matter particles. Who would have considered such a thing a century ago? Certainly not Einstein, whose solution was to limit everything imaginable to the speed of light. Even with a lot of evidence that the ultrawave velocity exists, which can explain in a physical manner many of the features of our universe that are otherwise explained only by near-magical theories, it is still a difficult concept for most physicists to consider as a possibility today.
[For those of you who do not like the idea of having to wade through a lot of math, I have confined the equations necessary to define ultrawave theory to a special chapter that you need not read in order to become familiar with the workings of the theory. There are some equations in the other chapters, but they are there merely to demonstrate the simplicity of the math needed to describe the theory, or to provide context. I believe that anyone who is truly interested in understanding ultrawave theory will be able to do so with only minimal trips to a dictionary for definitions of some of the terms used.
This simplified understanding of the nature of physics should help in advancing the curriculum of learning
institutions in a manner that permits more specialized degree programs in the many branches of physics.]
Table of Contents
INTRODUCTION
CHAPTER 1: WHAT IS WRONG WITH THE STANDARD MODEL?
CHAPTER 2: WHAT ARE ULTRAWAVES?
CHAPTER 3: CONSTRUCTING ULTRAWAVE-CREATED PARTICLES
CHAPTER 4: PARTICLE MASS, PLANCK’S CONSTANT, AND THE NEUTRINO
CHAPTER 5: THE CONNECTION BETWEEN PARTICLE CHARGE AND ANTIMATTER STATUS
CHAPTER 6: WHY ONLY THREE PARTICLES?
CHAPTER 7: ARE MASS INCREMENTS THE KEY TO NUCLEI?
CHAPTER 8: RECONCILING MAGNETIC MOMENTS WITH PARTICLE SIZES
CHAPTER 9: CREATING ALL SPIN TYPES OF ATOMS
CHAPTER 10: REDEFINITION OF PARTICLE EQUATIONS; FIXING CONSTANT CONSTRAINTS
CHAPTER 11: WHAT ARE PHOTONS?
CHAPTER 12: PROOF
INTERLUDE
CHAPTER 13: CREATION
CHAPTER 14: ELECTRICITY AND MAGNETISM
CHAPTER 15: SPACE AND TIME
CHAPTER 16: GRAVITY
CHAPTER 17: OTHER MYSTERIES
CHAPTER 18: LIFE IN OUR UNIVERSE
CHAPTER 19: THE FUTURE
APPENDIX
TABLE 1: List of applicable Physical Constants from NIST; Specialized Ultrawave Theory Constants
TABLE 2: Atomic Radii
Introduction
“If I have seen further than others, it is by standing upon the shoulders of giants.”
Isaac Newton
This opening quote has been used many times in popular science books. It represents the epitome of the
scientific process. Previous work completed by others that can be tested and verified is the basis for nearly
all subsequent discoveries. Very little is ever accomplished without someone else’s first step. All theories in particle physics are built upon ideas from within other theories that have preceded them; ultrawave theory is no exception. This borrowing of ideas and information is especially true concerning string theory and the ring theory of the electron, both of which provide a basis for ultrawave-created particles.
“Ultrawave Theory” is the culmination of decades of study and an intense desire to learn the truth of
how we exist. I whole-heartedly believe that I have found at least a glimpse of that truth. It has taken me
more than three decades of study and more than eight years of research and writing to be able to complete
the search and show the results. My goal is to convince you with the evidence provided that only ultrawave-constructed matter can reasonably explain our existence on the evolving world which we occupy. You need not be convinced directly; rather, as a simpler definition of matter particles emerges from the ideas presented about how ultrawaves work, the task will complete itself.
If you think you already know a lot about the creation of matter, then prepare to be shocked. I am
about to replace everything you believe about particles and atoms with such a simple collection of new
ideas that you will wonder how no one thought of them before. Actually, someone did, to a certain extent, but that was a century ago, and an important part of the puzzle was missing. The math that describes matter-creating particles is so easily understood that it will only add to that amazement. Almost all fields of study will be influenced by these ideas, even if there is no immediate impact on most of them. The fields of particle physics and cosmology, however, will require some immediate redefinition of dimensional terms and replacement of long-established ideas.
It has long been taken for granted that the ultimate speed of the universe is the speed of light, which
is approximately equal to 300,000 kilometers per second (186,000 miles per second). While this does seem
extremely fast, it will be shown that it is far from the fastest speed possible. One of the two waves
mentioned herein is a normal light speed entity, but the other is superluminal (faster than light), and
propagates at such a rate as to make light seem as slow as pouring thick molasses. Together these two types of
entities create the particles that produce the atoms from which all matter is created.
Normally, the mere mention of a faster than light object creates a conflict with relativity and is
reason enough to disregard it, but there is plenty of proof provided within these pages that shows why this
particular type of entity is an exception. The data presented proves not only that these entities are some type
of string-waves, but that they can effectively create all of the features that produce a self-consistent
universe. Included are suppositions that electricity, magnetism, light, gravity, and other less common
features of existence are also produced by these same entities, but in slightly different configurations from
those that create the particles. These entities are simple, basic wave structures with a quantized nature, and
when combined with another type of light-speed entity, are the progenitors of all matter and energy.
Not enough progress has been made in understanding particles since the discovery of the electron by
J. J. Thomson over one hundred years ago. Ultrawave theory represents an important departure from the
current Standard Model of theories describing particle construction. The following theories will fully
delineate the construction of matter, and hence the true nature of the Universe. The theory corresponds
perfectly with all known observations. While my interpretations of some of the data herein may vary from those of others, the overall view remains consistent with the basic premise of a
string-like, wave-constructed existence.
In this book you will find a theory that works for all matter creating particles, known as spin-1/2
particles. Not only does it work without modification for the different families of spin-1/2 particles, it can
also explain our entire existence through a simple and logical progression of the same basic ideas of how the
components of the theory operate in creating matter. In ultrawave theory, the composition of all forces and
other spin-value particles can be deduced from the same operations used to create spin-1/2 matter particles.
The best feature of ultrawaves is that they describe particles exactly, not closely, not approximately, but exactly, and much more simply than QED does. This is truly the first step toward the holy grail of physics, a
Theory of Everything.
Replacing relativity and quantum theories with ultrawave created particles and energy produces a
platform from which a single theory emerges. Einstein’s relativity theories explain a lot about the behavior
of collections of particles within space and time, but they have never been successfully combined with quantum
theory. Quantum theory defines the behavior of particles when interacting with photonic energy or with
each other. A key to eliminating any barrier that might exist for accepting ultrawaves is for relativity
proponents to understand how the idea of curved spacetime can be replaced by the strings and branes of
ultrawave theory, and for quantum theorists to understand how ultrawaves can supply a simple physical
system that creates all particles.
Ultrawave theory differs from Einstein’s theory regarding the definition of energy in the particle
realm—with his use of E=mc²—by introducing an entirely new interpretation of energy. It does not differ
from quantum theory or relativity regarding the behavior of three-dimensional matter particles in the atomic
world, nor in regard to the behavior of large collections of matter within spacetime. As an example of the
differences that do come from this redefinition of particle energy, photons are not actually a form of energy,
but rather a new form of matter particle. In fact, all particles in the atomic realm have mass in ultrawave
theory. Not only does this aspect of the theory provide answers to many particle physics questions, it also
provides answers to many cosmology questions. This may sound outrageous to most physicists, but it makes
perfect sense when all the evidence is given and the theory details can be seen as a complete recipe for
matter and energy creation.
Even though there are many past experiments that can only be explained by some form of faster than
light (FTL) entities or processes, there is an unfortunate resistance to anything that is based on, or even
mentions, the existence of FTL travel, especially if it involves a new theory. Because the information
contained in ultrawave theory is of such import, and is such a significant departure from established science,
I felt that publishing outside the physics community would be the best, if not the only way to get all of the
information to everyone who may be interested in learning about this much simpler way of understanding
matter particles and their interactions with so-called energy particles. This information has been provided to
universities and laboratories throughout the world, and it is up to them to decide whether to use it or not.
At first glance, ultrawave theory seems counterintuitive to those who believe in relativity theory. My
hope is to convince everyone that ultrawaves better explain the limitations of relativity and that the two
theories are not mutually exclusive. The large number of correct answers I obtained left no doubt in my
mind as to the validity of ultrawaves. I want to give everyone interested in particle physics the opportunity
to view the concepts of this type of superluminal wave theory and its ability to determine particle values,
allowing each person to make up his or her own mind about the existence of these waves. I believe there are
many unanswered questions that can benefit from superluminal wave theory. I found it imperative to write this book to help move the process of answering some of those questions along.
Because I need to include equations for those who desire them, and to prove that the system works, I
have included Chapter 12, which is simply called “Proof”. It contains just the required mathematics, and
immediately follows Chapters 1 through 11 that address how ultrawave theory answers simple questions
about matter and energy that the Standard Model either cannot answer, or does not answer adequately.
These preceding chapters are necessary in setting the background for understanding how ultrawaves work.
They are written for the less knowledgeable person who is not an expert in the fields of particle physics or
cosmology, but is still interested in those subjects. I hope the descriptions are clear enough to accomplish
the goal of defining the questions, and that the answers are at least as clear. If you are not interested in
reading about advancements in science, then this book is probably not for you.
Some of you will automatically jump to the conclusion that there are dimensional inconsistencies
with my assertions, or other problems that will make you think this cannot be a reliable theory. I assure you
that I have addressed all of those considerations within these pages. It was not possible to do it quickly
enough to present the information in a publishable paper, which is the major reason for requiring it to be a
book. Furthermore, there are a lot of conceptual details that you need to incorporate into your thought
process to be able to view the theory information as objectively as required. After reading about ultrawaves,
if you still do not believe that it can be right, I want you to think about this question. If you believe that it is
only coincidental that all of the equations, which I had no part in deriving, work to produce an accurate
description of particles, and that the entirety of existence can then be defined using ultrawaves and just a
few ideas that fit within an ultrawave framework, why can’t it be done with only light-speed entities and the
methods currently being used?
Even if you are steadfastly mired in the current paradigm, you know that it is fundamentally flawed.
All you have to do is open your mind to a few new concepts and you will see how superluminal wave
theory makes perfect sense, and provides the only reasonable explanation for the entire history of
experimental evidence about the subatomic realm. Ultrawave theory provides answers to many previously
unanswered questions and should persuade even the most skeptical of minds to its truths. To paraphrase a
sentiment often used by physicists and mathematicians, “It is so beautifully simple, it just has to be true.”
Chapter 1
What is wrong with the Standard Model?
“The more success the quantum physics has, the sillier it looks. ... I think that a
'particle' must have a separate reality independent of the measurements. That is, an
electron has spin, location and so forth even when it is not being measured. I like to
think that the moon is there even if I am not looking at it.” Albert Einstein
There must be something inadequate about the Standard Model, or else there would be no reason to look
elsewhere for understanding about our universe and its existence. The most basic aspect of defining our
existence is the explanation of creation of the fundamental material objects that comprise it. The Standard
Model view of electrons, protons, and neutrons, which make up all material objects, is poorly conceived.
First of all, there is no physical extent to electrons; it is as if they exist only as points. If you were to ask a
physicist how big a proton or neutron is, all he or she can tell you is that they are smaller than an atom, and
most likely don’t have definable boundaries. It just doesn’t make sense that all three of these particles
behave similarly, but according to current theory are supposed to be so differently constructed.
Another poorly conceived idea contained within the Standard Model is the introduction of a property
called spin. It is a behavior similar to how a normal object spins, but it is not described in that manner.
Instead, it is just an attribute that gives the particle an orientation with respect to a magnetic field, exactly
like an everyday magnetized object has, but somehow not generated in the same way. Even though electrons
are supposed to have no physical extent, they show the same behavior, which is the twisting that keeps them
aligned with the movement of a magnetic field, or as having a magnetic moment. Particle spin is a measure
of interaction momentum, but even that is misunderstood, since electrons also show the same type of
momentum transfer as protons.
The last and worst bad idea incorporated into Standard Model particles is the explanation that
electrons are single entities, but that protons and neutrons are made of smaller entities called quarks. The
creation of the quark was begun in 1961 by Murray Gell-Mann and Yuval Ne’eman in response to the many discoveries concerning particles that occurred in the 1950s, after high-powered particle accelerators began
smashing electrons, protons and even whole atomic nuclei together. So many strange, new, and short-lived
particles were produced that theorists were struggling to come up with theories to explain them all. There
were also indications that only very small areas of the proton acted in repelling other particles that were
hurled at it. The two men came up with a pattern of mass and charge properties dubbed the Eightfold Way (the SU(3) group), after an Eastern philosophy. Presumably, Gell-Mann and Ne’eman felt that the symmetry of the philosophy reflected the symmetry of particle families. Gell-Mann theorized that matter was made of several types of quarks that fit the pattern of the SU(3) group, and was able to predict a missing particle that was later
found. He used the name quark, having seen it in James Joyce’s Finnegans Wake; it sounded as strange as
the particles that were being discovered. George Zweig independently came up with the idea and called
them aces, but the more unusual quark name won out.
The different types of quarks required to create the wide variety of particles produced in accelerators
were given the names Up, Down, Top, Bottom, Strange, and Charm. Each quark has alter egos identified
by using the word color, although these colors have nothing to do with the actual colors of light that we see
with our eyes. The truly odd thing about quarks is the necessity to have them hold partial electrical charges.
For matter particles other than the electron, this means that each quark holds either 1/3 or 2/3 of the electric
charge of a charged particle. While this is a very disturbing way to explain electric charge, since only complete charges are ever observed in nature, it continues to persist as a feature of quantum mechanics (QM).
Although quarks are an essential part of QM and work very well in describing particle interactions,
they have never been seen outside of an atom, so there is no proof that they exist. Quarks have been
ascribed the unusual property of being bound by an adjustable force. This force gets stronger the farther the
quarks get away from each other, which is why they cannot be observed outside of particles. If this sounds
odd, that’s because it is odd. The only reason for thinking of something like this is to explain why quarks
are not observed alone. They might as well have decided that fairy dust prevents quarks from breaking out
of particles. Unfortunately, Standard Model physics is full of unusual ideas like these, which makes me wonder
why less strange concepts such as FTL entities are so abhorred.
Quark theory seemed to be more of a descriptive language than an actual discovery, but it worked
well enough at the time to predict more particles, some of which were later found in the ever-higher energy
collisions produced within the largest particle accelerators. Some of the newly discovered particles seemed
to be larger, heavier versions of the proton, electron and neutron. Because it didn’t seem that anything other
than solid objects could be responsible for creating all of the newly discovered particles, the quark was
accepted, warts and all.
Matter seems to behave in a most inexplicable manner at the atomic level, or that of the size of
particles and atomic nuclei. If you try to measure the velocity of a particle, you can’t determine its location,
and if you try to pin down its location, then you can’t determine its velocity. The reason is simple: looking
at atomic-sized particles requires using particles of light, which disturbs the inherent motions of whatever
you are trying to measure. Physicist Werner Heisenberg defined this behavior in what is now referred to as
Heisenberg’s uncertainty principle, even though it does not provide any further details about why it is true.
The strange behavior of matter particles led to the Copenhagen interpretation of quantum theory, one
of the oddest ideas ever considered as a possibility for how things operate within our universe. A group of
physicists, led by Niels Bohr and based in Copenhagen, Denmark, interpreted the strange behavior of
particles as a true feature of existence. It was also partly based on the work of Erwin Schrödinger, who was
able to define matter with a wave function. The Copenhagen interpretation explains the underlying
instability of matter in a most bizarre fashion; that nothing exists as a wave or a solid until we observe it.
This is almost funny, as it seems along the same lines of belief as ghosts and extrasensory perception.
Luckily, its influence only shows itself in certain aspects of study within the physics community. Still, taken at face value, Heisenberg’s uncertainty principle is one of the ideas that will be explained in a more
palatable manner within the chapters that follow, and the Copenhagen interpretation will be debunked
merely by the new understanding that ultrawaves provide.
The introduction of probability into particle behavior is another problem for the SM, not that
probability isn’t needed; it is just that probability has corrupted the basic ideas of matter and energy.
According to Einstein’s E=mc², energy can be converted into particles. If two particles are created simultaneously, they may be ejected from the creation site in opposite directions with equal and opposite momenta, with speeds that depend on the masses of the particles. For example, if one is heavier than the other, then the lighter one will have the higher velocity. It is purely a multiplication of the mass times the velocity; if one had mass 3 and its velocity were 4, and the other had mass 4, then its velocity would have to be 3. According to the Standard Model, some of the energy is used to create the particles and some is converted into momentum. There is no exact predictive ability in determining what will come out of the energy conversion; it is either experimentally measured or subject to a probability. There are many other ways in which probability plays a role in Standard Model particles, which lends apparent credence to these wrongheaded ideas.
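To make the arithmetic in this example explicit, the relation is simply conservation of momentum for a back-to-back pair, a standard textbook identity rather than anything unique to either the Standard Model or ultrawave theory:
\[
m_1 v_1 = m_2 v_2 \quad\Longrightarrow\quad v_2 = \frac{m_1 v_1}{m_2} = \frac{3 \times 4}{4} = 3 .
\]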
Another aspect of particle behavior that causes problems for the Standard Model is the creation of
atoms. First, there is the idea that electrons somehow orbit protons. About the only evidence that supports
this claim is the “spin” referred to earlier, and since spin is a bad idea, because it should only apply to
objects with physical extent, the idea of orbiting electrons also becomes a bad idea. How would an electron
be attracted to a proton with such force as seen with electrostatic attraction, yet end up orbiting around it
rather than crashing into it? QED proposes that it is done by passing photons back and forth, but that doesn’t
make sense either. It would require that the photons stop their attraction at a specific distance and then
replace it with the opposite behavior.
As if electron orbiting weren’t bad enough, protons and neutrons are put together with a type of glue
called binding energy, which seems to have any value necessary to accomplish the task. This binding energy
is the answer to the question of why pairing protons and neutrons results in masses that are less than the sum
of the individual components. Not to mention the fact that the resultant sums can have any size necessary to
make the element (atom) without regard to the actual number of individual protons and neutrons used. It is
certainly convenient to be so flexible, but hardly worthy of praise if it is not explainable in simple terms.
Scientists seem to understand the limitations of the Standard Model, as it has not provided detailed
answers, only generalities. Because no one has been able to combine quantum theory with relativity theory,
theorists have looked down other avenues. As early as 1984, many physicists were taking a new look at an
abandoned theory of matter called String Theory. String theory seems to be an elaborate way of explaining
the effects of ultrawaves but without the benefit of directly using their extremely high velocity. Extra
dimensions are used, rolled up to around the Planck length (about 1.6E-35 of a meter), with strings embedded within our
normal three-dimensional space. The vibration pattern of the strings through these extra dimensions, plus
our normal three dimensions, determines what type of particle is created. The equations and ideas used in
string theory are so esoteric that it is very difficult even to explain what is being attempted. Even though
strings are the closest idea yet to the string-like waves of ultrawave theory, the advent of curled-up
dimensions (currently believed to be seven in number) necessary to make superstring theory work is yet
another strange concoction that has not provided the necessary detailed answers, only answers that are also
general in nature. Many of the answers provided by these theories are nonsensical from the average person’s
perspective.
Extra dimensions have taken physicists off into the netherworlds, where progress may only be at the
expense of even more bizarre ideas than those of the Copenhagen interpretation of reality. Superstring
theory attempts to create a base for our existence; unfortunately, it will never get there without taking into
account the ultrawave velocity. Just because string theorists haven’t stumbled onto the ultrawave velocity
doesn’t mean that we should discount everything done with string theory. Mathematics is a language, and
superstring theory is like a special form of that language. As such, the latter may provide some key insights
into the questions of light, gravity and other subjects not dealt with fully in this book. It is disheartening to
think that all of the hard work by superstring theorists could go to waste. After all, the ideas of strings and
branes (short for membranes) originated with string theory, and they are the closest fit to the ideas involved
in how ultrawaves operate in creating matter and force.
The point of this chapter is to reveal how little the Standard Model actually explains clearly about
the workings of subatomic particles. Mostly it provides generalizations, some of which are very strange and
nothing like our personal experiences with nature. Sometimes it does not even attempt an explanation; it just
states that this is the way it is, so we must learn to live with it.
There are other types of particles that the Standard Model does not describe adequately. Eventually,
some of them will be discussed, but for now it is only necessary to discuss the matter-creating ones. It is these particles that are the most important, and the ones that can be proved to be created differently than
presently believed.
Chapter 2
What are Ultrawaves?
“Scientific discovery and scientific knowledge have been achieved only by those who
have gone in pursuit of it without any practical purpose whatsoever in view.” Max
Planck
Before you can be told how to create matter from ultrawaves, it must be explained what ultrawaves are and
what they look like. The easiest way to do that would be to show them to you, but that is not possible
because there is no way to determine exactly what they look like. They operate on such a small scale that
even sub-atomic particles are huge by comparison. When these waves create matter, the large range of scale
between the sizes of the components also creates visualization problems. If any portion of a matter particle
is shown to a scale that would fit on this page, then the other features are either too large or too small to be
seen at the same scale. Only analogies can be used to describe the physical attributes of ultrawaves, and
even with measured values we can never be completely sure of their structure.
Some numbers will be provided for the actual sizes of particles, along with a few scaled-up examples using items from everyday experience. These scaled-up examples might give you a reference point from which to compare the disparity in sizes in terms that are more familiar to you. All of
the sizes given pertain only to the particles; the sizes of the individual waves can be almost anything. One of
the waves is so small that there will never be a way to determine if our conceptualizations are even close to
correct. There are a few diagrams included that are out of scale, but they may not be any more helpful than
what your own imagination can supply.
In late 1991 three questions that appeared to have nothing in common, and which will be explained
in detail, led to the idea for ultrawaves. The first question was why Einstein’s equation E=mc² was able to define energy, and used the value c² rather than some entirely different value. The second was why light
showed interference patterns in a double-slit experiment, whether it was a multi-photon stream or just a
series of single photons. The third was the evidence for an inflationary period in the earliest stage of the
origin of the universe. The key to answering all three puzzling items turned out to be the creation of a wave traveling at approximately 8.936E+16 meters per second (m/s) along a circular path.
“But,” you retort, “didn’t Einstein prove that things like ultrawaves are impossible, that nothing can
travel faster than light?” Fortunately, the answer is no. There is a loophole for exceeding Einstein’s speed
limit that he was not aware of, especially since it was not an item of consideration at the time of his
proposal. If an entity is only two-dimensional (2D), rather than three-dimensional (3D), it can be exempt
from the c speed limit. This means that while everything is created by 2D ultrawaves that are exempt from
normal matter behavior, when the waves do eventually form three-dimensional entities, these entities must
then abide by the rules of relativity. The 3D behavior of matter is well understood; it is the 2D behavior of ultrawaves that has gone either unnoticed or has been misinterpreted.
It is the coincidental fact that ultrawaves travel at nearly an exact multiple of the speed of light (a power factor) that has kept them hidden. This seems like an unusually convenient coincidence, this number being a near numerical equivalent to the square of c (in the SI unit system only), and it is, until you examine exactly why these numbers fit together the way they do. It has always been assumed that SI
units are arbitrary, but they are actually based on some common factors. Some reasons will be given in later
chapters as to why SI units are like this, but for now just assume that this is true. Even if it were assumed to
be coincidental, these values permit using an extremely simplified set of equations to perfectly define spin-1/2 particles, which is amazing in itself. Evidence of the “almost” power factor will be examined more
closely in Chapter 6. For now the concentration will be on the behavior of these 2D waves that defy light
speed, and how they are incorporated into branes to create a 3D existence.
Ultrawaves can explain odd behaviors of material objects that are not easily understood in current
theories without resorting to some very unusual ideas. The wave nature of material objects, for instance,
seems more natural if you believe from the start that everything is created from waves rather than solid
objects. Some stable atomic elements can diffuse into each other over time, so the very idea of solidity
seems at odds with atomic behavior. Why is this true? The answer is that solidity is not real but only an
illusion. It is due to our limited perception of reality, which is based on our ability to see only light and feel
infrared radiation and matter but nothing more. It makes sense because we are made of 3D matter and are
subject to its nature. The idea of how objects can appear to be solid even if they are not should become
clearer as you incorporate into your consciousness what ultrawaves are and how they work.
What changes to current theory are required to prove that the simplest particles of matter are created
with waves rather than solids? On the macroscopic level, there appear to be no significant differences
needed. It makes just as much sense, if not more, to believe that the wave aspect of atomic matter is its one
true nature and that there is no solid nature. Although the wave nature of matter is not a new idea, the idea
that only a wave nature exists and that there is no solidity is relatively new.
It is obvious that there must be a way in which the combined wave speeds of 8.936E+16 meters per
second (m/s) and 3E+8 m/s can work together to create particles of matter while still remaining within the
lower light-speed limit. It is clear that if the speed of light is perceived as the fastest speed possible then that
speed must somehow control the perceived speed of the ultrawaves. This realization led to the idea of how
two waves with very different speeds could travel together. Once the numbers were fitted to this particular
idea, it was found that they fit this type of behavior exactly. Now, all that is required is to describe how
these two different types of waves create matter.
The two wave types that create everything in our universe are given the simple names W1 and W2.
The W1 wave behaves very similarly to a sound wave, or like a compression wave that propagates
spherically. The speed of propagation of the W1 wave is equal to the speed of light. One major difference
between this wave and a normal compression wave is that the usual compression wave we are familiar with
needs a medium in which to propagate, but the W1 wave to some extent creates its own medium. This wave
type can be equated with the 2D brane model of spatial structure currently receiving much theoretical
attention.
The entire universe is composed of curved 3D geometry. Curved 3D geometry can be simulated by
taking a positively curved surface such as a globe and drawing a triangle on it. If you measure the angles of
the triangle and add them up, you get more than the usual 180 degrees. For example, a smaller globe will
have a higher measured angle between any two legs of an equilateral triangle than a triangle of an
equivalent leg length when measured on a larger globe. Because the universe has stretched to such a large
volume, the W1 waves in a free state would be nearly flat geometrically. When a W2 wave combines with a
portion of a W1 it should produce a nearly flat geometry. Spacetime would have been highly curved at the
beginning of time, the beginning of time being what is usually referred to as the “Big Bang”. Any curved
geometry that is referred to here is automatically 2D because it is measured in only two dimensions at a
time, and is wholly dependent on the amount of 3D curvature that exists during the relevant era of interest.
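For readers who want the globe analogy made quantitative, the standard result from spherical geometry (Girard's theorem, not anything specific to ultrawave theory) says that a triangle of area A drawn on a sphere of radius R has an angle sum of
\[
\alpha + \beta + \gamma = \pi + \frac{A}{R^2},
\]
so for a triangle of a given area the excess over 180 degrees grows as the globe's radius shrinks, which is the behavior described above.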
Higher- or lower-dimensional entities do not have to fit a relativity-derived rule set. Being only two-dimensional, ultrawaves exist beyond relativity constraints with no contradictions, since nothing detectable
exists with regards to a 3D velocity. All velocity measurements must be made in relation to W1 waves,
which are only traveling at the speed of light. Measurements could also be in relation to a collection of
matter, which will always be traveling at a speed slower than that of light. No matter how one looks at it, all
matter depends on the brane velocity, which is limited to c and controls how the W2 ultrawaves are
presented to an observer. It was this lack of information that something existed outside the 3D environment
that led Einstein to make his mistake regarding the nature of energy.
When ultrawaves create a particle, they present us with a way of looking at matter using a physical
domain description that is 2D wave-based with a quantum nature, rather than the 3D object based quantum
nature supplied by the Standard Model. There is nothing in relativity theory that relates to any dimensional
constructs beyond 4D spacetime—three physical dimensions plus time. In that respect UT is better than
other string theories, because it is the only one that fits within that constraint. What a 2D nature means to
ultrawave theory is that special rules have to be followed to create a 3D universe. The following are the
basics in the form of a few assumptions to provide limitations and expectations.
Assumption 1: 2D cosmic membranes of various dimensions, along with 2D ultrawaves, define the size and
shape of the universe. These branes and strings are the W1 and W2 waves, but in a state of
existence independent of matter.
Assumption 2: Brane length and width are indeterminate at present and may be variable, or at least are
limited to the size of the Universe.
Assumption 3: The natural propagation velocity of a brane is c, but may be less depending on whether it has
the ability to interact with another brane.
Assumption 4: A 2D string-like wave (ultrawave) is attached to these branes in either one or two
dimensions. One-dimensional interaction is not directly detectable.
Assumption 5: 2D ultrawaves have a natural circular motion under all possible conditions, tending to return
to an origin point, but can be displaced in both linear and radial directions via natural or
artificial motion or acceleration of the W1 brane to which it is attached.
Assumption 6: The natural propagation velocity of all ultrawaves is approximately 8.935915E+16 m/s. This
new velocity is given the identifying symbol Cc.
Assumption 7: Neither branes nor strings may be created, destroyed, nor have their velocities altered.
(Recent evidence suggests that the velocities may change slightly over huge time spans, but
other reasons for this evidence have been proposed that may make the point moot.)
Figure 1.1
A W2 ultrawave is much more difficult to describe than a W1 wave; not only because of its
incredible velocity, but because of its extremely small size and its unusual behavior of always moving in
circles. If viewed parallel to its direction of propagation, the W2 wave might best be visualized as being
similar in shape to a sine wave. It appears as if it rides the face of a W1 wave and behaves as if it were
tethered to a string attached to a pin that is in the center of a disc. Because of the large velocity of the W2
wave, it will traverse an extremely large diameter as compared to the linear distance traveled by the W1
wave, which is limited to light speed.
Being linked to the W1 wave, the W2 wave shares its trajectory during propagation. It is this pairing
that allows the waves to create particles. In effect, the two will create a corkscrew-like construct, with an
average thickness equal to the amplitude of the W1 wave. The advancing wave may look something like
Figure 1.1. It is important to realize that nothing can destroy the W wave propagation; it is merely the waves
themselves that alter each other’s course, wavelength, or amplitude.
The difference in scale between the W1 wavelength and the overall diameter created by the extreme
speed of the W2 wave is difficult to comprehend. As an example to aid in visualization, if the W1
wavelength were equal to the thickness of a human hair (.01”), the circumference of the disc representing
the W2 wave would be almost 1863 miles. To have any hope of conveying what these waves might look
like, the scale of thickness to diameter must be greatly exaggerated. The smallest range of scale is exactly
1E+7 between the components. The ratio of the W2 wave speed to the speed of light, which is represented by the thickness in Figure 1.1, is about 3E+8. Examples like the one above will be provided
in some cases to make it easier to visualize the large disparity in sizes.
It is difficult to show visually how the actual waves appear. Instead of the single hard-looking
medium shown in Figure 1.1, there would be two distinct features. The inside would be more like a flat
membrane than an edged surface. All of the action would take place on the outside edge where the W2 wave
resides. Resembling a rippling filament, it would progress so quickly around the periphery that the middle
would seem to stand still, similar to the way a second hand sweeps out a complete circle while the hour
hand doesn’t seem to have moved. Because of the speed differential, the ratio of the frequency components
between the two wave types is also 3E+8. It is hard to show this in a physical manner, especially when it is only conjecture that it truly looks this way. It is not as important for the exact construction of the waves to
be understood at this time as it is to realize that the math that describes this particular set of physical
attributes actually works to allow particle creation.
The two examples in Figure 1.2 are single-wavelength physical representations. The one on the left
is derived from Figure 1.1, where the internal W1 wave acts as a lateral restraining device holding the W2
wave radially at its leading edge. The figure on the right is different in that it depicts something like a
standing wave in the middle. Although it is not easy to imagine how W1 and W2 waves are interconnected,
or even how the waves advance, it must be something like this, since the numbers only work to describe this
type of physical relationship.
The Standard Model has adopted the position that physical meaning is not necessary in describing
particles, which was most likely the beginning of its downfall. How can entities so random and elusive, as
described by quantum theory, produce such exact replicas for each particle type? Only some sort of physical
system is capable of making all electrons identical, all protons identical, etc. The views shown are only a
first approximation of what the actual physical system entails. It is not as important to fully define the exact
system of operation as it is to prove that there is a workable physical system that perfectly describes matter
particles.
Figure 1.2
The amplitude of the W1 wave represents a time-distance relationship that will be multiplied by the
speed of light, approximately 3E+8, and by the number 8.936E+16—both in m/s—to reveal the properties
of the interactions of the two waves. After determining the proper form of the equation that establishes
whether or not ultrawaves exist, the first possible time-distance relationship that presents itself as a key to
deciphering particle construction is the mass of the electron. From that first mass increment onward, all
mass increments can be used to determine particular aspects of matter. However, only certain values are at
points where the electrostatic and magnetic forces interact in a positive way to allow particles to form.
The particle constants found on the National Institute of Standards and Technology (NIST) website
are the ones used to generate nearly all of the data contained herein. Their values seem to be the most
consistent, and the most accurate. [See Table 1 in the appendix for a list of all applicable constants.] The
units used in this book are the International System units of meters, kilograms, seconds, and amperes. The remaining three base units, the kelvin, the mole, and the candela, are not needed.
Except for the difference in operational speeds, it is interesting that the ideas presented in this work
resemble the ring theory of the electron. First formally proposed in 1915 by A. L. Parson, the ring theory is
still being pursued today by a few faithful followers. Unfortunately, all of the work currently being done on
the ring theory uses only the speed of light, which carries with it all of the limitations that have plagued all
other atomic theories. The original concepts used to devise ultrawave theory were made without the detailed
knowledge supporting ring theory; however, some of the same basic equations are needed to describe
particle features. Once it was discovered that ultrawave theory resembled the ring theory, the equations
were applied. Without these equations, which are based on macroscopic electromagnetic principles,
ultrawave theory would never have developed into a real description of the creation of matter.
Chapter 3
Constructing Ultrawave-Created Particles
“The mathematical framework of quantum theory has passed countless successful
tests and is now universally accepted as a consistent and accurate description of all
atomic phenomena.” … “The verbal interpretation, on the other hand, i.e. the
metaphysics of quantum physics, is on far less solid ground. In fact, in more than
forty years physicists have not been able to provide a clear metaphysical model.”
Erwin Schrödinger (about 1960)
Almost unbelievably, combining the W1 and W2 waves into a single unit can explain all of the observed
characteristics of particles. The electron, proton and neutron, for example, can be described using the
interactive combination of just these two waves in one simple configuration that forms another simple
configuration. Now it becomes clear that what were once thought to be quarks are just the localized points
of interaction of ultrawaves in their smallest dimensions.
One of Einstein’s major contributions to physics, and for which he is best known by the layman, is
his introduction of the equation E=mc², which states that the energy content of a particle can be considered as equivalent to its mass content. It evolved from experiments with radioactive atoms and the energy released by radioactive decay. The first premise for the introduction of ultrawaves is to provide another interpretation for why Einstein’s energy equation works. The interpretation is that this equation used for particle energy is a misapplication, so that the c² number does not actually represent a velocity squared, but rather the single higher-level velocity of ultrawaves, now called Cc.
equation that replaces Einstein’s is identical to that of a type of momentum, and the units for Cc are meters
per second, instead of meters squared per second squared.
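In terms of units, the replacement being described can be written side by side as follows; the symbol p_u for the replacement quantity is introduced here purely for illustration and is not a name used elsewhere in the text:
\[
E = m c^2 \;\left[\mathrm{kg\cdot m^2/s^2} = \mathrm{J}\right]
\qquad \text{versus} \qquad
p_u = m\, C_c \;\left[\mathrm{kg\cdot m/s}\right],
\]
which is the momentum-like form referred to above.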
Since the units that Einstein used made sense the way he defined them, it was related to matter in
that manner. If he had applied a velocity to the value and thought about how it might have worked out, he
most likely would have developed ultrawave theory himself. Unfortunately for him, there was a large
amount of evidence that nothing was capable of traveling faster than light, and some of the other ideas that
are influential now were not available to him at that time. Under these circumstances, it would not be fair to
fault him for his conclusions. After all, his equations do work for energy, and quite accurately, since the
difference between the number c² in SI units and the Cc velocity in SI units is very small. Unfortunately, the
energy misapplication has resulted in a compounding of wrong ideas about particles and atoms.
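As a quick numerical illustration of the closeness just described, the short sketch below simply compares the number c² with the Cc value quoted in Assumption 6. It is only a comparison of the two numbers as they appear in SI units, not a derivation of either; the two quantities also carry different units (m²/s² versus m/s), which is the point being made.

```python
# Values as given in the text: the SI speed of light and the ultrawave
# velocity from Assumption 6. The comparison is purely numerical.
c = 299_792_458.0            # speed of light, m/s
Cc = 8.935915e16             # ultrawave velocity quoted in the text, m/s

c_squared = c ** 2
relative_difference = (c_squared - Cc) / c_squared

print(f"c^2 (SI number)     = {c_squared:.6e}")
print(f"Cc (from text)      = {Cc:.6e}")
print(f"relative difference = {relative_difference:.2%}")
```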
If you consider what happens at extremes, or if you were to translate to a different set of units, some
odd things happen with Einstein’s equation. First, if you were to give the speed of light a value of 1, E=mc² then has the same numerical value as E=mc. Granted, the units are different, which is what you will be told, but it causes all kinds of havoc further down the line in the form of infinities that arise. If c is a speed limit, then what does it really mean to have c squared? It seems to make no sense, which is why it makes more sense to replace c² with a single number representing a velocity.
Ultrawaves show that matter’s mysterious ability to convert to energy is an illusion resulting from
the erroneous concepts that the speed of light is the ultimate speed, and that a velocity close to the speed of
light squared is not possible. The equivalency of mass and energy turns out to be just simple momentum
conversion and vector reorientation. The disintegration of matter into observed energy involves the collision
of two W2 waves traveling at the 8.936E+16 m/s speed. Particles can only interact with other particles,
whether they are alone or as part of an atom, so there are always two clashing W2 waves during destructive
interactions. When you calculate momentum for two colliding waves from two different particles, the
resulting units remain the same, since momenta simply add.
Real problems arise, however, if you try to mix kinetic energy and particle mass energy, which has
been the big stumbling block with the SM. Kinetic energy is calculated differently from W2 wave
momentum in UT, because it concerns the linear motion of the particle as a whole as it moves through
space, and has nothing to do with how the waves create the particle. For kinetic energy, the maximum speed
that can be used is the speed of light. Even though the maximum kinetic energy value of a particle is twice
its equivalent mass energy, it represents something completely different, and the two energy types cannot be
mixed. When referring to a particle being accelerated through space, it must be made clear that the W2
wave is not being affected in how it creates the particle. It is this secondary momentum (kinetic energy)
when applied to matter particles that is the basis of all activities the average person deals with daily.
Only the head-on collision between two waves of two different particles can generate the mutual
destruction of those particles, and at the point of destruction the waves must alter their directional
characteristics. Only this has any bearing on the idea of rest mass to energy conversion. Any other type of
energy transfer between particles is due to normal sub-light-speed momentum transfers, just like sideswiping collisions between automobiles. Because the waves that create particles never alter their natural
velocity, they must take on different forms of expression after a destructive collision.
The amount of mass that a particle contains, namely that indicated by Einstein’s E=mc², is referred
to as its rest mass. Only rest mass has any relation to energy conversion in the particle realm. The mass of a
particle never truly rests, so any statement about rest mass is a little misleading, and a cause of much of the
confusion surrounding the ultrawave idea. Since ultrawaves are always rotating in two directions, mass can
never be considered as being stationary. It is this simple misunderstanding about mass that has kept mass
and energy separated for the last century, when in fact they are just two characterizations of wave motion.
When forming a particle, the W2 wave spins about the surface of the W1 wave and makes a disc
shape as depicted in Figures 1.1 and 1.2. This disc, which is traveling at the speed of light, then forms
another circle. Any circle that rotates around a central point forms what is known as a ring torus, or more
commonly, a donut. The torus created by the W waves essentially rotates at the speed of light, but we never
notice it because it remains stationary relative to other matter and/or space. The only exceptions to this
stationary rotation are for motions due to charge, magnetism, temperature, or kinetic energy transfer.
As Einstein proved, you cannot accelerate something that is already traveling at c to a secondary
velocity of c, even though he did not realize that was what he was doing at the time. Because ultrawave-created particles have a component that is already traveling at the c velocity, it would have to overlap itself
to be accelerated to a secondary velocity of c, and that is not possible. It is the relativity between moving
objects that is important here, since all motion must be measured relative to something else. In this case, if
no other matter or spacetime existed then its velocity would be irrelevant. Motion with respect to another
object, as well as the constant velocity of c measured the same by all non-accelerating reference frames, are
essential points of Einstein’s special relativity theory and one of the main reasons for his speed limit of c.
None of these restrictions affect the ultrawaves themselves, only the matter that is produced by them.
The constant motion of matter creating waves answers many mysteries. Heisenberg’s uncertainty
principle, for example, states that the location, direction of motion, and speed of a particle are not
simultaneously determinable. This makes particles appear to violate all known aspects of classical physics.
The uncertainty issue is resolved by understanding that a matter particle’s natural rotation at the velocity of
light can explain its behavior of having sudden motions in apparently random directions that appear to be
instantaneous, but are really only temporary directional changes. There is also the possibility of motion at
the W2 wave speed during destructive collisions, which would definitely seem instantaneous.
The question now is, “How do you make matter from ultrawaves?” The smallest particle of matter is
the electron. Its mass value has been experimentally measured to be 9.10938291E-31 kilograms, with an
uncertainty of plus or minus forty in the last two digits. Because mass is an experimentally measured value,
we can use it as a standard to test the accuracy of any particle theory, which we will be doing shortly. Most
of the calculations mentioned in this chapter were first applied to the electron, as it has the most easily
obtained information about its various properties. The calculations were then applied to heavier particles,
and it was found that they work exactly the same way to provide values for those same properties.
Because of the immense difference between the two wave speeds involved in creating particles, the
disparity between the cross section of the particle torus and its overall diameter is great. The smallest disparity between these two sizes in the particle realm just happens to belong to the electron, which is
approximately 1.2955E-21 of a meter for the spin radius versus 1.2955E-14 of a meter for the overall torus
radius. To give you a better idea of the magnitude of this difference, if the cross section of the torus of an
electron were the size of a dime, then the overall diameter would be almost 110.5 miles. So creating an
electron can be likened to placing a string of dimes end to end, forming an enormous circle. Another point
to keep in mind is that the thickness of the dime is also related to its diameter in this same manner;
therefore, if the dime is .7 inches in diameter, then its thickness would be only .00000007 inches.
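As a quick arithmetic check of the analogy, a short Python sketch using only the 1E-7 ratio quoted above and ordinary unit conversions reproduces both figures:

    # Dime analogy: the electron's cross-section radius (1.2955E-21 m) and overall
    # torus radius (1.2955E-14 m) quoted above differ by a factor of ten million.
    spin_radius = 1.2955e-21       # meters
    torus_radius = 1.2955e-14      # meters
    ratio = torus_radius / spin_radius             # ~1.0E+7

    dime_diameter_in = 0.7                         # inches, standing in for the cross section
    overall_diameter_miles = dime_diameter_in * ratio / (12 * 5280)
    dime_thickness_in = dime_diameter_in / ratio   # thickness scaled by the same factor

    print(f"overall diameter ~ {overall_diameter_miles:.1f} miles")   # ~110.5
    print(f"dime thickness   ~ {dime_thickness_in:.8f} inches")       # ~0.00000007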
Representing the configuration of what will be called a W4 wave, Figure 3.1 is a solid torus shape,
with only a small portion of it shown as a screw-like structure, the same as in Figure 1.1. The name given to
the disc structure created by combining a W1 and W2 wave is a W3 wave. After this wave rotates into a
torus shape, it is referred to as a W4; all spin-1/2 particles are W4 waves. Although the scale is completely
unrealistic and the whole circumference would look the same, Figure 3.1 does give a sense of what the main
body of a spin-1/2 particle looks like.
The single-wavelength disc of the W3 grows in length at the speed of light, while the much faster
propagation rate of the W2 allows it to rotate about the surface of the W1 wave. If the time required for the
W1 wave to travel one wavelength is equivalent numerically to that of the particle mass of the electron, then
the W1 spin increment is equivalent to the electron mass number (9.10938291E-31). In effect, the number
can have units of either seconds, if we are discussing the W wave structure, or kilograms, if we are talking
about mass content. If the W2 wave travels at 8.936E+16 m/s in the same time that the W1 wave travels at the speed of light (3E+8 m/s), it produces a radius for the electron equivalent to the speed Cc multiplied by the mass/spin increment number and divided by 2π. This simple relationship was the one used to see if any of the
measured values of any of the three matter particles fit the calculated values. It was with some surprise that
several of the values were related in exactly this manner—namely the ones that were important to
discovering just how the particles were constructed, as well as their sizes.
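For readers who want to verify the relationship just described, a minimal Python sketch using only the two constants quoted in this chapter (the W2 speed Cc and the electron mass/spin increment number) reproduces the torus radius; small differences come from rounding of those constants:

    import math

    Cc = 8.936e16                    # W2 wave speed quoted in the text, m/s
    m_increment = 9.10938291e-31     # electron mass number, read here as seconds

    # Radius equals Cc multiplied by the mass/spin increment number, divided by 2*pi
    R_torus = Cc * m_increment / (2 * math.pi)
    print(f"electron torus radius ~ {R_torus:.4e} m")   # ~1.2955E-14 m, as quoted above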
Figure 3.1
W3 waves propagate in circles to create tori that are seen as particles, but why is that? W3 waves
should travel along in straight lines at the speed of light, since the corkscrew of the W3 wave advances
linearly at that speed, so what makes the wave curve into a torus shape? So far, all the evidence we have is
that the charge must be calculated with the large torus radius. If there were an angle or offset associated
with the cross section wavelength that could be calculated and shown as a step value for creating the overall
hoop diameter then that would produce a specific diameter for each particle. The calculation for determining
the angle or offset would require a specific size for the W2 wave. The value could be determined by
assuming the hypothesis is correct and working backward from there, but then it becomes just an exercise
and not a proof. Unfortunately, we do not have access to such small dimensions and cannot make a
determination at this time.
[The following descriptions are meant to represent a simplification of the mathematics involved in
determining particle characteristics. If you are interested in the math then you should use Chapter 12 as a
detailed guide to go along with these descriptions to make the most of them. If you are not interested in the
math then some of the statements below may not be clear, but this seemed to be the best way to present the
information.]
The magnetic moment and spin rely only on the W3 cross section, but the charge depends on the
overall particle size; these are different aspects of the torus curvature. It could well be that the interaction
of the magnetic and electrostatic portions of the particle construction are what produces the curvature, but
other evidence seems to disagree with that picture. When the construction of neutral particles is examined, it
is clear that the interaction of the two wave types seems unrelated to the curvature that creates the torus. The
real culprit is likely to be a string/brane interaction in the 2D world that we cannot directly observe.
Because we want to give a physical description to all of the electron’s characteristics, we must make
a case for how the charge is created. The full explanation will be given in Chapter 5, but for now the quick
explanation is that the W2 wave type responsible for charge is oriented so that it forms a circle that is
perpendicular to the torus of the particle. If we return to the torus and imagine that it is as much like a screw
as it is a series of discs, we can see that as it rotates at the velocity c, whatever is attached to it will be
pulled along with it. In this case it is the perpendicular charge wave, and so the resulting shape of the charge
becomes a sphere, which appears wrapped around the extremely skinny donut of the mass torus.
If the electron’s charge is created by a W2 wave identical to the one that creates the torus of Figure
3.1, there exists a parallel with how calculations are made with macroscopic objects. The radius of the
electron calculated earlier should work as the radius in the equations that determine charge in a rotating
loop, and they do. When the rotation rate is used to determine the current, it is the same as the specific
charge, which for the electron is 1.75882E+11 Coulombs per second. The specific charge normally has units
of Coulombs per kilogram, but here we are using the time unit to evaluate the wave properties. Using the
statement that mass and time are related, as indicated previously, current can be considered as a reasonable
substitute for specific charge.
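The 1.75882E+11 figure quoted above is numerically the conventional specific charge of the electron, so it can be checked directly from the elementary charge and the electron mass; the text simply prefers to read the unit as coulombs per second rather than per kilogram:

    e = 1.602176565e-19       # elementary charge, coulombs
    m_e = 9.10938291e-31      # electron mass, kilograms

    specific_charge = e / m_e    # conventionally C/kg; read in the text as C/s
    print(f"electron specific charge ~ {specific_charge:.5e}")   # ~1.75882E+11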
Treating the electron in the same fashion as any other macroscopic charged loop is an important
proposal. We can now show that Coulomb’s force is so great for the simple reason that the waves are
progressing along the much larger overall particle radius at an 8.936E+16 m/s velocity. Without knowledge of the size of the electron or the W2 wave speed, the generation of Coulomb's force has remained a mystery to particle physicists.
The value 1.6021765371E-19 joules is called the electronvolt. Its symbol is eV, and it is the energy gained by an electron accelerated through a one volt potential. The same number, in coulombs, is the elementary charge for both positive and negative particles, and its symbol is merely an italicized e. Because particles have a
physical size, one would think that each should have its own value equivalent to e. As it turns out, e is
associated with electrostatic and magnetic effects in the same way for all spin-1/2 particles, and it can be
used in the same manner in equations that describe the electrical and magnetic properties for every spin-1/2
particle. Physical size plays a role in defining particle attributes and demonstrates why ultrawave equations
produce exact results for spin-1/2 particles. The electronvolt is just one example of particle consistency.
The W4 torus gives the first look at the structure of particles and the reason for their spin-1/2
behavior. Anyone familiar with momentum creation knows that a torus can have a momentum generated by
the spin of the cross section, as well as the rotation of the torus as a whole. Particle spin, represented by the
Greek letter rho (), is the same for all matter particles and has to be calculated using the shape of a torus to
give the correct value. The equation for spin-1/2 particle momentum is simply one-half the mass times the
velocity times the radius. It is equivalent to what is seen when calculating the angular momentum of an
object that spins about its own axis, as does a torus, and Cc is the velocity value that must be used. The only
close physical approximation that can be given for picturing spin rotation is that of an air bubble ring in
water as it flows toward the surface. One difference between this example and a particle torus is that the
particle torus rotates either right or left as the spin takes place.
To clarify, it is not the rotation of the W4 wave creating the spin, but the rotation of the cross section
itself that is creating the indicated spin. For any spin-1/2 particle, the value of ρ is 5.272875798E-35 kilogram meters squared per second (kg·m²/s). The radius associated with this cross section is the constant
labeled as the “natural unit of time” (Tn) in Table 1 of the appendix. It is necessary to substitute Cc for c in
the equation because we are dealing with a radial wave velocity and not a linear particle velocity. Only
equations that deal with momentum require using mass instead of time for the electron spin increment
number 9.10938291E-31, so for spin angular momentum we are dealing with the W2 wave momentum and
must use mass.
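A short numerical check of the spin value quoted above, using the one-half mass times velocity times radius form stated in this chapter, with Cc as the velocity and the spin radius given earlier:

    m_e = 9.10938291e-31      # electron mass/spin increment number, kg
    Cc = 8.936e16             # W2 wave speed, m/s
    r_spin = 1.2955e-21       # spin (cross-section) radius, m

    rho = 0.5 * m_e * Cc * r_spin    # one-half the mass times the velocity times the radius
    print(f"spin angular momentum ~ {rho:.4e} kg*m^2/s")   # ~5.27E-35, matching the quoted value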
Many of you who are not intimately familiar with the constants and math used here may be
scratching your heads and saying, “I don’t understand how he can switch from using units of mass to units
of time, and back again?” Actually, what is being done is to separate two completely different types of
functions, matter creation and energy creation. A lot of it has to do with the fact that particles are physical
objects and not the elusive spheres of influence that have been presented to us by the SM. A particle’s spin
increment being equal to its mass number shows a fundamental misunderstanding that we have about the
nature of time. In the end, ultrawave theory will indeed have made some changes to currently accepted
units for particles and atomic nuclei in relation to this time issue. This will not affect macroscopic units used
in everyday engineering, but instead will be a redefinition of those used in the subatomic realm. The
resultant changes are presented in Chapter 10. If this were not done, only the currently used ideas
concerning particles, which in many cases make absolutely no sense at all, would have to remain in effect.
Defying the simplicity of common sense goes against our very nature and is probably the source of
much of the distrust of science by the average person who is not well versed in either the strange ideas or
the complex math needed to make the current system work. Ultrawaves bring back a common sense view to
the atomic world, as well as simplifying our understanding of matter. The mathematics is also greatly
simplified. The very fact that UT uses a common sense approach should be considered another indicator of its correctness.
Returning to particle definition, other attributes of spin-1/2 particles that can be calculated besides
spin angular momentum are magnetic moment and a particular magneton value that is associated with each
mass value. Knowing that more than one radius is important in understanding particle construction, it is
possible to calculate the electron magneton, normally called the Bohr magneton (μB), and the electron magnetic moment (μe). An italicized Greek letter mu (μ) is used for both magnetons and magnetic moments,
since the magneton is the expected moment and the other is the actual measured moment. The formulas for
calculating the two values are then identical, and are taken from the equations describing electromagnetic
rings. Again, it is necessary to substitute Cc for c in the equations since we are dealing with ultrawaves.
The equation that determines the radius using the Bohr magneton gives the result for the radius as
1.288E-21 meters, but only if we use a velocity value equal to the square of the speed of light (SI units only).
Without truncating decimal places, this is the exact value of the natural unit of time constant (Tn). Since the
actual velocity is slightly off from this number, the true value of the spin radius is 1.2955E-21m. The
magneton, which is supposed to be a unit of magnetic force, appears to be directly associated with the spin
motion of the W2 wave as it rotates about the W3 disc, and that means that the W2 wave has a connection
to magnetism, as well as to spinning mass.
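The text does not write out the magneton equation explicitly, so the following is only a sketch of one ring-current style relation, mu = e*v*r/2, that does reproduce both radii quoted above when solved for r:

    mu_B = 9.27400968e-24     # Bohr magneton, J/T
    e = 1.602176565e-19       # elementary charge, C

    for label, v in [("v = c squared (SI numeric value)", 2.99792458e8 ** 2),
                     ("v = Cc", 8.936e16)]:
        r = 2 * mu_B / (e * v)    # solve mu = e*v*r/2 for the radius
        print(f"{label}: r ~ {r:.4e} m")
    # ~1.288E-21 m with c squared, ~1.2955E-21 m with Cc, the two values given above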
Under UT principles, magnetic moments can be interpreted as the ability of the particle torus to be
twisted when exposed to an external magnetic field. The electron has a larger torus cross section than a
proton, whose size will be examined shortly; therefore a magnetic field will cause an electron to turn more
easily than any other particle. This is somewhat comparable to centrifugal or centripetal force in the
comparison between two gyroscopes of different sizes and masses, except that mass has no bearing on
magnetic moment values, only spin size does. This behavior manifests itself as a trend toward decreasing
magnetic moments for progressively heavier and intrinsically smaller cross-sectioned particles, but there
are still large swings in the curve, which will be explained in a later chapter.
When electrons are measured, they have an actual magnetic moment value μe that is slightly greater
than the idealized magneton value. The radius producing this value must also be slightly greater. The ratio
of the magnetic moment to the magneton is given the name magnetic moment anomaly. As it
happens, the small band of 1.502E-24 of a meter in width between the magnetic moment radius and the
magneton (spin) radius for the electron, as calculated by UT, represents the discrepancy between the perfect
electron spin and what is actually measured. When the ratio of magnetic moment to magneton for the proton
is calculated, the ratio is much greater, but the difference in radius between them is only 1.265E-24 of a
meter. If you compare the radii differential between the magneton and magnetic moment of the electron
with the radii differential of the proton, they are actually quite close in size.
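As a sketch of where the two band widths come from, assume the proton's ideal spin radius is the electron's divided by the mass ratio (as the scaling comparison later in the chapter indicates) and that each band is the spin radius multiplied by the excess of the anomaly over one; both quoted widths then follow:

    r_e_spin = 1.2955e-21           # electron spin radius, m (from earlier in the chapter)
    mass_ratio = 1836.15            # proton-to-electron mass ratio
    r_p_spin = r_e_spin / mass_ratio    # assumed ideal proton spin radius

    anomaly_e = 1.00115965218111    # electron magnetic moment anomaly (quoted below)
    anomaly_p = 2.792847356         # proton magnetic moment anomaly (quoted below)

    band_e = r_e_spin * (anomaly_e - 1)    # ~1.502E-24 m
    band_p = r_p_spin * (anomaly_p - 1)    # ~1.265E-24 m
    print(f"electron band ~ {band_e:.3e} m, proton band ~ {band_p:.3e} m")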
The Standard Model view is quite different for magnetic moment anomalies, as it supplies no
physical extent to the electron, only a fuzzy one for the proton, and sees only the anomaly ratios. For the
electron the anomaly is 1.00115965218111, and for the proton it is 2.792847356. No wonder physicists
expect the two particles to be so different. When the size differences and magnetic moment differences for
all particles are compared, the actual radius values are always much different from the anomaly values and
the two cannot be compared on an equal footing.
It is not yet clear exactly what is responsible for producing magnetic force other than its association
with spin and the alignment of the plane of the particle torus to magnetic field lines. The existence of a third
wave might be the answer, bringing the total number of waves linked together to three. The fact that we live
in a three-dimensional universe may suggest that three waves are needed, one for each dimension, although that is certainly not a strict requirement. Later in the book, there will be speculation on how
magnetism is created and why it is related to electricity. For now, we need merely look at magnetic
moments as the force needed to move a particle's torus in line with magnetic field lines, with more lines producing more units of force.
Returning to particle sizing, if the same analogy is used that shows that an electron with a cross
section equivalent to the size of a dime (.7 inches) would have an overall diameter equal to approximately
110.5 miles, a similar comparison shows that an ideally formed proton of the same scale has a cross section
of .00038 of an inch and an overall diameter of about 202,841 miles. That's equivalent to an orbit whose radius reaches nearly halfway to the Moon, compared to something so small you can't even see it. These values indicate a direct
physical relationship the same as the ratio of their masses, or that the diameter of the proton is
approximately 1836 times larger than the electron’s, while its cross section is 1836 times smaller than the
electron’s. The true size of the proton calculated from the magnetic moment anomaly makes them only 658
times larger and smaller respectively, or about a factor of three less.
Particle charge is connected to overall torus size, but magnetic moments are connected only to
particle cross sections. The ratio of the two radii that define these two sizes is identified by the term r*. Two
forms of the ratio exist; rix* for the ideal particle size and rx* for the actual size, with x being replaced with
any spin-1/2 particle’s identifier. The difference between ideal and actual will be explained in a later
chapter; for now you need only know that it relates to magnetic moments.
When comparing rip* and rp* for the proton with rie* and re* for the electron, the differences are
substantial. As of now, most scientists believe that the larger the mass of a particle, the smaller it will be
because of the reduction in its interaction cross section. In actuality, the torus shape is smaller in the cross
section, but by the same token the overall torus diameter is larger. Why shouldn’t the idea that more mass
means a larger diameter also apply to the microscopic world the same as in the macroscopic world? UT
calculations show that when taking into account actual particle size based on magnetic moment, heavier
particles are usually larger, even though there are steps along the way where some smaller particles are
larger than heavier ones.
Although the SM does not care about mass in determining particle characteristics, when looking at
the situation from an ultrawave perspective neutrons should be similar to protons since they are so close in
mass. Protons and electrons are completely stable, showing no signs of degrading into other particles or
energy over any time period. Neutrons, on the other hand, have a half-life of approximately fourteen
minutes, forty-nine seconds. Fifteen minutes is an eternity when compared to the life of other particles
created in accelerators over the last several decades, but it is extremely short when compared to the electron
and proton. If it is proven that the neutron is composed similarly to the proton and electron and its instability
can be explained, then according to quantum mechanics all the spin-1/2 particles that make up our universe
have been described. Assuming, of course, that we ignore all mention of quarks.
The neutron is only slightly larger than the proton, its mass being 1.0013784187 times that of
the proton, but its magnetic moment anomaly is only 1.91567969, which is considerably smaller than the
proton’s 2.792847356. Actually, it means that the neutron is a little closer to a perfect torus shape than the
proton is, which would lead one to think that it should be more stable, but alas it is not. The following table
shows both the ideal and actual r* values for all three matter particles.
Particle    rix*               rx*
Electron    1.000000000E+07    9.976847238E+06
Proton      3.371456634E+13    4.322382321E+12
Neutron     3.380757606E+13    9.212304661E+12
Could the answer to the neutron's instability lie with its rn* value? Possibly; after all, rin* is almost 3.67 times larger than rn*, but the proton's rip* value is 7.8 times its rp* value, so it is doubtful that this
value has anything to do with it. Alternatively, its instability may be due to the fact that it can break down
into three completely stable components, the electron, proton, and antineutrino. It is too early to tell if this is
the reason, for it probably depends on the structure of the W waves, which cannot be easily defined. Yet
another possibility is that the hidden charge is responsible for the instability. Neutral particles have charge
waves that are shielded, and that construction has features that cannot be determined, since it has no
externally measurable components. Even though the reason for the neutron’s instability cannot be
determined at present, at least there are several good possibilities to consider without resorting to non-physical explanations, such as any of those of the SM.
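The 3.67 and 7.8 figures cited above follow directly from the table; a one-line check per particle:

    r_values = {   # ideal (ri*) and actual (r*) values from the table above
        "electron": (1.000000000e7, 9.976847238e6),
        "proton":   (3.371456634e13, 4.322382321e12),
        "neutron":  (3.380757606e13, 9.212304661e12),
    }
    for name, (ri, ra) in r_values.items():
        print(f"{name}: ri*/r* ~ {ri / ra:.2f}")
    # electron ~1.00, proton ~7.80, neutron ~3.67, as discussed above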
Standard Model physics professes a big difference between the electron and proton, such as the
positive versus negative charge, a much larger proton mass with a different mass concentration, plus a
completely different construction. With UT, the exact same equations that work for the electron also work
for the proton and neutron. Building particles from ultrawaves is such a simple exercise that even high
school students can be taught how to do it. Because all of the UT calculated values match the accepted ones
exactly for any spin-1/2 particle you can name, it is next to impossible to ignore ultrawave theory. No
coincidence imaginable should be able to cause this perfect alignment with measured values and varying
styles of equations while still being able to define all of the different particle types in exactly the same
manner. UT shows conclusively that all matter is created equal.
Only three things were changed from current theory to allow UT to work: the substitution of one
type of energy for another, giving the mass number a time unit to show it as a spin increment when dealing
with the physical size of particles, and finally, assuming that all spin-1/2 particles must be built the same
way. You could say it was four things, since 2D ultrawaves had to be created to make all of this possible,
but the reverse would also be true; the introduction of 2D ultrawaves allowed the other three items to become possible. While the information presented in this chapter should be enough to convince anyone that
there is something to the ultrawave idea, there is still a lot more evidence to come.
Chapter 4
Particle Mass, Planck’s Constant, and the Neutrino
“Both matter and radiation possesses remarkable duality of character, as they
sometimes exhibit the properties of waves, at other times those of particles. Now it is
obvious that a thing cannot be a form of wave motion and composed of particles at
the same time - the two concepts are too different.” Werner Heisenberg
It is generally accepted by quantum theorists that Heisenberg was wrong when he made this statement. They
believe that particles are both waves and solid particles simultaneously. Little did Heisenberg know that he
was correct in that matter and radiation are actually only made of waves. It is likely, though, that he
thought that they were only particles that somehow had a wavelike character. The one thing that neither he
nor the quantum theorists to follow would have expected to learn was that these waves are also inherently
mass carrying entities.
How is mass determined? First, let’s look at how it is done in the Standard Model. This should give
you a feel for how complicated it is by the current definitions. The following quote came directly from
Wikipedia, which contains somewhat of a layman’s guide to the physical sciences. Looking up the subject
yourself will allow you to follow the website’s links for further comprehension. All you have to do is go to
www.wikipedia.com and type in Higgs boson.
“The Higgs boson is a hypothetical massive scalar elementary particle predicted to exist by the Standard
Model of particle physics. It is the only Standard Model particle not yet observed, but plays a key role in
explaining the origins of the mass of other elementary particles, in particular the difference between the
massless photon and the very heavy W and Z bosons. Elementary particle masses, and the differences
between electromagnetism (caused by the photon) and the weak force (caused by the W and Z bosons),
are critical to many aspects of the structure of microscopic (and hence macroscopic) matter; thus if it
exists, the Higgs boson has an enormous effect on the world around us.
As of 2006, no experiment has directly detected the existence of the Higgs boson, but there is some
indirect evidence for it. The Higgs boson was first theorized in 1964 by the British physicist Peter Higgs,
working from the ideas of Philip Anderson, and independently by others.
The particle called (the) Higgs boson is in fact the quantum of one of the components of a Higgs field. In
empty space, the Higgs field acquires a non-zero value, which permeates every place in the universe at all
times. This vacuum expectation value (VEV) of the Higgs field is constant and equal to 246 GeV. The
existence of this non-zero VEV plays a fundamental role: it gives mass to every elementary particle,
including to the Higgs boson itself. In particular, the acquisition of a non-zero VEV spontaneously breaks
the electroweak gauge symmetry, a phenomenon known as the Higgs mechanism. This is the only known
mechanism capable of giving mass to the gauge bosons that is also compatible with gauge theories.
In the Standard Model, the Higgs field consists of two neutral and two charged component fields. One of
the neutral and the charged component fields are Goldstone bosons, which are massless and unphysical.
They become the longitudinal third-polarization components of the massive W and Z bosons. The quantum
of the remaining neutral component corresponds to the massive Higgs boson. Since the Higgs field is a
scalar field, the Higgs boson has spin zero. This means that this particle has no intrinsic angular
momentum and that a collection of Higgs bosons satisfies the Bose-Einstein statistics.
The Standard Model does not predict the value of the Higgs boson mass. It has been argued that if the
mass of the Higgs boson lies between about 130 and 190 GeV, then the Standard Model can be valid at
energy scales all the way up to the Planck scale (10^16 TeV). However, most theorists expect new physics
beyond the Standard Model to emerge at the TeV-scale, based on some unsatisfactory properties of the
Standard Model. The highest possible mass scale allowed for the Higgs boson (or some other electroweak
symmetry breaking mechanism) is around one TeV; beyond this point, the Standard Model becomes
inconsistent without such a mechanism because unitarity is violated in certain scattering processes. Many
models of Supersymmetry predict that the lightest Higgs boson (of several) will have a mass only slightly
above the current experimental limits, at around 120 GeV or less.”
Does that sound simple by any stretch of the imagination? Certainly not; and why should it, with
multiple fields and symmetry breaking, as well as the many different bosons involved? No wonder it is so
difficult for scientists to explain mass to the average person; mass must be created with several layers of
ideas that are completely removed from our everyday experiences. It seems so simple to just look at
something and say to yourself, “it has substance and substance equals mass.” Guess what, that is no less
correct than any portion of the Standard Model view.
When manipulating the particle constants using ultrawave principles, one thing stands out; all
particles have the same surface area. It makes sense, because the ideal sizes of particle components are in
direct relation to their mass ratios. The value of every spin-1/2 particle’s surface area is 6.626E-34 square
meters. The Standard Model calls this number Planck’s constant, h, and gives it units of joule-seconds, as it
is associated with energy in the Standard Model. It was derived by Planck to explain the black body
radiation that is produced when objects are removed from external light sources and then give up energy as
infrared radiation. The units for this radiation must match those of Einstein’s equation E=mc2. From a UT
perspective, adding energy in Planck constant units is equivalent to adding surface area units to any
particle’s torus, especially the electron’s torus. We must give this number a different designation AT, for
particle surface area, since it is only numerically equal to Planck’s constant.
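Assuming the standard surface area of a ring torus, A = 4*pi^2*R*r (an assumption, since the text does not state the formula), the two electron radii from Chapter 3 do reproduce the Planck-constant number:

    import math

    R = 1.2955e-14    # electron overall torus radius, m (Chapter 3)
    r = 1.2955e-21    # electron spin (cross-section) radius, m (Chapter 3)

    A = 4 * math.pi ** 2 * R * r    # surface area of a ring torus
    print(f"electron torus surface area ~ {A:.4e} m^2")   # ~6.626E-34, numerically Planck's constant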
It is easy to explain how Planck units of “energy” are added to electrons. Calculations show that one
Planck unit is equal to the surface area of one electron. Since the addition or subtraction is measured in half-Planck units, it is only a half-Planck unit that is added to the radius of the particle. This is the same as
adding another whole unit to the diameter of the particle. The small r radius has to shrink to compensate for
the increase of the R radius, because they must always produce a balanced surface area. What this addition
implies is a growth in the circumference of the electron. Since the energy equivalent of h is equated only to
that of the electron, or at least a multiple of it, it is proof that only the electron is involved in black body
radiation. Other components of atoms involved in radiation would give energy values that are much higher
than the electron’s energy equivalence and should be easily identifiable.
It is assumed that both the overall torus radius, R, and the small cross-section radius, r, change size,
but that has not been proven. It could well be that the spin of the electron increases, so that the r radius
increases but not the R radius. This kind of change should be identifiable with the right experiment. Also, if
the R radius changed but not the r radius then that would be just as identifiable, as each would present
differing values with the addition of each photon.
When energy is dealt with in units of h, which can also be considered units of area AT, the equivalence
leads to a conclusion for how mass is created. The idea is that the W2 wave resembles the monolithic
(cosmic) string of superstring theory and the explanation for mass depends on that possibility. Because the
W2 wave creates the cross section of the torus while remaining within just the two-dimensional edge, it is
clear that the length of the W2 wave changes to fit the surface area. It can then be shown that the amount of
wave (i.e. length) contained within the particle torus is what determines mass. What could be simpler?
Returning to the dime analogy given for the cross section of the electron, the mass would only be
contained within the ridges at the edge of the dime. The remainder of the dime can then be discarded, and
the combination of spinning and moving forward to create a torus shape leaves us with what amounts to a
coil with a miniscule cross section, but of enormous diameter. If you were to continuously wrap a small
diameter wire around the cross section of a hula hoop as tightly as you could until it had completely covered
the surface, then could somehow extract the hoop, it would closely resemble a spin-1/2 particle torus.
The small torus cross section of the electron means that the frequency increment is so small that the
length of the W2 wave within the hoop cross section is almost exactly equal to its circumference. The total
length of W2 wave containing the mass can be determined mathematically by determining how long the
wave takes to make one rotation (2πR/c) and multiplying that by the velocity of the W2 wave (Cc).
Performing the calculations gives a length for the W2 wave of an electron of 2.43194E-05 meters. Every
spin-1/2 particle is calculated in the same manner, where R is the idealized radius of whatever particle is
being examined. Once you have the total length, you need only divide the particle mass by its W2 length to
get the mass per unit length. This turns out to be 3.745725E-26 of a kilogram per meter, which is good for
any type or configuration of W2 wave.
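The same procedure can be followed numerically with the constants quoted in the text; the small differences from the quoted 2.43194E-05 m and 3.745725E-26 kg/m are attributable to rounding in those constants:

    import math

    R = 1.2955e-14          # electron torus radius, m
    c = 2.99792458e8        # speed of light, m/s
    Cc = 8.936e16           # W2 wave speed, m/s
    m_e = 9.10938291e-31    # electron mass, kg

    t_rotation = 2 * math.pi * R / c    # time for one rotation of the torus
    L_w2 = t_rotation * Cc              # total W2 wave length inside the electron
    mass_per_length = m_e / L_w2

    print(f"W2 length ~ {L_w2:.4e} m")                            # ~2.43E-05 m
    print(f"mass per unit length ~ {mass_per_length:.4e} kg/m")   # ~3.75E-26 kg/m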
The above scenario appears to be how the mass of a particle is created, but it is not yet a definitive
explanation. A definitive explanation would require the determination of what mass really is; whether it is
related to the mere existence of the ultrawaves, their movement within our 3D space, their relationship
with time, or something completely unexpected. However, the above method of mass creation conforms to
the characteristics of other string waves, such as cosmic strings, so in that respect it is definitive. From this
statement a question arises about string waves in general; if particles other than matter creating ones are
made of string waves then do they have mass? The answer is yes, they all do.
One of the best examples of a supposedly massless particle that actually has mass is
the neutrino. Wolfgang Pauli dreamed up the neutrino in 1930 when it was noticed that some energy was
missing in certain atomic decay processes. He proposed that it must be a neutral particle, since it did not
seem to interact with other matter particles. The easiest decay process to examine that produces an
antineutrino—which is just like a neutrino except that it spins in the opposite direction—is that of the
neutron. When a neutron decays, it produces a proton, an electron and an antineutrino. The antineutrino
should contain the remainder of the W2 wave not accounted for by the proton and electron. This amount of
leftover W2 wave has a mass value of 1.39464E-30 kg. It is enough to produce about one and a half electrons, making the neutrino or antineutrino relatively lightweight, and the only particle between the
electron and the muon. It was long suggested that neutrinos had mass, but there was never any firm proof.
Experiments based on speed of light motion showed that it must be exceedingly small and far from the
value just mentioned.
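A quick check of the leftover figure from standard particle masses shows where the 1.39464E-30 kg value comes from, and that it amounts to roughly one and a half electron masses:

    m_neutron = 1.674927351e-27     # kg, standard value
    m_proton = 1.672621777e-27      # kg, standard value
    m_electron = 9.10938291e-31     # kg, standard value

    leftover = m_neutron - m_proton - m_electron
    print(f"leftover mass ~ {leftover:.5e} kg")                   # ~1.3946E-30 kg, as quoted
    print(f"in electron masses ~ {leftover / m_electron:.2f}")    # ~1.53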
From a UT perspective, the problem with mass determination does not lie with the neutrino or
antineutrino but with the interpretation and understanding of mass versus particle motion. The only reason
scientists think that neutrinos don’t have mass is because they travel at the speed of light, which according
to Einstein’s relativity, precludes particles from having mass. We have discovered that this is a
misunderstanding, and that there is a difference between particle kinetic energy induced momentum and W2
wave velocity momentum. All particles travel at light speed; some do it in circles and appear massive, while
others do it in a straight line and don’t; we must acknowledge that the type of motion is the only difference.
When neutrinos and antineutrinos are created, they always travel at about the speed of light through
space as opposed to being stationary. What this means is that they are not particles in the same sense as the
proton, electron and neutron. The mass of a neutrino cannot be contained within a torus like the normal
matter particles we have looked at so far. The mass difference between the neutron and the sum of the
electron and proton must be released in some other form or configuration to create the antineutrino and
allow it to travel in a straight line at light speed.
There are actually three different types of neutrinos; the electron neutrino, the muon neutrino, and
the tau neutrino. [Details about the muon and tau particles will be given in a later chapter.] Which one
manifests itself in particle collisions depends on the amount of energy present in each type of decay process
that produces them, which translates to mass content. All three neutrino types must be created in a similar
manner, and it is likely that similar decay processes are involved. Recent experiments suggest that neutrinos
can change from one type into another, which is impossible from an ultrawave perspective, as the amount of
energy in the form of mass would have to change.
One particle feature that may determine the neutrino’s structure is momentum. Do neutrinos impart
momentum when they interact with other particles at rest? QT says that they do by increasing the mass of
atomic nuclei with which they are interacting. There is even experimental evidence proving it. Even to those unfamiliar with the subject, this would certainly suggest that neutrinos or other
supposedly massless particles have mass. It is especially pertinent when viewed from the UT perspective
that W2 waves follow classical principles and hence would always contain a certain mass, even if it doesn’t
show that mass when in motion. It is much easier to believe that mass can appear in different forms rather
than believing that energy magically changes into mass.
We know the electron neutrino or antineutrino’s rest mass, so the neutrino should impart momentum
that is dependent on this mass. If it imparted the momentum as if it were a linearly traveling spherical particle,
the maximum transferable momentum would be 4.181E-22 kilogram meters per second (kg·m/s).
Conversely, if the neutrino contains the W2 wave in the same W3 configuration as other particles, then it is
an angular momentum produced by the Cc wave speed. If indeed the angular momentum is created just like
spin-1/2 particles, it would have the same 5.272858E-35 kg·m²/s spin value. In this case, the radius required
to produce the theoretical magneton value associated with the supposed neutrino equivalent mass is
8.452264E-22 m. Experimental data may already exist proving whether either of these proposals is correct.
Linear momentum depends merely on the amount of mass present; therefore rotational changes in
the configuration of a neutrino would not affect its momentum transfer when applied linearly. If however,
the spin is created the same as other spin-1/2 particles, but with a different or varying radius of gyration,
then the angular momentum will increase or decrease since it depends on the equation ρ = 1/2mvr, where
momentum increases as r increases, and decreases as r decreases. From this perspective, an electron
neutrino could simulate a muon or tau neutrino with a change in r, as recent experiments suggest, providing
a physical momentum transfer only. However, it would still contain the same amount of W2 wave and could
only add a fixed mass during constructive/destructive interactions.
A key to determining neutrino construction is being able to determine the energy needed to produce
appropriate particle sizes through neutrino addition. There are either three different amounts of energy
detectable corresponding to the three neutrino types, or a whole range of values, which would indicate that
the entire picture of neutrino production is wrong. Atoms that release a neutrino to form a different isotope
will also do the same thing in reverse, which is how the elements used in neutrino detectors were chosen. Determination of the target atom's construction can be used, along with an exact mass change
between the atomic isotopes produced, to determine the physical size of the neutrino’s spin, which will give
us a good idea of how it is constructed. Most likely, the current view that only three neutrino sizes exist is
accurate.
The only physical difference between neutrinos and matter creating particles may be the torus shape
of the fermions (fermion is another name for a spin-1/2 matter creating particle). If the spin cross section
remains the same for a particular mass neutrino as it does for a fermion’s, but the W2 wave does not form a
torus for some reason, it should remain in a relatively straight configuration and will travel at the speed of
light roughly in a straight line. It is only roughly a straight-line action because the actual path may form a
curlicue about the ideal line. The W2 wave probably twists in a manner that skews it from forming a circle,
thereby propagating at c in a twisting path. This would resemble a precessing elliptical orbit if viewed
axially down the path of propagation. With the large disparity in velocities between the two wave types that
create all particles, the size of the spin cross section has a tremendous range of possibilities. It is imperative
to determine if there are only a few allowed sizes, which will clear up the neutrino construction problem.
While fermions, because they are rotating about a fixed point in space and are relatively stationary, can be accelerated within space to gain additional momentum, neutrinos already travel at full speed in a
straight line and cannot be accelerated in their direction of propagation. There is the possibility of
accelerating them in a direction perpendicular to their normal propagation, but that would be an extremely
difficult undertaking with no potential usefulness. Unless a good reason can be found for trying this type of
acceleration, there seems to be no need to prove or disprove the premise, especially when there is the
possibility that it happens naturally on a cosmic scale that may be detectable with the right experiment.
One consequence of the shape of a neutrino being a nearly linear tube with a cross section equal to a
hypothetical spin-1/2 particle with equivalent mass is that the cross section would be small when compared
with the charge sphere of spin-1/2 particles. Since the neutrino has no charge shell and little or no magnetic
interaction within its propagation direction, it would have little chance of interacting with or colliding with
any fermions, and especially with each other. Simple comparisons of spin cross-sectional areas show that
the ratio of an ideal electron cross section to that of an idealized neutrino cross section is 2.34935 times
greater for the electron, which is the reverse of the mass difference. The ratio of the neutrino cross section
created in the manner just described to the overall disc created by the electron is almost 2.4E+14. What this
means is that it would take a significant portion of this number of neutrinos placed edge to edge and fired
directly toward an electron to attempt a collision between the two, but even this large a number could not
ensure a destructive collision, only a probable one.
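The two ratios quoted in this paragraph follow from the radii already given, assuming simple circular cross sections:

    r_e_spin = 1.2955e-21     # electron spin radius, m
    R_e = 1.2955e-14          # electron overall (disc) radius, m
    r_nu = 8.452e-22          # proposed neutrino cross-section radius, m

    ratio_spin = (r_e_spin / r_nu) ** 2    # electron vs neutrino cross-sectional areas
    ratio_disc = (R_e / r_nu) ** 2         # electron disc vs neutrino cross section

    print(f"electron/neutrino cross sections ~ {ratio_spin:.3f}")        # ~2.35
    print(f"electron disc/neutrino cross section ~ {ratio_disc:.2e}")    # ~2.4E+14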
It has not yet been determined if the intersection of two cross-sections will cause a destructive
collision. It may be that it takes just the right orientation and overlap to accomplish an interaction, whether
constructive or destructive. Still, some boundary conditions for interactions must exist. For discussion
purposes, the assumption will be made that any particle cross sections that overlap fully will result in a
constructive or destructive event. What can’t be determined yet is whether the cross sections need to be
exactly the same size, within a particular size range, or a multiple of a particular size to interact.
Depending on how a neutrino could hit an electron, there might be a difference in the amount of area
available to hit. Since the electron can be turned with either its mass torus oriented in the same plane as the
neutrino propagation or perpendicular to it, if the electron is turned parallel to the propagation then the
number of hits will be less. The area ratio is exactly equal to π, meaning that there should be roughly three times as
many hits with the orientation perpendicular to the neutrino propagation. Although it could be difficult to
implement, this is the simplest test to prove or disprove particular neutrino construction techniques. If there
is no difference in the number of hits between the two orientations, then either the construction of the
neutrino is nothing at all like that of a spin-1/2 particle’s tube-like body, or interactions occur across full
particle diameters and not just spin cross sections. Even though it seems likely that neutrinos have spins
similar to matter creating particles, we will not know until many experiments have been performed with the
intention of narrowing down specific construction possibilities.
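One reading of the factor of π (an assumption about which two projected areas are being compared, since the text does not spell it out) is that broadside the torus presents a thin annulus while edge-on it presents roughly a rectangle:

    import math

    R = 1.2955e-14    # electron overall torus radius, m
    r = 1.2955e-21    # electron cross-section radius, m

    broadside = math.pi * ((R + r) ** 2 - (R - r) ** 2)   # thin annulus, equal to 4*pi*R*r
    edge_on = (2 * R) * (2 * r)                           # rectangle 2R wide by 2r tall
    print(f"area ratio ~ {broadside / edge_on:.5f}")      # ~3.14159, i.e. pi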
A test that is similar to the orienting of electrons by a magnetic field can be conducted on neutrinos,
assuming they have any magnetic interaction ability. If they are allowed to pass through a strong magnetic
field prior to interaction with a detection medium, then any displacement should become apparent. This is a
good test for determining whether the spin radius is also the same radius used to create or interact with
magnetic field lines. Even the interaction strength with the magnetic field should be determinable so that the
assumption of the shape of the neutrino tube can be checked. Current efforts to detect the magnetic moment
of a neutrino use assumptions based on SM physics and will not give the desired results. The reason is most
likely the neutrino shape. Simply stated, it is only a radial twist of the cylinder that occurs when interacting
with a magnetic field. The neutrino has no orientable rotation in its propagation direction and will always
give a value of zero.
One last possibility for why the magnetic effects of spin-1/2 particles do not apply to the neutrinos
may also lie in their physical construction. In effect, there may be a cancellation of spins within the
neutrinos, which produces counteracting magnetic effects, so that the neutrino has absolutely no external
magnetic moment. There is nothing to gauge it against, so we cannot see any change. It is nearly impossible
to determine with certainty how to make a neutrino in this manner. Only more standard constructions such
as those of spin-1/2 particles that have measurable spins are easily determinable.
Experiments give the impression that neutrinos must have small cross sections, since their
interactions with matter are so rare. They also behave in ways that do not fit with what we know about
normal matter creating particles. It is difficult to put a physical size and shape to the neutrino with so little
evidence to go on, but what evidence that does exist easily fits with normal UT mass wave behavior.
Ultrawave equations put the size of the electron neutrino cross-section radius at 8.452E-22 of a meter—
assuming that the cross-section size is calculated the same way as a spin-1/2 particle’s cross section. It will
be interesting to see if more refined experiments can determine if this proposed size is correct.
One last possibility for the appearance of different neutrino types is simple geometry. As stated
earlier, the orientation of matter may be relevant to how many interactions are detected. Just by randomly
orienting matter you have three general directions in which each atom or molecule will be oriented. If the
calculations for neutrino interactions are not based on a model that takes this into consideration then the
number of interactions will be a third of what is expected. Since this number is what is seen, it could be
considered an indication that matter orientation does play a role in neutrino interactions. If matter can be
oriented in a specific direction before interactions are measured, the effects of orientation should be easily
identifiable.
Our understanding of neutrinos is inadequate as proposed by the SM, and its answer of switching
between types is incorrect from a UT perspective. The SM view of what neutrinos are and how they are
generated is lacking a cohesiveness that prevents an accurate prediction of what should happen in current
experiments, such as those at Kamiokande. Just because switching between electron, muon and tau
neutrinos solves the detection problem doesn’t mean that it has any validity. The SM wasn’t right in
suggesting that they change types; it was just the simplest and most logical assumption that could be made.
Physicists should at least consider the possibility that our picture of neutrinos needs to change to include
new proposals that explain the measured neutrino quantities in other logical ways, such as the one proposed
by UT.
Chapter 5
The Connection between Particle Charge and Antimatter Status
“I must say that I also do not like indeterminism. I have to accept it because it is
certainly the best that we can do with our present knowledge. One can always hope
that there will be future developments which will lead to a drastically different theory
from the present quantum mechanics and for which there may be a partial return of
determinism. However, so long as one keeps to the present formalism, one has to
have this indeterminism.” Paul Dirac (Speaking on Einstein’s and others’ concerns
about the quantum)
An important aspect of the neutron's behavior needs explaining: why it must have either no electrostatic charge or an internally contained charge. Resolution of this question is necessary to complete
the picture of matter creating particles. A subject that may help in this quest for understanding neutrons, but
which has essentially been ignored, is the cause of electrostatic attraction between particles. Particle charge
involves a new concept, decoupled and partially decoupled waves. Their existence, which was hinted at in
Chapter 3, must be considered as part of an ultrawave family of waves in order to explain how charge is
created.
Fully decoupled ultrawaves are designated as W2o waves, and semi-decoupled ultrawaves
responsible for electrostatic charge are designated as W2s waves. The W1o and W2o waves are the ones
that create all of spacetime. It is not clear if there is a need for a W1s wave, so we will be ignoring that
possibility for now. The W2s charge wave is not bound within the torus of a particle and is allowed to move
both inside and outside the torus. If you will recall, to calculate the specific charge of a particle requires
using the radius that delineates the overall torus size. Electrostatic charge for a standard charged particle can
then be described as a surface phenomenon created by the W2s wave as it traces out a sphere about the
diameter of the torus.
Attraction between particles is most likely due to the opposite rotation directions of the W2s waves around their tori, each of which is set by the rotation direction of the W2 wave
creating the torus. In other words, one particle has a right-handed torus spin and the other a left-handed
torus spin. Cloud chambers show the effect of spin direction as the movement of particles in spirals when in
the presence of a magnetic field, with opposite charges moving in directions opposite each other. The
particle torus guides the W2s wave from one side to the other thus forming a circle that is perpendicular to
the torus. As the circular W2s wave gets pulled around the particle hoop, it makes a sphere with poles that
are opposite the main torus diameter. Except for the difference in size, the proton and electron are similarly
constructed, and if large enough to be visible would look somewhat like Figure 5.1. To show the full
number of times that the W2s charge wave travels around the torus, the charge sphere appears as a
transparent solid.
Figure 5.1
It is not clear if neutral particles have this or any type of charge shell; having no charge shell would
seem to be a good bet. It would explain why neutrons show no signs of attracting either protons or electrons
but still have values like a specific charge. Of course, just because the specific charge can be calculated
doesn’t mean that it actually exists. A neutron must have something like an electric charge, or else it would
not combine with particles like the proton. Defining the full construction of the neutron, as well as other
matter particles, is the key to understanding what charge is and how it works. From the UT perspective the
most basic construction for a neutron is of a simple torus without the surrounding charge sphere.
A characteristic feature of the W2s wave that creates charge is that it seems to be controlled by the
particle torus, and takes its cues from what the torus does, such as its perpendicular orientation to the torus.
Because electrons and protons curl in opposite directions within a magnetic field, it is a good bet that the
direction of propagation for their charge waves is opposite. Nothing would make this opposite behavior
occur better than an opposite rotation of the particle tori. Neutrons, on the other hand, while also orienting themselves to the magnetic field, do not curl around like the electron and proton; therefore, it can be deduced
that W2s charge waves alone are responsible for the curling motion. A small possibility exists that there are
two oppositely propagating waves that create a sphere around the torus of the neutron, which would also
negate the appearance of an electrostatic charge; however this would complicate things considerably, and is
not yet necessary for consideration. It is more likely that the charge wave exists but is hidden.
It seems that charge is related to the physical description of particles, so to fully understand the
physical construction of matter, it is important to understand the construction of antimatter. It is well known
that matter and antimatter particles annihilate when they meet, as long as the particles are alike but
oppositely charged, which is the reason for their opposite nomenclatures. For example, a proton has a twin
that is identical except that it is oppositely charged and is called the antiproton, and the electron has one
called the positron. Only identical mass but oppositely charged particles will annihilate each other.
The simplest way to describe the difference between matter and antimatter using UT principles is
through the opposite propagation of the waves within the particle torus, as well as those of the charge
waves. Opposite rotation of the torus and charge waves between particles subjects all of the waves from
both particles to head-on collisions when identically sized tori overlap. Tori collisions require mechanisms for forcing the tori to overlap, since the particles are so small that the chance of randomly arriving at the correct position and orientation would be extremely remote. For neutral particles, the same orientation and
locating of each particle’s torus must also take place; therefore, the properties that cause torus overlap must
apply to all particle types. Although electrostatic properties bring particles together, it is the magnetic
properties that must be the control mechanism for the positioning of the tori.
Creation of both neutral and charged particles, as well as anti-particles is a relatively straightforward
process. We need only look at the possible configurations using the ideas already proven for how particles
are generated from the W waves. The shape of charged particles has been determined with a fair degree of
certainty, as depicted in Figure 5.1. Wave propagation direction within the torus has been indeterminate up
to this point. A W2 wave can propagate around the particle torus either right-handed (clockwise) or left-handed (counterclockwise), which means that the W2s charge wave can propagate around the torus in four
ways, making eight different configurations for particles.
Using just the four best options, the representations depicted in Figure 5.2 show all the different
charged and uncharged particles that can be produced by the combinations of all the W waves. The small
curlicue with arrowhead indicates the direction of propagation of the W2 wave within the torus, and the
large curved arrow indicates the direction of charge propagation for the W2s wave. The bottom two
representations of Figure 5.2 show configurations based on the possibility that the W2s wave propagates
along the particle torus and does not produce a charge sphere.
Figure 5.2
Because the macroscopic and microscopic worlds seem to follow similar behaviors, the magnetic
right hand grip rule transferred to the microscopic world prevents some possibilities from existing.
Applying the rule requires having the right hand in a thumbs-up position with the fingers curled into a fist
and the thumb sticking upward. This is the usual hand signal that indicates a yes, go, or OK. For
macroscopic matter, if the curled fingers indicate the direction of rotation of the charge then the thumb
follows the direction of the magnetic field produced. The opposite is true of particles, so that the W2s wave
follows the thumb direction while the magnetic effects are due to the motion signified by the curling fingers.
Assuming there is a left-hand rule for antimatter, only the four versions that best fit both the right-hand and left-hand rules were shown in Figure 5.2. The only two examples that follow the pure forms of the right
hand and left hand rules are the bottom two. What is noteworthy about these bottom two possibilities is that
since the W2s charge wave follows the same path as the torus, in this configuration the charge wave would
be shielded. This would make it a near certainty that the two bottom examples belong to the family of
neutral particles and antiparticles. From the perspective of stability, this would seem to be the most
favorable shape, but either the size or the speed of the W2s wave makes the neutron too unstable to last for
very long. It is also possible that, because they lie in the same plane, an interaction between the magnetic effect of the spin cross section and the charge effect of the W2s wave is to blame for the instability.
To fully define the spins and orientations, it is necessary to pick a reference frame. In Figure 5.3, the
presence of a magnetic field is necessary to determine vertical orientation. Both particles need to be viewed
from the same vantage point outside the fixed field, and although these two figures are perspective views,
the descriptions require placing the magnetic field lines at 90° to the torus plane. The north magnetic pole
was chosen as the orientation that makes the electron right-side up, making the proton upside down. For the
electron, the charge wave travels downward from right to left and the torus rotates clockwise. Conversely,
the proton’s charge wave travels upward and the torus rotates counterclockwise.
Figure 5.3
The examples in Figure 5.3 depict all charged particles, as well as their antiparticles, and
demonstrate how charged matter and charged antimatter can be generated by just the top two configurations
of Figure 5.2. An assumption was made about which spin direction to assign to each particle type, since it could easily be either one. This is similar to the way the labels of positive and negative charge were arbitrarily assigned in the first place. There is a correct answer, but it isn’t clear which it is just yet.
However, by using UT definitions it is possible to determine that particles are indeed similarly constructed,
and that it is the combination of spin direction of the torus and the other physical components of the particle
that makes charge creation possible.
Visual evidence of charge can be seen in the cloud chamber tracks that electrons and protons
produce. These tracks show positive proof of opposite charge wave propagation. By default, this implies
that these two particles must have opposite spin directions along their torus’s hoop section. This follows because interactions between W2s charge waves and some other form of W wave are what create particle attraction, and charge appears only when unshielded.
The size of the electron torus’s cross section is so much larger than the proton’s that the W2 waves
do not interact. This further leads us to the declaration that the proton and electron are antiparticles to each
other, and it is only their size difference that permits them to combine without annihilating. However, if the
proton and antiproton or the electron and positron are brought together so that their charge shells overlap
then magnetic forces will align the tori of the opposing particles. When this happens, the two W2 waves will
be aligned for a head-on collision. The released energy is then equivalent to the sum of the masses of the
particles times the Cc speed.
Published papers claim that an electron and a positron can be combined to create what has been
called an anti-atom. A first impression is that this is impossible, since the annihilation of both should be
expected. To get around this problem, the two could be brought together in the presence of a magnetic field,
which would keep them oriented in opposite directions. If the particles are oriented opposite to each other
then they will spin in the same direction. Once the external field is removed, the natural field of the two
particles should be strong enough to overcome the reverse orientation and they would then annihilate.
One thing that can be surmised about the neutron’s spin is that it must be the same as an electron’s,
making it an antiparticle to the proton as well. The reason we can be so confident of this proclamation is
that the decay of a neutron produces an antineutrino and not a neutrino, not to mention the fact that
it has a negative magnetic moment just like the electron. Having opposite spins explains how protons and
neutrons more readily attach to each other.
Standard Model physics indicates that the neutrino is left-handed and the antineutrino right-handed.
It was always assumed that each should have both spin directions, and that something must be wrong. UT
shows that there can only be one possible spin orientation per neutrino type if their construction is similar to
a linear traveling W3 wave. Therefore, the definitions of a neutrino and antineutrino depend solely on their
spin directions.
What is most important about defining spin is discovering that it is responsible not only for magnetic moment and charge, but also for antimatter status. It has been assumed that some kind of symmetry breaking is needed for only matter to be created. What we have found with ultrawave
constructed matter is that both types of matter already exist in great abundance. If there is some sort of
symmetry breaking, it is for the universe to have a preference for which spin direction to produce at each
mass level of particle construction; not that the universe prefers matter over antimatter, but that it prefers a
particular spin direction for each particle mass.
The Standard Model states that a slight mismatch occurred at the moment of creation of the
universe, so that one out of every billion or so particles was left after all annihilations had taken place
between matter and antimatter. This may have been a reasonable assumption if only a single particle were considered, but not for all three matter-creating ones, especially since we have discovered that they are not so different after all. Instead, it makes more sense to believe that a slight rotation was introduced into the early
universe by the single fact that all W2 waves have curved paths. It certainly seems that it must be a
prerequisite condition for their creation. When we look out into space, we see stars that spin, galaxies that
spin, even gas clouds that spin. It is not difficult to believe that something other than chance, namely some form of initial rotation, is responsible for the existence of all these spin conditions.
Figure 5.4
Because the charge shell is produced so quickly by an ultrawave, the shape of particles should
appear solid and should be testable. Some of the electron micrographs of material surfaces hint at the shape
of atoms as being spherical. If they are spherical, and not just a torus shape, it must be due to the charge
shell being a sphere. If it turns out the shape is actually like a full donut, with or without a discernable hole,
then the charge must inhabit an area surrounding the mass torus, and in that case could contain a different
amount of W2s wave than previously considered. Figure 5.4 shows what a particle built with an external
charge shell would look like.
In Figure 5.4, a large red donut is used to represent the shape and extent of the charge shell. The
main problem with this type of charge shell is the difficulty in pinning down the size of the shell. A logical
assumption is that its radius is equal to the radius of the sphere surrounding the spin torus of Figure 5.1. In
essence, this will create a second torus around the original torus having the same radius, so that the final
size is three times the overall diameter of the original torus. It may well be that this construction applies to
one spin direction of particle, such as the electron, and that Figure 5.1 represents the other spin direction,
which would then apply to the proton.
It may be easier to imagine how charge works when using an external charge field like this, but the
indeterminate size issue prevents further pursuit of this idea. This particular construction should not be
removed from consideration; after all, it has more promise when dealing with subjects such as the creation
of photons, mass distribution, particle creation, and gravity. For now though, it seems prudent to follow up
with a “charge sphere equal to torus diameter” particle construction method. There are ways to check the
external charge field concept, but it will have to wait until other concepts are explored.
We are unfortunately getting into territory where there is not enough information to settle these remaining questions about particles definitively. Some of these issues will be addressed in the chapters beyond the interlude, whose information is more speculative in nature. Until now, the particle characteristics that have been examined have had either nearly indisputable mathematical proof or convincing arguments based on those proofs. It seems that we could still make some informed guesses about other subatomic materials by drawing inferences from what we already know about W1 and W2 waves. But before we get into those kinds of suppositions, there is one more question that needs to be answered: why is matter only made of electrons, protons, and neutrons?
Chapter 6
Why only Three Particles?
“String theory is an attempt at a deeper description of nature by thinking of an
elementary particle not as a little point but as a little loop of vibrating string.” Edward
Witten
Because spin-1/2 particles aren’t the only particles, ultrawave theory must contend with all of them if it is to
be considered the true description of reality. Currently, UT can only show mathematically how spin-1/2
particles operate, not how other spin types behave; therefore, it is necessary to stick to what can be proven
until the next chapter, where spin-1 will be addressed. Actually, the information may exist to accomplish the
task of defining other particle types, but it is not easily accessible, and the attempt has not yet been made
using UT principles. There is, however, a lot more that can be determined about spin-1/2 particles.
“Why only three” in the title of this chapter is a little misleading. There are actually many spin-1/2
particles besides the electron, proton and neutron, but these three particles seem to be the only stable, or at
least semi-stable, ones. If it weren’t for particle stability then matter could not exist. The following proposal
may seem so revolutionary that nearly every particle physicist will scoff at it until they see the supporting
evidence. The proposal is this: many, if not most, atomic nuclei are composed of particles other than the
proton and neutron. How could the Standard Model be wrong about something as basic as that?
Experimenters were fooled because the increments between nuclei are in quantities similar to those of
protons and neutrons. In any event, to support such a proposal it must be proven beyond a reasonable doubt that more than two stable matter-creating particles exist.
To assist in solving the question of why no other stable particles have been found, we need to define
what it is about particles that makes them alike. This seems like a simple task, but there are no clear items in common between any two particles. However, two particles are very much like the electron: the muon and the tauon (tau), which is where the names of the neutrinos came from in Chapter 4. It
is clear that they are alike by their similar magnetic moment anomalies. Calculations suggest that the
electron and its two siblings are surprisingly close to perfect tori.
First, let’s look at the characteristics of the muon and tau particles. The muon has a half-life of
approximately 2E-6 seconds and the same charge as an electron, except that it is approximately 207 times
heavier. The tau particle is about 16.8 times heavier than a muon. Having similar magnetic moment anomalies implies that the magnetic moments are inversely proportional to the
masses. The tau’s magnetic moment is about 16.8 times smaller than the muon’s, just as the muon’s is about
207 times smaller than the electron’s.
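A quick numerical check of this inverse relationship can be made with commonly listed lepton masses and magnetic moments; the constants below are rounded reference values rather than numbers from this book, and the tau value printed at the end is what the relationship predicts, not a measurement.

    # Lepton masses (kg) and measured magnetic moments (J/T), rounded.
    m_e, m_mu, m_tau = 9.10938e-31, 1.883532e-28, 3.16754e-27
    mu_e, mu_mu = 9.2847647e-24, 4.4904483e-26

    print(m_mu / m_e)            # ~206.8: the muon is ~207 times heavier
    print(mu_e / mu_mu)          # ~206.8: its moment is ~207 times smaller
    print(m_tau / m_mu)          # ~16.8
    print(mu_mu * m_mu / m_tau)  # predicted tau moment, ~2.7e-27 J/T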
Through experimental evidence, it has been shown that a combination of two protons and a muon
can form a bond. Calculations made by the experimenters indicate that the muon separates the two protons
by a distance of about 5.1E-12 meters. Unfortunately, the experiment’s details were misplaced and have
not been found again, so they cannot be cited here. When UT is used to calculate the size of the muon based
on its mass, a diameter of 5.3822E-12 meters is obtained. Assuming there was some substance behind
the number 5.1E-12, the UT-calculated size is close enough to suggest that the two protons interlock with
the edges of the muon to create an ion. Unless there is something different about how the electron combines
with two protons, then the size of the same ion made with an electron instead of a muon should have a
diameter of separation of 2.606E-14 meters. This corresponds to a direct relationship in size, as well as mass
of 207 times between the electron and muon. If a similar ion were made with the tau instead of a muon, then
the separation would be 9.052E-11 m.
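The scaling used for these separations is simple enough to verify directly; the sketch below assumes, as the text does, that the ion separation scales in direct proportion to the mass of the lepton involved, and it uses rounded reference masses.

    d_muon = 5.3822e-12   # UT-calculated muon diameter quoted above (m)
    m_e, m_mu, m_tau = 9.10938e-31, 1.883532e-28, 3.16754e-27

    d_electron = d_muon * m_e / m_mu   # ~2.60e-14 m (text: 2.606E-14 m)
    d_tau = d_muon * m_tau / m_mu      # ~9.05e-11 m (text: 9.052E-11 m)
    print(d_electron, d_tau)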
To determine how particle combinations and atoms are created with UT, two particle combinations
need to be explained: the deuteron and helion. At least in SM theory, the deuteron can be created either by
pairing two protons under special conditions, or by pairing a proton and neutron. QT has no problem with
pairing protons as it creates particles from quarks anyway, but from a classical point of view it makes no
sense and must be viewed from a completely different perspective for UT purposes. In this case, what we
would do is create either a new particle, or a linked pair of two different ones.
Based on the UT processes used earlier to define particles, if the deuteron were a new particle, it
should conform to the qualifications for a spin-1/2 particle, just like all the others we have looked at so far.
The only problem is that the deuteron is a spin-1 rather than a spin-1/2 entity; therefore the spin radius
cannot be determined. In this return to a more classical style of physics, an entity is defined as being a single
fermionic particle only if it has spin of 1/2. To be a spin-1 entity, it would have to be composed of two spin1/2 particles; a spin 3/2 entity would be composed of three particles, and so forth for each additional 1/2
spin increment. From this new viewpoint, the deuteron would not be a particle in the normal sense but
instead must be a composite of two particle tori and an altered charge shell. The deuteron will be explained
in further detail in Chapter 9, where that information best fits within the book’s overall theme.
The helion is the nucleus of a helium-3 atom. It is supposed to be composed of two protons and a
neutron. If we assume that the helion is not a combination of particles but is a particle in its own right, then
the helion would have specific values that compare to other spin-1/2 particles. Its measured magnetic
moment is 1.074553E-26 J/T; therefore its spin radius would have to be 2.343733E-25 m. If the SM
construction were correct, the magnetic moments of the two protons would cancel and the neutron’s magnetic
moment would be left, or else they would add and the result would be a negative moment, but that is not the
case. The moment is a positive one that can only be the result of a larger single particle. SM physics does
not care about positive and negative charge when dealing with magnetic moments, but it is a vital UT
consideration.
Nothing in the Standard Model states that nuclei cannot have any values for magnetic moments;
therefore, it makes perfect sense to apply a different reasoning to them now that it has become clear that the
measured values fit particle-like schemes. Since the UT calculated value for the helion easily fits the
measured value, it certainly appears as if it is a single spin-1/2 particle. In this case, the spin actually is one-half, and the magnetic moment fits best with the particle scheme rather than the combination scheme. You
will see how this works when comparing mass versus magnetic moment in natural log plots.
Particles with similar characteristics appear to be lighter or heavier versions of each other. These “similar” particles possess particular relationships between their masses, magnetons, and
magnetic moments. The family of leptons (electron, muon and tau) provides a good example of these
relationships. These three particles show the closest approach to the ideal of the measured magnetic moment
compared to the calculated magneton value. It is this property of particles coming in sets of three that needs
to be explored, and evidence will be presented that shows many more such sets exist.
Two other particles that are significant in our quest for sets of three are the sigma+ and sigma0
particles, which have features in common with the proton and neutron, respectively. It is possible to make
use of these two pairs of particles to reveal the nature of particle relationships. If the sigma+ and sigma0 particles are the second members in their series of three, then the masses and magnetic moments of the third members have to be related in a similar fashion to the other two known members. The relationships between magnetons and magnetic moments are presented in Figure 6.1 as natural log plots of mass versus magnetons and magnetic moments. These relationships are the result of plotting every spin-1/2
particle with a published magnetic moment, as well as plotting all of the spin-1/2 atomic nuclei.
Because the W2 wave velocity is numerically almost a power of the W1 wave velocity in the SI
system of units, the two waves can be related through what is called a power curve. This is important to the
families of three in that only entities that contain these wave relationships should fit the proposal. All
graphing information shown in Figure 6.1 is based on power curve information. Actual power curves are
unable to project forwards and backwards easily, so the natural logs of those values are used to produce
straight lines that are then projectable to reveal which particles and nuclei are related. Figure 6.1 gives the
proof about particle construction that was hidden without knowing about the ultrawave velocity, and hence
the power curve relationship.
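As a small illustration of why natural logs turn power relationships into straight lines, the sketch below computes the magneton e·hbar/(2m) associated with each lepton mass and fits a line through the points in ln-ln coordinates; because the magneton is proportional to 1/mass, the fitted slope comes out as -1, the -45° magneton curve described for Figure 6.1. The constants are standard reference values, not numbers taken from this book.

    import numpy as np

    e, hbar = 1.602176634e-19, 1.054571817e-34
    masses = np.array([9.10938e-31, 1.883532e-28, 3.16754e-27])  # e, mu, tau (kg)
    magnetons = e * hbar / (2.0 * masses)                        # J/T

    slope, intercept = np.polyfit(np.log(masses), np.log(magnetons), 1)
    print(slope)  # -1.0 (to rounding): a -45 degree line in ln-ln coordinates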
Please note that every particle or atomic nucleus has a calculable magneton that falls on the exact
same magneton power curve—the magneton curve being an ideal spin curve. Magnetons use the assumption
that the relationship is truly a power one, such as that of c versus c2. The magneton curve is the long sloping
line extending from the upper left to the lower right in the graph of Figure 6.1. The actual curve slope is -45°, but the horizontal axis had to be compressed, so that direction appears foreshortened. The upper left portion of the graph has been cut off, since the position of the
electron is too far away from the other plotted points. The first particle location on the graph at the upper
left position is the electron‘s closest cousin, the muon.
There are likely to be more power curves that fit at least some of the data points for the magnetic
moments than the ones shown in Figure 6.1, but only those that were easiest to fit were shown. The most
interesting fact is that the 80 spin-1/2 particles and nuclei shown could be fitted onto 65 curves. Adjustments of less than 1% were needed to accomplish this feat, and in most cases they were one or two orders of magnitude less than 1%.
Because the lepton family members had magnetic moment values that fit a power curve, it implied
that the other two family types, positive and neutral, should have their own power curves, and the evidence
provided proves it. More importantly, an exact magnetic moment value is then specified for any given mass
value and vice versa, along the length of any particular curve. It is not difficult to determine whether any
particle appears on a power curve as long as its mass and magnetic moment are known and if all possible
power curves have been predetermined. All possible curves have yet to be defined.
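A hedged sketch of that test is shown below: in natural-log coordinates a power curve is just a straight line ln(moment) = slope·ln(mass) + intercept, so membership reduces to a distance-from-line check. The function name, the tolerance, and the example line are placeholders of my own rather than values from the book; the example simply confirms that the electron sits very close to the ideal slope -1 magneton line.

    import math

    def on_power_curve(mass_kg, moment_jt, slope, intercept, tol=0.01):
        """True if (ln mass, ln moment) lies within tol of the given line."""
        predicted = slope * math.log(mass_kg) + intercept
        return abs(math.log(moment_jt) - predicted) <= tol

    # Ideal magneton line: slope -1, intercept ln(e*hbar/2).
    intercept = math.log(1.602176634e-19 * 1.054571817e-34 / 2.0)
    print(on_power_curve(9.10938e-31, 9.2847647e-24, -1.0, intercept))  # True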
Figure 6.1 (natural log plot of mass versus magnetons and magnetic moments)
Producing the initial magnetic moment power curves for the proton and neutron required slight
adjustments of the magnetic moments of the sigma+ and sigma0 particles, while leaving the proton and the
third object of its family, as well as the neutron and its third family member, almost unchanged. These third
family members are revealed as the carbon 13 nucleus for the positive group member, and the nitrogen 13
nucleus for the neutral group member. It is just too coincidental that these two members come from atoms that are so abundant and so important to life as we know it.
An interesting aspect of the positive and neutral group’s magnetic moment power curves is that they
cross the magneton curve at a point close to the muon magneton location. It is unclear if this is significant,
but the fact that similar events occur at all three lepton magneton locations, as well as at other spin-1/2
particle-occupied locations along the curve, suggests that it is. The magneton locations for the remainder of
the spin-1/2 particles were not plotted to see how many curves fell near them, but it would be interesting to
see just how many curves fit these types of patterns. As many as seven curves came extremely close to
projecting to the location of the muon, which seemed more than just coincidental, and pointed toward an as
yet unidentified relationship between the items on the curves.
You may have already noticed that two points on the graph lie underneath the magneton curve.
These two magnetic moment locations belong to the Xi- particle, a negative particle, and the Lambda 1115,
a neutral particle. They were included to show their unusual place in this collection. They are the only two
particles below the magneton curve. The magneton curve represents a hypothetical perfect form limit, so
nothing should be below the curve. Only a few things could cause such a condition. First, the cross sections
of the tori are smaller than they should be. Second, the tori do not connect at the ends because the mass is
too small. And lastly, some mistake caused the measurements of these two particles to be wrong. My
personal feeling is that the particle masses are smaller than needed to make a stable particle but the
magnetic moments are close enough to the ideal line that a resonance is observed very briefly before total
wave collapse. It was important to mention these two particles, because all of the information in this chapter
reveals clues as to why each particle exists as it does. If it is possible to figure out all of these types of
details then the full structure of particles and atoms can be completed.
If you are interested in creating your own graph, which will give you even greater access to the fine
details, Table 6A gives the natural log values for each particle and atomic nucleus in order of increasing
mass. If you desire, the natural log values can be converted back to their original values by entering the numbers into a calculator and applying the inverse (e^x) function of the ln key. Converting back to the
original values will show any deviation from the WebElements listings, or those from other sources where
no information exists on WebElements.com. The two biggest discrepancies that existed between the original
values and what is used on the chart belong to 11Be and 17N, neither of which was listed on
WebElements.com at the time of this writing, although at least one of them was listed sometime in the past.
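For anyone working with the table in a few lines of Python instead of a calculator, the conversion back from a natural log is just an exponentiation, as in this small sketch using the first MASS entry:

    import math

    ln_mass_electron = -69.1708329455196   # first MASS entry in Table 6A
    print(math.exp(ln_mass_electron))      # ~9.11e-31 kg, the electron mass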
Table 6A (values for Figure 6.1)
MASS
-69.1708329455196
-63.8392341847312
MAG. MOMENT
-53.0336674397089
-58.3652599391638
MAGNETON
-53.0348264200176
-58.3664251808060
ITEM
CHARGE
ELECTRON
MUON
-
-
MASS (continued)
-61.6554053050949
-61.6540278371357
-61.4833000000000
-61.4182664869700
-61.4155963705200
-61.4115076379600
-61.3190000000000
-61.3156500000000
-61.0839200000000
-61.0167696405327
-60.5587142412500
-60.5596300000000
-59.2622000000000
-59.2607871459400
-59.1015750000000
-59.0974000000000
-58.9545979592300
-58.9543000000000
-58.9571000000000
-58.8289450673148
-58.7183004903900
-58.7181175033400
-58.4437000000000
-58.2959000000000
-58.2959873346404
-58.2349100000000
-58.1947500000000
-57.8132419133629
-57.6171000000000
-57.4032800000000
-57.3481350000000
-57.3198906327088
-57.2691546119878
-57.1636000000000
-57.1528150988932
-57.0684682126497
-57.0285300000000
-57.0086800000000
-56.9872000000000
-56.9663700000000
-56.9599227000000
-56.9539888126146
-56.9361139729570
-56.9359900000000
-56.9361750000000
-56.9185640931017
-56.9185600000000
-56.9099000000000
-56.8920000000000
-56.8843165000000
-56.8512496481027
-56.8305680000000
-56.8197485000000
-56.8020120000000
-56.8035714938928
MAG. MOMENT (continued)
-59.5231925225629
-59.9015591306593
-61.0387000000000
-59.6511700000000
-60.0677000000000
-60.4004600000000
-60.3283450000000
-60.9864000000000
-59.8558000000000
-61.1877131549316
-59.7939600000000
-59.4537000000000
-60.0268200000000
-59.2438500000000
-60.9013600000000
-61.6831000000000
-61.8126200000000
-60.8770750000000
-60.2645000000000
-61.5943782998143
-59.5850000000000
-59.9192100000000
-60.5244000000000
-61.1413450000000
-60.3333100000000
-60.4224300000000
-61.9269000000000
-59.8939250000000
-62.9541200000000
-61.1568730000000
-61.2175750000000
-61.1745000000000
-61.1600100000000
-62.5306400000000
-62.4743800000000
-61.5364500000000
-62.9791700000000
-62.8407900000000
-62.7190000000000
-62.5889900000000
-62.4836000000000
-61.0698400000000
-61.0246900000000
-60.6883700000000
-62.3900000000000
-60.6304000000000
-60.4652800000000
-60.5400000000000
-60.5138250000000
-61.9414500000000
-60.8616550000000
-60.6780030000000
-61.2446700000000
-60.7991460000000
-60.1500700000000
MAGNETON (continued)
-60.5502541962100
-60.5516316641600
-60.7234369288900
-60.7873930143200
-60.7900631307700
-60.7941518633500
-60.8877298762500
-60.8926004960800
-61.1282588763500
-61.1888897250045
-61.6469452445600
-61.6469518625100
-62.9428665038800
-62.9448723553300
-63.1082114640000
-63.1083947671200
-63.2510615420300
-63.2512585898500
-63.2517606278700
-63.3767144339661
-63.4873590108700
-63.4875419979300
-63.7617874968900
-63.9094890569400
-63.9096721575598
-63.9761445353084
-64.0079151077324
-64.3924175788077
-64.5849212943160
-64.8046264174766
-64.8594631362524
-64.8857688685309
-64.9365048892576
-65.0417670858980
-65.0528444023542
-65.1371912885975
-65.1768151767160
-65.1960738485915
-65.2149455355634
-65.2334777874382
-65.2516807242832
-65.2516706886451
-65.2695455283064
-65.2695523836382
-65.2695647479169
-65.2870954081769
-65.2871128587246
-65.3043482400909
-65.3213148048111
-65.3213407984628
-65.3544098531657
-65.3705529582376
-65.3864442709193
-65.4020780783941
-65.4020880074004
ITEM (continued)
PROTON
NEUTRON
LAMBDA
SIGMA +
SIGMA 0
SIGMA -
XI 0
XI -
OMEGA
TAU
3He
3H
11Be
11Li
13C
13N
15N
15O
15C
17N
19F
19Ne
25Ne
29Si
29P
31P
32P
47K
57Fe
71Ge
75Ge
77Se
81Sr
89Y
91Y
99Mo
103Rh
105Ag
107Ag
109Ag
111Ag
111Cd
113Cd
113Sn
113Ag
115Sn
115Cd
117Sn
119Sn
119Te
123Te
125Te
127Xe
129Xe
129Cs
CHARGE (continued)
+
0
0
+
0
0
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
MASS (continued)
-56.7972100000000
-56.7881790000000
-56.7727000000000
-56.5573400000000
-56.5492200000000
-56.5400000000000
-56.5331437599520
-56.5206620000000
-56.5213636370992
-56.4534412526953
-56.4317834369934
-56.3898364916422
-56.3910000000000
-56.3805610000000
-56.3793067500000
-56.3695100623385
-56.3673800000000
-56.3495857316369
-56.3397702485058
-56.3300531205989
-56.3194000000000
-56.3108418113617
-56.1922000000000
MAG. MOMENT (continued)
-63.7695000000000
-60.9039900000000
-60.8045000000000
-62.5224100000000
-62.1780000000000
-62.0207200000000
-61.2137000000000
-61.2656590000000
-62.0188200000000
-62.6901300000000
-63.2853000000000
-61.0430000000000
-61.1643000000000
-61.1977400000000
-61.2271200000000
-61.2364520000000
-60.0681300000000
-60.0615000000000
-60.0641000000000
-61.0912550000000
-60.8202000000000
-61.0629700000000
-62.1546000000000
MAGNETON (continued)
-65.4021083320572
-65.4174906980464
-65.4326462193556
-65.6485400476448
-65.6605957843342
-65.6725135156219
-65.6725157413237
-65.6842952616044
-65.6842958641700
-65.7522182485425
-65.7738760642805
-65.8158230096402
-65.8158326215851
-65.8260413465997
-65.8260419964807
-65.8361494389404
-65.8461636789406
-65.8560737696481
-65.8658892527664
-65.8756063806458
-65.8852542116051
-65.8948176899003
-66.0196859253811
ITEM (continued)
129Ba
131Ba
133Ba
165Tm
167Tm
169Tm
169Er
171Yb
171Tm
183W
187Os
195Pt
195Hg
197Hg
197Pt
199Hg
201Tl
203Tl
205Tl
207Pb
209Po
211Rn
239Pu
CHARGE (continued)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
The nuclei highlighted with magenta in Table 6A are radioactive and show their distribution as
compared to the stable nuclei. Since there are many stable and radioactive spin-1/2 nuclei with no measured
magnetic moments, this list is far from complete. The color cyan was used to show unstable particles, and
yellow was used for the semi-stable neutron. Everything else is presumed to be stable, unless new data
shows otherwise.
Because some of the spin-1/2 nuclei are radioactive, it appears that magnetic moments are not
influenced by the radioactive nature of the nuclei, but are merely a function of the mass. The fact that
radioactivity occurs in isotopes all along the periodic table indicates that stable nuclei are surrounded by
ones that cannot become fully stable. Some isotopes are completely stable, some are only somewhat stable,
and some are quite unstable. This varying stability is characterized by the large range of half-lives found in the various nuclei: lifetimes decrease the farther a nucleus departs from a stable point on the curve, and there is a point of no return at the high end of the mass curve where all isotopes finally become unstable. The trend of
magnetic moments to spread out and the curve slopes to switch from negative to positive with increasing
mass also suggests a limit to the torus mass.
It seems clear that relationships exist between particles and nuclei regarding their magnetic
moments, whether or not they have any similarities with other particles or nuclei. This is in stark contrast to
what the SM provides with accumulations of protons and neutrons in atomic nuclei. However, this magnetic
moment relationship is true even if viewing nuclei as a collection of quarks instead of just protons and
neutrons. It seems easier to believe that the relationships are because of a direct connection, as in the
proposition that spin-1/2 nuclei are particles, rather than the assumption that the magnetic moments just
happen to fall into these power curve patterns. If nuclei showed a slightly different pattern or fit other
curves then the argument that it is just coincidence might be valid. With 65 curves fitting 80 particles and
nuclei, this is far beyond any possible coincidence.
Every curve in Figure 6.1 has only three magnetic moment values associated with it. Even though an
additional point may appear to fit within a particular curve, on closer inspection the points actually fit better
into a different curve. Sometimes the curves are very close in slope value, and other times they are not. It is
this attention to detail that permits each particle or nucleus to be included in, or excluded from, a particular curve, as was the case when attempts were made to fit four points
into some of the curves. As an example of how close the measured values are, WebElements originally had
a listing for nitrogen 17, which was fit with great difficulty into a single curve. After the value was retracted
and a replacement found, the new value fit with some of the listed nuclei into multiple new curves. It seems
this new value is correct based solely on how well it fits into the power curve scheme, lending credence to
both the value and the scheme.
The only remaining problem is to find the mathematical formula that determines why each particle or nucleus fits within a curve at its specific location. It may be that QED (quantum electrodynamics) or QCD (quantum chromodynamics) has already determined the answer and that it can be
easily adapted to UT. Surely the math exists, so that someone in the physics or mathematics communities
should be able to perform this determination and answer the question. No renormalization should be needed,
as it only appears because the Cc wave speed is not a factor in the Standard Model. Renormalization is the
term used to describe the need to eliminate infinities that arise in SM calculations. In any event, the math
should be relatively simple now that the power curve relationships have been exposed.
The idea that there are just a few stable particles is incorrect. The evidence for additional particles at
the core of atoms is convincing if you look at atomic nuclei in the same light as particles. The idea that
atoms can have cores made from particles other than protons and neutrons can be applied not only to spin-1/2 atoms, but also to other spin values. If other particles exist that can make atoms then they can be
combined just like protons and neutrons. Making atomic nuclei from particles requires a lot of specialized
information and is something that is not fully addressed in this book. There are, however, some fundamental
relationships and an example or two that can be given to justify the assumptions being made, but again that
will have to wait for a later chapter.
Chapter 7
Are Mass Increments the Key to Nuclei?
“I believe Einstein was on the right track. His idea was to generate subatomic
physics via geometry. Instead of trying to find a geometric analog to point particles,
which was Einstein’s strategy, one could revise it and try to construct a geometric
analog of strings and membranes made of pure space-time.” Michio Kaku
Another problem with the Standard Model besides the ones listed in Chapter 1 concerns the lack of a
reasonable explanation for the mass increases found between certain groups of atomic isotopes. One would
think that if protons and neutrons were responsible for constructing nuclei then there would only be mass
increments close to one of their values, but that is not the case. Figure 6.2 shows that some particle groups
(pairs or trios) have relationships where there is a slight increase in mass but a significant increase or
reduction in magnetic moment. If this were purely a function of quarks, why do atomic nuclei show
increased or decreased separation between the magnetic moment points with mass units that are not even in
simple quark units? In addition, why do they show similar separations in mass increments but not magnetic
moment increments? The answer to these mysteries may depend on the previous proposal that some nuclei
are just bigger particles, or at least combinations of particles that are each bigger than the proton or neutron.
Some of the data presented in Table 6A was used to generate Figure 7.1, which plots the magnetic
moments of particles and nuclei, beginning with the proton/neutron pair, against their mass differences. A
straight line connects only pairs or trios of particles or nuclei having mass increases smaller than that of the
muon between them. The series data listed below are formatted in the following manner: the items are ordered from the lightest to the heaviest, followed by whether the mass increase is on an ascending (A) or descending (D) magnetic moment curve, then the mass difference between adjacent items, and finally the ratio of the masses of adjacent items (a short sketch after the list shows this arithmetic for the proton/neutron pair). Series 1 is the same magneton curve as given in Figure 6.1.
Series 2: proton and neutron – D – 2.305570E-30 – 1.001378
Series 3: sigmas (+/0/-) – D – 5.6689E-30, 8.710E-30 – 1.002674, 1.004097
Series 4: xi (0/-) – A – 1.144468E-29 – 1.0048825
Series 5: 3He and 3H – D – 3.30000E-32 – 1.0000066
Series 6: 11Be and 11Li – A – 2.589296E-29 – 1.001414
Series 7: 13C and 13N – D – 9.288816E-29 – 1.004320
Series 8: 15N, 15O & 15C – A – 4.910553E-30, 1.251050E-29 – 1.000197, 1.000502
Series 9: 19F and 19Ne – D – 5.773328E-30 – 1.0001830
Series 10: 29Si and 29P – A – 8.810985E-30 – 1.0001831
Series 11: 111Cd and 111Ag – D – 1.848180E-30 – 1.0000100
Series 12: 113Cd, 113Sn, 113Ag – D – 1.285E-30, 2.318E-30 –1.0000069, 1.0000124
Series 13: 115Sn, & 115Cd – A – 3.458902E-30 – 1.0000181
Series 14: 129Xe, 129Cs, 129Ba – A/D – 2.1253E-30, 4.3506E-30 –1.0000099, 1.0000203
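As promised above, here is the arithmetic for Series 2 in a few lines of Python; the proton and neutron masses are rounded reference values, not numbers taken from the series list itself.

    m_p = 1.67262192e-27   # proton mass (kg)
    m_n = 1.67492750e-27   # neutron mass (kg)

    print(m_n - m_p)   # ~2.3056E-30 kg  (Series 2: 2.305570E-30)
    print(m_n / m_p)   # ~1.001378       (Series 2: 1.001378)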
Figure 7.1 (natural log plot of magnetic moments versus mass for Series 1 through 14)
The most telling information shown is the ratio of masses. Series 5 and the first half of series 12 are
very close, as are series 9 and 10, and series 11 and the first half of series 14. Considering that the mass
increments are all over the place, not to mention the large range in atomic number, this is an impossible
condition with quarks unless you use some variable quantity to make up the difference. The introduction of
binding energy may fulfill this requirement to some degree, but the reasoning behind inventing such an
unusually flexible quantity has no empirical basis. It also does not explain why the ratios are very close for
just these three cases.
Another point of interest lies in similar jumps between elements in different groups of series. For
instance, if we compare series 5 to series 12 (first separation), the fractional jump (the ratio minus one) is exceedingly close to 7E-6 in both, and for series 11 and series 14 (first separation) the jump is about 10E-6 in both. Even series 2 and 6 are very close. Looking
at the group of series from 11 through 14, the ratios are all in the same range, with values of 7, 10, 12, 18
and 20 times 1E-6. While this information loosely fits with the quark/binding energy scenario, a better
explanation is that sets of masses can be added or subtracted from nuclei that are based more on a
percentage of the total mass. This is evidenced in the long list of measured isotopes, where mass is shed or
acquired to create new nuclei. The amount of mass varies only with respect to what will make the nuclei
stable or at least semi-stable from an ultrawave perspective.
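The comparison is easier to see when the quoted mass ratios are converted to parts per million, as in this short sketch; the labels show the heavier item over the lighter one for each quoted ratio.

    # Mass ratios quoted in the series data, expressed as (ratio - 1) in ppm.
    ratios = {
        "Series 5  (3H/3He)":      1.0000066,
        "Series 11 (111Ag/111Cd)": 1.0000100,
        "Series 12 (113Sn/113Cd)": 1.0000069,
        "Series 12 (113Ag/113Sn)": 1.0000124,
        "Series 13 (115Cd/115Sn)": 1.0000181,
        "Series 14 (129Cs/129Xe)": 1.0000099,
        "Series 14 (129Ba/129Cs)": 1.0000203,
    }
    for name, r in ratios.items():
        print(name, round((r - 1) * 1e6, 1), "ppm")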
Discovering what happens with the shed or acquired mass is crucial in understanding the
construction of nuclei. An attempt was made to reconcile the masses with existing particles, but not enough
refinement of values is available to the general public, not to mention the availability of data showing the
range of masses possible for bosons and neutrinos. First attempts, however, did not show any significant relation to existing particles.
The Standard Model would have you believe that atomic construction is all a function of quarks,
binding energy, and electron counts. How is it, then, that it has not provided detailed compositions for all
isotopes after all these many decades of use without relying on the adjustable parameter of binding energy
that is used liberally? The only answer is that it cannot be completely correct. It would be nice to believe
that the Standard Model’s proton and neutron combinations for creating nuclei are correct, but if quarks
don’t exist, as UT dictates, how can anyone be expected to believe it? Deciding between the Standard
Model and UT regarding atomic construction is not easy at present based on such a dearth of information,
but a physical system like UT should be given more weight when making such a decision.
An interesting fact that was almost missed about these mass relationships involves the mass numbers. If you look at them from a purely numerical standpoint, you will see that there is a jump in magnitude: 11Be, 11Li, 13C, 13N, 15N, 15O and 15C each have one hundred added to their mass numbers to jump to 111Cd, 111Ag, 113Cd, 113Sn, 113Ag, 115Sn, and 115Cd. The same is true for 29Si and 29P, which jump to 129Xe and 129Cs. The only set missing is the higher-order one for 19F and 19Ne. It could be that this is just coincidence, but with so
few items in the list to begin with that seems unlikely. It is not clear what, if any, significance is implied by
this magnitude jump. Is it only numerology at work? There are many such coincidences within the
calculations used to define particles and their behavior that imply that it isn’t just numerology.
To sum up the mass issue, it appears that mass ratios are at least as important to nuclei construction
as are mass increments. This can be seen by comparing the mass ratios between differently sized nuclei,
such as 3He over 3H as compared to 113Cd over 113Sn, where the difference is very close though not
identical. The same is true for 29Si over 29P compared to 115Sn over 115Cd, which are also close but not
identical. What this implies is that something more important, such as physical relationships between
magnetic moment spins, is the real control mechanism for particle construction.
One situation that occurred during the construction of Figure 6.1 that is worth mentioning has to do
with both mass increments and magnetic moment increments concerning the three neutral family members,
the neutron, sigma0, and 13N nucleus. If we look at the mass differential between the positive and neutral
families, then we would expect the nitrogen 13 nucleus to be the third family member in the neutral family,
since it fits best with the mass increments between positive and negative members. 15N was actually used
as the third member nucleus for a time because the amount of correction needed to make the sigma0 fit a
power curve between the neutron and 15N was less than that of the curve which included the 13N nucleus.
Finally, a correction was made to the 13N nucleus on WebElements.com, where its value was obtained, so
that it then fit better than the 15N nucleus; therefore the 13N was the correct nucleus after all. It was
heartening to see such design consistency in nature, which should make deciphering its secrets easier, as
long as we believe that there is simplicity and logic within the apparent madness of the quantum world.
There is a chance that some other mistakes were made along the way concerning the correctness of
which nuclei to use in a particular curve, or that a mathematical mistake was made due to some unknown factor, such as transposing digits in a number. As was stated in Chapter 6, this was a long process and there had to be a limit
to the amount of work performed. This fact makes the likelihood high that mistakes or judgment errors were
made. The hope is that any mistakes will be found quickly and corrected by those who feel this type of
information is of vital importance to their work. Simpler ways such as computer programs should be used to
perform the curve fitting that was done here by trial and error.
It is hard to imagine anyone vigorously denying that some nuclei appear to fit within the same
framework of mass, magnetic moment and charge as particles after seeing the data in Figures 6.1 and 7.1.
The spin-1/2 family of particles may be much bigger than anyone could have imagined if these suppositions
are correct, but so far all of the evidence is circumstantial. There will need to be more detailed analysis
performed to determine if this is truly the correct path to follow. Some who are not ready to accept a radical
new idea like ultrawaves will find some excuse not to even look at the data, which is a pity. For those who
still doubt ultrawave theory after having come this far, you should ask yourselves this question. If UT can
produce numerically accurate results, even if you believe it is from a flawed premise, doesn’t that make it
good enough to use anyway?
Chapter 8
Reconciling Magnetic Moments with Particle Sizes
“The electric field in this picture form discrete Faraday lines of force, which are to be
treated as physical things, like strings.” Paul Dirac
Because particles and atoms are so small, all of the information we have about them must be inferred from
the macroscopic measurements that are made with large numbers of them. It is with some hesitation that a description of how atoms are created is even attempted. The large discrepancies between different sources for
the size of atoms make it unlikely that reliable information is actually available. To make a decision based
on this lack of consistent and credible information is probably foolhardy, but some possible scenarios will
be presented. Describing how atoms are created involves some speculation and guesswork; try to be kind
and forgiving if any of these conclusions turn out to be misguided due to bad information.
Several different types of bonds are required to create atoms. Covalent bonds are one of the simplest
and most important in fixing atom sizes. The following information about the size of covalent bonds was
taken directly from WebElements.com.
Definition
When two atoms of the same kind are bonded through a single bond in a neutral molecule, then one half of
the bond length is referred to as the covalent radius.
Units
pm [picometers]
Notes
This is unambiguous for molecules such as Cl2, the other halogens, and for other cases such as hydrogen,
silicon, carbon (as diamond), sulphur, germanium, tin, and a few other cases. However[,] for oxygen, O2, the
situation is less clear[,] as the order of the oxygen-oxygen bond is double. In this case, and indeed for most of
the periodic table, it is necessary to infer the covalent radius from molecules containing O-O single bonds or
from molecules containing a C-X bond in which the covalent radius of X is known.
The data for s-block and p-block elements is broadly consistent across a number of sources but note that the
values quoted for N (70 pm), O (66 pm), and F (60 pm) are sometimes less than those quoted here. Also the
value for hydrogen is sometimes given as 30 pm. Sometimes sources give the values for the Group 1 and
Group 2 metals as larger than those given here.
It may be necessary to treat the values for the d-block elements with some caution. Values are not often given
in most sources.
Ultrawave theory relies on the assumption that the number of electrons present in an atom is much
less than currently believed, and it is bolstered by the fact that nuclei can be treated as containing the entire
mass of the atom. If current numbers of electrons are used, then the magnetic moments of the atom as a
whole would require some sort of direct transfer of the magnetic moment of the electrons to that of the
nuclei. Presently there is no real mechanism to allow the direct transfer of magnetic moment force in this
fashion. If it were possible, experiments would show atoms in a magnetic field as having electrons in a
single plane orbiting around the equator of the nucleus. Current theory presumes that not even a single
electron can be considered as fitting within any orbital pattern around a nucleus and is described as an
electron “cloud,” with the only patterns being the shells 1s, 2s, 2p, etc.
An obvious question arises about the creation of atoms with ultrawaves: if atomic particles are made from string waves, how does an electron orbit a proton? The SM supports the belief that
electrons can somehow attach themselves to all atomic nuclei to make atoms. A lot of evidence exists
indicating that this is indeed how atoms are created. UT does not discount this idea or its implications but
instead modifies the principles of what is considered an orbit in the particle realm. Because particles are
made from two-dimensional waves or superstrings if you prefer, the forces between them are transferred
through physical contact by partially decoupled forms of these waves. It then follows that the idea of some
mysterious field, or “force at a distance” holding the electron in an orbit must be wrong.
UT asserts that the strong and weak nuclear forces, as well as the electrostatic force, are all
generated by string waves in some form or fashion. The difference in scale between these forces is just a
matter of the local sizes of the interactions. The forces involved in particle interactions are always due to
one or more of these three forces. The two nuclear forces are confined to the torus, and the electrostatic
force wanders at various distances, depending on environmental conditions.
Simple physical explanations can be employed to describe the three forces. The weak nuclear force
is the decay of overly heavy tori within the nucleus of atoms that eject parts of themselves to achieve
stability. Therefore, the strong force would be an indication of perfectly stable tori that must be sheared
apart through brute force. Electrostatic force (electrical but not magnetic) is represented by charge shell
interactions, which is the manipulation of W2s waves. Magnetic force may also be a product of W2s wave
interactions, but is not confined in the same manner, so it cannot be directly equated with charge.
All normal positively charged particles have different specific charge values and can only match up
with negatively charged antiparticles, also having different specific charge values. Since all particles have
charges of either plus or minus one, the only explanation is that sharing a single W2s wave creates one unit
of electrostatic force. The magnetic portion of the force comes from the spin of the torus as it interacts with
the charge wave and is much smaller because it is controlled by the shorter frequency of the spin.
Hydrogen is the lightest element, since it is supposed to have the fewest components. Rather than
trying to explain all of the myriad possibilities for how it can be constructed, it makes more sense just to
point out all of the conditions that must be met to adhere to what we know about hydrogen, and fit that to
what has been discovered about the W waves. First of all, we know that particles cannot be accelerated to
the speed of light without an infinite amount of energy applied, so this discounts the possibility of an
electron attaching itself to the W3 wave of the torus of a proton and hence being accelerated to light speed
around the proton. In reality, it would end up being an 1836 to 1 speed ratio, as the proton would react with
an opposite motion of rotation, so the true velocity would be less than light speed. That particular scenario is
very unlikely as you will see, as it causes other problems when combining elements to make compounds.
The second condition is that the configuration of the atom is such that the addition of photonic
energy adds to the physical size of the original atom. Evidence for this is found in the way the addition of photons causes particular increases in the energy levels within an atom, and hence in its mass. This is the
Standard Model view as well as the ultrawave view. Conversely, the frequency of an ejected photon, which
allows the electron or other particle to return to a lower energy, is determined by how many energy levels
the particle descends from to reach equilibrium, or it can be viewed as the number of electron masses that
are ejected.
The third condition is a direct consequence of the physical configuration, which is the implied size
of each atom based on the size of various molecules that contain these atoms. A gas is quite different from a solid, and hydrogen is a gas; therefore, its physical makeup must be different in some fundamental way. All
elements have different phases, such as solids, liquids and gases even if some elements skip the liquid phase
and go right to a gas from a solid. When in the solid or liquid phase, the size of a hydrogen atom is at its
most compact, as are all atoms.
Size determinations of atoms may seem straightforward, but they can be tricky and depend greatly
on the 3D structures created. In Table 2 of the appendix is a comparison of the UT-calculated “ideal” size of
an equivalent mass torus versus the interpreted size of each element, based on atomic compounds. Only a
few of the first elements have covalent radii that are larger than the UT-calculated radii and by the element
Bismuth the size differential is more than reversed. Some of the discrepancy is because many of the
elements are not spin-1/2 (the sizes are not comparable), and some is due to the fact that these particular UT
calculations assume ideal spin, which is never the case.
The troubling feature in comparing UT-generated ideal nuclei sizes with experimentally measured ones—whether or not they can actually be measured accurately—is that there is an average size discrepancy of about 13-1/2 to 1, based on diatomic radii. This is more than an order of magnitude, and it is twice that
number for the covalent radii, which is actually the one we should be using to compare to the UT values. If
we are to assume that the measurements listed by the SM are at least close to being right, then corrections
are required to fix the problem and bring the two values together.
What should be changed to correct the overall diameter of each elemental atom? Of course, the only
two items that can change in an ultrawave-created particle are the torus cross section and the W3
wavelength, which is the spacing between each rotation around the cross section. The mass and the speed of propagation are fixed, and if we consider the spin to be fixed as well, then the frequency, and with it the torus cross section, cannot be changed either, so we are stuck. This leaves us with a dilemma: how do we change the particle size? If we assume that the circumference of the particle contains the whole W2
mass in a single circle, how can we reproduce a smaller measured diameter?
Although there may be more than one answer to this question, only one fits neatly with the way
ultrawaves have so far been shown to behave, and that way is in their motion. If the torus can be shown to
have a secondary curved motion that is compatible with its already circular motion, then the resulting
overall diameter of the particle can be shown to be smaller than it would normally be if there were no such
displacements. This is similar to what happens when a rubber band is twisted to the point that the turns
begin to double up on top of each other and make a bigger twisted cross section. Figure 8.1 shows what the
added curvature does to the configuration of the torus if the total length of the W2 wave and the spin radius
remain constant.
Figure 8.1
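To make the idea more concrete, here is a toy geometric model of my own devising, not a derivation from the book: the W2 path is treated as a helical coil of n turns and coil radius r wound around a guiding circle, and the guiding radius is found that keeps the total path length equal to the ideal circumference. With the assumed r and n below, the overall size comes out roughly ten times smaller than the ideal ring, illustrating how a secondary curvature can absorb path length and shrink the particle.

    import numpy as np

    def coil_length(R, r, n, samples=200000):
        """Numerical arc length of an n-turn helical coil wound on a circle of radius R."""
        t = np.linspace(0.0, 2.0 * np.pi, samples)
        x = (R + r * np.cos(n * t)) * np.cos(t)
        y = (R + r * np.cos(n * t)) * np.sin(t)
        z = r * np.sin(n * t)
        pts = np.stack([x, y, z], axis=1)
        return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

    R0 = 1.0               # ideal (uncoiled) ring radius, arbitrary units
    L0 = 2.0 * np.pi * R0  # total W2 path length to preserve
    r, n = 0.005, 199      # assumed coil radius and number of turns

    # Bisect for the guiding radius R that keeps coil_length(R) equal to L0.
    lo, hi = 1e-6, R0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if coil_length(mid, r, n) > L0:
            hi = mid
        else:
            lo = mid
    print(hi)              # ~0.1, i.e. about a tenfold size reduction here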
Clearly, the proposed secondary motion must allow overall diameters that are twenty or more times smaller than the ideal calculated diameters to fit with atomic nuclei measurements, and it does. Only a
particular number of turns will fit into a specific overall diameter, giving us a check on the viability of a
particular particle or spin-1/2 nucleus containing specific values for spin radius, overall diameter, charge
radius, etc. One qualifier is that the W2s charge wave must be defined and able to fit into this new motion
scheme in a neat and orderly way. It is easily imagined that negatively charged particles have the curvature
of the secondary motion as opposite to that of positively charged particles; therefore, our ideas concerning
antimatter construction need not change.
Figure 8.2 is an enlarged view of a one-wavelength section of torus, showing the shape of the
secondary curvature. It is the profile view of one wavelength of a W3’s secondary motion that is adjusted to
show how the magnetic moment value can be larger than predicted. Below that in Figure 8.3 is the same
wavelength, looking into it from the end.
Figure 8.2
Figure 8.3
One interesting thing to note about the shape of the torus cross section as it gets twisted into the
secondary motion, as shown in Figure 8.2, is that the tube no longer appears round. The new shape seems
stretched in one direction and compressed in the opposite direction. What this shape-shifting manages to do
is to conserve the surface area AT, so that the value remains consistent over all particle diameters. Both the
shape change and the surface area retention can only be accomplished by keeping the W2s wave motion
within the cross section perpendicular to the overall diameter of the particle, and not perpendicular to the
secondary motion direction. This is actually the only possibility that fits logically with the W2 wave’s
overall behavior, making it a more believable scenario. Because it takes the same amount of time for the W2
wave to complete one revolution around the torus in this new way, rotational rates are also conserved.
Although magnetic moments are the only evidence supporting the secondary motion, adding it seems
natural and solves the magnetic moment versus size problem.
Calculating a new particle or nucleus radius using the magnetic moment as the basis puts the size of
most nuclei at less than the empirical covalent radius. The good news is that the two are closer ratio-wise, with
the range running from approximately three times larger to ten times smaller. This is a huge improvement over the
ideal calculations, and it is possibly enough to compensate for molecular compound sizes if all other factors are taken
into account. Those other factors include the natural vibration of individual particles due to temperature, as
well as any size adaptations that may occur when combining with different atoms.
To account for temperature changes, the amount of atom vibration needs to be determined since it
affects overall atom sizes; therefore a common frame of reference is necessary to compare values derived
from different methods, or from different sources. All of the UT calculated sizes listed in Table 8B are for
ideal conditions at absolute zero. The covalent radii listed are from WebElements and are measured at
unspecified temperatures, but are more than likely at room temperature. The table shows adjusted sizes up
to the element lead, where stable atoms begin to disappear from the periodic table.
Table 8B

     Isotope            Covalent radius    UT radius (adjusted)    Ratio
 1.  Carbon 13          7.7E-11 m          3.399E-11 m             2.2654168
 2.  Nitrogen 15        7.5E-11 m          8.4547E-11 m            0.8870814
 3.  Fluorine 19 *      7.1E-11 m          0.9101E-11 m            7.8013128
 4.  Silicon 29         1.11E-10 m         4.3086E-11 m            2.5762236
 5.  Phosphorus 31 *    1.06E-10 m         2.1143E-11 m            5.0134828
 6.  Iron 57            1.25E-10 m         2.6401E-10 m            0.4734661
 7.  Selenium 77        1.16E-10 m         4.4715E-11 m            2.5941865
 8.  Yttrium 89 *       1.62E-10 m         1.7411E-10 m            0.93044527
 9.  Rhodium 103 *      1.35E-10 m         2.7065E-10 m            0.49880036
10.  Silver 107         1.53E-10 m         2.1067E-10 m            0.72626612
11.  Silver 109         1.53E-10 m         1.8307E-10 m            0.8357496
12.  Cadmium 111        1.48E-10 m         4.0218E-11 m            3.6798996
13.  Cadmium 113        1.48E-10 m         3.8447E-11 m            3.8494847
14.  Tin 115            1.41E-10 m         2.6039E-11 m            5.4150159
15.  Tin 117            1.41E-10 m         2.39E-11 m              5.8895055
16.  Tin 119            1.41E-10 m         2.2845E-11 m            6.1720125
17.  Tellurium 123      1.35E-10 m         3.2472E-11 m            4.1573656
18.  Tellurium 125      1.35E-10 m         2.6935E-11 m            5.0121537
19.  Xenon 129          1.30E-10 m         3.0753E-11 m            4.2271809
20.  Thulium 169 *      1.75E-10 m         1.033E-10 m             1.6940153
21.  Ytterbium 171      1.75E-10 m         4.8928E-11 m            3.5766956
22.  Tungsten 183       1.35E-10 m         2.0313E-10 m            0.66460465
23.  Osmium 187         1.28E-10 m         3.7007E-10 m            0.34588494
24.  Platinum 195       1.28E-10 m         3.9254E-11 m            3.2608018
25.  Mercury 199        1.49E-10 m         4.7294E-11 m            3.1504963
26.  Thallium 203       1.48E-10 m         1.4748E-11 m            10.035109
27.  Thallium 205       1.48E-10 m         1.4605E-11 m            10.133814
28.  Lead 207           1.47E-10 m         4.1096E-11 m            3.5770318

* Composes 100% of its element (see text).
The five isotopes marked with an asterisk are the only non-radioactive spin-1/2 ones composing 100% of their
element, which makes them the only truly accurate comparisons. Magnetic moment adjustments are based on the
proposed secondary motion indicated by Figure 8.1. The end column is the ratio of the WebElements listed
covalent radius divided by the UT-adjusted radius for each listed isotope. No adjustments based on
temperature are yet possible, so no other adjustments to size were included.
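The ratio column can be spot-checked directly from the two radius columns. The short Python sketch below uses only values copied from Table 8B; the three rows chosen here include the extremes, which also reproduce the roughly three-times-larger to ten-times-smaller range mentioned earlier in this chapter.

```python
# Spot-check of Table 8B: Ratio = covalent radius / UT-adjusted radius.
# All numbers are copied from the table; no new data is introduced.
rows = {
    "Carbon 13":    (7.7e-11, 3.399e-11),
    "Osmium 187":   (1.28e-10, 3.7007e-10),   # smallest ratio in the table
    "Thallium 205": (1.48e-10, 1.4605e-11),   # largest ratio in the table
}

for name, (covalent, ut_adjusted) in rows.items():
    print(f"{name}: ratio = {covalent / ut_adjusted:.5f}")

# Output (small differences in the last digits come from rounding of the listed radii):
#   Carbon 13: ratio = 2.26537     (table lists 2.2654168)
#   Osmium 187: ratio = 0.34588    (table lists 0.34588494)
#   Thallium 205: ratio = 10.13352 (table lists 10.133814)
```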
One important thing to notice is that the UT calculations produce consistent sizes. For example, the
SM covalent radii for Tin 115, 117 and 119 are all the same, and the UT-adjusted sizes are also very close
together in size. The same goes for Thallium 203 and 205, which are even closer together. Because all the
factors that affect nucleus size have not yet been fully defined mathematically, size differences are less
important at this stage than size consistency.
It may have occurred to you that this secondary motion will change some particle constants or
calculations due to the reduction in diameter, and it does. The most obvious change is to the charge shell
radius R, but since all particles have different sizes anyway, it really doesn't
matter except when providing the detailed numbers. It is the incremental properties of each W wave that
controls electrostatic behavior and not the overall length of the wave, so its total length is not important
either. The value r* will also change, but it too is not a controlling value and can be overlooked except when
dealing with attraction between atoms. The good news is that nothing earth shattering happens when the
overall diameter of a particle changes. As long as the spin radius matches the ideal radius based on mass
content, all other values remain consistent, because of their proportional nature.
Simple mathematics shows that as long as the overall length of the charge wave matches the diameter
of the particle, the frequency of the charge wave matches the frequency of the secondary spin naturally.
Having to force fit the charge wave to the secondary motion is not necessary. However, any secondary
motion could affect either the position of the W2s wave, or its overall length. If the length adjusts to fit the
new diameter then some of the wave must be ejected, or it could be that the overall diameter matches that of
the available charge wave. If the length of the charge wave were such that it fit the size of an ideal particle,
then the loop might get shifted or twisted, or undergo any of a myriad of other possible changes.
The very fact that particles are created and destroyed all the time leads one to the conclusion that
charge waves are inherently linked to the spin torus. The difference in charge shell sizes for a particle like
the electron having a small magnetic moment anomaly is insignificant, but for the proton the difference is
considerable. The charge shell diameter could change to about three times its normal size if the worst case
scenario is imagined. For simplicity, it makes sense at this stage in the development of ultrawave theory to
assume that the charge shell is always equal to the particle diameter.
Figure 8.4
The objects in Figure 8.4 show the direction of rotation of a particle/antiparticle pair having
different cross section sizes and overall diameters. This is probably not how it actually happens within a
single atom most of the time, as you will see in the next chapter, but this will do for now. Attaching
different-sized particles with charge waves, either edge to edge as shown in Figure 8.4 or with cross
sections overlapping as described in the next chapter, appears to offer the two best possibilities for how atoms
are built.
It seems that antimatter status in the guise of charge is responsible for some, if not most, atomic
construction. Antimatter annihilation takes place due to the complete overlapping of oppositely propagating
tori and bound W2 waves; therefore, atoms should not have identically sized antiparticle pairs within their
nuclei. Since the ratio of the masses between any two particles is also the ratio of their cross sections, and
since it is the magnetic properties that make two identically sized antiparticles overlap, it is the size
difference between all other combinations that keeps them from annihilating. It is hard to visualize fully all
of the interaction possibilities between all particles, but the size difference must be a big factor in the
resistance to overlapping of the tori. Accelerators propel particles or nuclei toward each other, and most of
the interactions below certain energy thresholds result in rebounds; it is therefore safe to assume that tori of
charged particles do not easily overlap.
Matter/antimatter annihilation is a little tricky. As was shown in an earlier chapter, an atom can be
formed with an electron and its antiparticle the positron. What happens with oppositely charged particles
depends on their motions. If matter and antimatter have relative velocities below a certain threshold and are
oriented in opposite directions due to the presence of a magnetic field then an atom will be formed. That
statement is not true if the particles are already a part of a larger conglomeration of matter or antimatter. The
entire mass momentum needs to be accounted for in those instances, because it is so much easier to
overcome single particle charge with lots of mass momentum. There are, however, no restrictions when
combining neutral particles of equal mass in this fashion.
One atomic mass unit is equal to 1/12 of a carbon 12 atom. Hydrogen (1H) has a WebElements
listing of 1.0078250321 amu with a standard deviation of four in the last digit; the NIST value using the
proton and electron mass sum is 1.007825046519 amu. The difference between the WebElements value for hydrogen
and the NIST proton-plus-electron sum is 2.39426E-35 kg, which is less than 1/8000 of an
electron mass. This is well within the limits of tolerance for the hydrogen mass as measured in atomic mass
units. Either an adjustment is needed to bring the empirical measurements in line between these two
sources, or there is a reason for the discrepancy between them.
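As a rough check of the numbers quoted above, the sketch below converts the two amu listings to kilograms and compares the gap to the electron mass. The amu-to-kg factor and the electron mass are standard CODATA values, which is an assumption on my part since the text does not state which vintage of constants it uses.

```python
# Compare the WebElements hydrogen mass with the NIST proton-plus-electron sum.
AMU_TO_KG  = 1.66053906660e-27   # kg per amu, CODATA value (assumed)
M_ELECTRON = 9.1093837015e-31    # kg, CODATA electron mass (assumed)

h_webelements = 1.0078250321     # amu, as quoted in the text
h_nist_sum    = 1.007825046519   # amu, proton + electron sum, as quoted

diff_kg = (h_nist_sum - h_webelements) * AMU_TO_KG
print(f"mass difference: {diff_kg:.5e} kg")                           # ~2.394e-35 kg
print(f"fraction of an electron mass: 1/{M_ELECTRON / diff_kg:.0f}")  # ~1/38000
```

The gap works out to roughly one part in 38,000 of an electron mass, comfortably inside the "less than 1/8000" bound stated above.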
Calculations show that the combined mass of the electron and proton charge waves is 5.582E-36 kg,
which is too small to account for the difference. It is always possible that something such as this could have
some bearing on the discrepancy between the two sources, but something not yet accounted for would have
to increase the amount of W2 wave mass. From the SM
perspective, the reason for the two inconsistent mass listings is only due to inaccuracies. For UT, there are a
lot of physical possibilities for the mismatch of values, but it is simpler to ignore them for now.
It seems logical to assume that the mass of a hydrogen atom is equal to the sum of the masses of the
electron and proton. Combining two particles edge to edge, such as the electron and proton, does not require
any special charge wave alteration, and probably no mass compensation. This leads us to wonder whether
other such constructions exist in the atomic world. According to the SM this is the only special case, since it
is believed that all other atoms are built with neutrons that have a binding energy addition and compensating
mass reduction. If you believe the evidence given in Chapter 6 that shows there are more than just three
particles, then there must be other such cases where there is one positive or negative
nucleus particle attached to an oppositely charged single particle.
One possibility for atomic construction that may have occurred to you is that a W2s charge wave
might insert itself between particles. The charge wave would have to adapt to both oppositely charged
particles, therefore, using the electron and proton for example, the particle sizes could remain unchanged.
An assemblage of these three items would look like Figure 8.5, with the W2s wave as a loop connecting the
two tori perpendicular to their rotations. If the whole atom rotated, as it seems it would, then the center of
mass would be offset slightly from the center of the proton because of the added electron mass at a
considerable distance from the nucleus. The effective atom radius would then be larger than the UT
idealized one, by an amount equal to the W2s wave diameter plus the mass offset. It is unclear how to
determine the diameter of the charge wave in this configuration.
Figure 8.5
With the spins of the two particles offset, the W2s wave in Figure 8.5 must be considered a
nearly solid loop. It does not allow the particles to get either closer or farther apart without additional outside
influence. A difference in spin frequencies exists between the two particles and will make the larger of the
two rotate about its axis more slowly than the smaller one does. This would make the charge wave have to
hold the two particles against any kind of torque differential, and that seems unlikely.
The orbital speed of each item shown depends on the ratio of the two particles' masses and their
distances from the center of mass of the system. An orbital direction that is clockwise when viewed from
the top would result, and is opposite the natural rotation of the smaller particle. Additionally, because the
charge wave wants to move at light speed with the W3 wave, the individual speeds will be proportional to
the size of each particle so that an electron will have to travel approximately 1836 times faster than a proton
around the center of mass. The total sum of the spin rates must add up to the speed of light, therefore the
velocity of an electron would increase if it attached to larger nuclei with larger masses.
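The factor of roughly 1836 quoted here is simply the proton-to-electron mass ratio; a one-line check with the CODATA masses (my assumed values) reproduces it.

```python
# The ~1836 factor is the proton-to-electron mass ratio.
M_PROTON   = 1.67262192369e-27   # kg, CODATA value (assumed)
M_ELECTRON = 9.1093837015e-31    # kg, CODATA value (assumed)

print(M_PROTON / M_ELECTRON)     # ~1836.15
```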
This construction possibility has some inherent complexities that are not only hard to verify but also
hard to justify. The complexities arise from the spins of the two particles about their axes, as well as the spin
of the charge shell about an axis parallel to the other two. Distortions of the two-particle system that would
occur from spinning them about each other at such a high rate of speed would be hard to pin down. It may
very well be that the electron would distort dramatically. It seems improbable that the W2s loop would not
also distort and lose cohesion with the tori of the particles. The idea that the two particles are separated at all
looks less likely after going through the exercise of how they would have to behave. The Standard Model
view of an electron somehow orbiting a proton is not workable under UT constraints. This does not even
take into account the unlikely condition that two such atoms could form any sort of stable attachment, the
pairing of which is how most gases form and is the basis for the covalent radius measurement.
Only if we can determine the sizes of other nuclei whose spins are not 1/2 will it be possible to place
size constraints on atoms to determine if the UT calculated sizes for the spin-1/2 nuclei are of the correct
magnitude. Because charge shells are the biggest factor in determining overall size, the configurations for
them must be determined precisely and the sizes made accurate. Fitting them to the nuclei will prove what
that configuration may ultimately be. The idea that protons and neutrons form all atomic nuclei is also
becoming less believable, as evidence provided by UT principles shows something quite different is
possible.
One complication in determining atom sizes is the fact that they appear to change size based on the
atom or atoms with which they are combining. Oxygen will therefore have one size when it combines with
hydrogen to make water and another when combining with carbon to make carbon dioxide, and yet another
when making iron oxide. It is more than just the fact that these compounds represent the three phases of matter; it
is that the oxygen atom will be a different size with each atom that it contacts. Every compound will have
components that must be measured accurately to determine the atom size within that compound. The good
news is that once an atom has combined into a compound, any size change caused by a change in temperature
is an incremental change that remains consistent. The only exception to a consistent
size/temperature ratio is when the phase change of liquids to gases or solids to gases takes place.
When trying to picture how atoms achieve their apparent stability, ultrawaves that produce nuclei as
single spin-1/2 particles, at least in some cases, explain that stability more fully. The most likely
candidate for the simplest type of atomic construction is the design of Figure 8.4. A few other types of
constructions will be examined in the next chapter.
Chapter 9
Creating all Spin Types of Atoms
“Poets say science takes away from the beauty of the stars—mere globs of gas
atoms. I, too, can see the stars on a desert night, and feel them. But do I see less or
more?” Richard Feynman
The periodic table, originally created by Dmitri Mendeleev, is the current list of elements arranged in a way
that gives order to the many different sizes and masses of atoms. Certain groupings of atoms show similar
characteristics in their behavior. The Standard Model contends that the arrangement of these groups is such
that each of the atoms in a vertical column has something similar about its construction, as compared to the
ones above and below it. Similarly, the horizontal arrangement differs between atoms by supposedly having
different numbers of electrons in their valence shells. They are also grouped horizontally by mass (with the
exception of isotopes), until the next transition of size and behavior takes place and the next row is started.
The atoms in the left-hand and right-hand columns of the periodic table are those transition elements. Some
of these suppositions are true and some are not.
Ultrawave theory agrees that there are similarities within the components of each vertical column,
and that there are indeed differences between columns. What it doesn’t agree with are some of the
explanations for those similarities and differences. The similarities in the vertical columns are most likely
due to the charge properties, but not necessarily to the number of electrons present, if indeed there are
even any electrons present in some atoms. UT proposes that each column moving from left to right in the
table has a core construction that differs from the ones beside it in some fundamental way that does not have
to be related to protons, neutrons or electrons. Since the longest row has thirty-two elements, this implies
that there could be at least thirty-two different types of construction.
Because the sizes of nuclei do not vary all that much, it makes sense to ask ourselves whether all
atoms could be particles or particle combinations other than protons and neutrons, and whether or not they
could be made without any attached electrons. Of course, neutral atoms would not have attached electrons
and would be one large neutral particle, or particle combination that left no external charge. Some atoms
would have a charge but no attached electrons, being just a large antiparticle. Any atom that showed a dual-charge behavior could have two electrons, but could just as easily have two negatively charged particles
attached that are bigger than an electron. Just these few particle configurations would produce no less than
ten different types of atoms.
Something that can’t be overlooked when dealing with the creation of atoms is spin. The particles
examined in earlier chapters were all spin-1/2 particles. Other possible types of spins produce different
shapes or configurations for atoms, but those configurations are not yet proven. The spins of most atoms
have been measured, and that data shows atomic nuclei having spin values in 1/2 increments from 0 to 15/2.
The zero value for spin is just a special case where the particles are arranged so that the spins cancel, and it
does not provide any indication of the size of the atom or the actual number of particles. Even number spin
values must be made from at least the number of particles it takes to make that number in 1/2 value
increments. This does not mean that the spin necessarily indicates the actual number of particles in the
nucleus, only that it is the most likely configuration. Atomic nuclei are therefore composed of one to fifteen
particles, assuming that even numbers that cancel out are not equal to sixteen or more.
UT characterizes the range of spins for the nuclei of atoms somewhat differently than does the SM.
UT atoms contain from one to fifteen spin-1/2 particles in their cores. This is very different from the SM,
which indicates that there can be hundreds of particles just in the nucleus alone. Considering that particle
masses exist, which are larger than the proton or neutron, it is possible for there to be only fifteen particles
in the nucleus of an atom and still provide the necessary mass for heavy nuclei. The helion nucleus for
example, has been shown to be a single spin-1/2 particle; therefore, other atoms could also have nuclei
made from single particles. A proposition like this raises the number of stable particles from three to
thirty-two, including the electron, proton and neutron, which is a much higher number than previously believed
possible. In reality, thirty-two different basic construction techniques isn't very many when you consider
that there are more than two thousand known isotopes.
Incidentally, a good place to see the large number of isotopes is at the government-sponsored
website of Brookhaven National Laboratory, http://www.nndc.bnl.gov/nudat2, which lists all of the recorded
isotopes, regardless of stability. The stable nuclei count is 287, or those with half-lives greater than 10E+15
seconds. There are at least that many more with half-lives between 10E+7 and 10E+15 seconds, which may
also be considered stable when compared to the lifetime of a human being, which is about 2.841E+9
seconds for a 90 year old.
The eight different types of atoms times the fifteen possible particles in the nucleus create as many
as 120 different atoms. When you multiply these 15 spin-type atoms with the proposed 31 stable nuclei
particles listed in Chapter 6, excluding the electron, the total number becomes 465. While this is a fairly
large number, it doesn’t even take into account the possibility of combining different sizes of particles
within a particular nucleus. Because single particles can reside at the core of at least some atoms, and not
protons, neutrons or any combination of the two, then antiparticles larger than the electron may attach to
them. This creates a plethora of possibilities for atomic construction. Physical sizes and magnetic moments
also play a role in how particles combine, and may determine just how many of them can exist.
Our first line of attack is to determine the atomic construction of a few atoms larger than 1H
hydrogen. Skipping the deuterium (2H) and tritium (3H) atoms for the time being, let us examine the helium
3 atom (3He) to determine if its construction can be shown to match UT proposals. Final assemblages of
particles in creating atoms do not require any additional mass losses once all of the components have been
created. A mass difference exists between the sum of two protons and a neutron versus the helion of 3He,
which is accounted for in the Standard Model with the concept of binding energy. What binding energy
consists of or how it works is not clearly defined, since it varies greatly depending on the number of protons
and neutrons present. For UT’s determination, it is somewhat different; two electrons attach to the W4 torus
of a single helion particle, and no mass is lost except in the creation of the helion itself.
How do we know two electrons attach to the helion? If we subtract the mass of the helion
(5.006412E-27 kg) from that of 3He (5.008234E-27 kg), we find a difference of 1.8219E-30 kg, which means that
there are two electrons attached to the helion, or at least a mass equivalent to two electrons. UT does not
adhere to the belief that there must be an equal number of canceling electrically charged particles, only that
there are two different antiparticles present for each charge. Because a single charge wave attaches two
antiparticles, only the existence of the charge wave itself determines one unit of charge; therefore a helion
with two electron masses attached is a valid arrangement. The charge cannot be the same for all three particles,
however; because only antiparticles attach, and the helion is negative, there cannot be actual electrons attached to it.
In this case the two electrons' worth of mass combines into a single particle that has a positive charge (let's call
it a double-positron) and attaches to the helion, helping to keep it stable. It does not remain stable outside of
the atom, which is why it is not observed in nature. Considering how other atoms stabilize at one or two
electron masses, this negative to positive charge change with mass increase occurs frequently.
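A minimal sketch of the subtraction described above, using the masses quoted in the text; the electron mass is the CODATA value, which is an assumption since it is not restated here.

```python
# Mass difference between 3He and the bare helion, expressed in electron masses.
M_HE3      = 5.008234e-27        # kg, as quoted in the text
M_HELION   = 5.006412e-27        # kg, as quoted in the text
M_ELECTRON = 9.1093837015e-31    # kg, CODATA value (assumed)

diff = M_HE3 - M_HELION
print(f"difference: {diff:.4e} kg")                    # ~1.822e-30 kg
print(f"in electron masses: {diff / M_ELECTRON:.3f}")  # ~2.000
```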
In a normal Standard Model configuration, pairing two protons with two electrons means that there
is an overall neutral charge for the helion. The charge of the two protons cancels that of the two electrons
and all is well in atom land. What isn’t accounted for is the fact that the magnetic moment is negative,
meaning that the neutral particle is in control. SM construction uses two protons instead of two neutrons and
that just doesn’t work. Of course, the SM does not relate magnetic moment values being positive or
negative to the number or type of particles involved in the creation of an atom. They make matter from
quarks and don’t worry about physical extent, so there is a lot more flexibility in SM atom construction.
According to UT calculations, the full physical size range of the radius for liquid or solid 3He
extends from a minimum equal to the magnetic-moment-adjusted helion radius plus one adjusted double-positron diameter, or 1.127E-11 m, to a maximum of an unadjusted helion radius plus an unadjusted double-positron diameter, or 7.1227E-11 m. Unfortunately, there was nothing to compare these numbers to, since
there was no readily available data for the rare 3He isotope. If the size of helium 3 is only slightly larger
than what the addition of two positrons can supply then a single positive particle of two electron masses
could well be the answer.
If a helium 3 nucleus were composed of two protons and a neutron of reduced mass, its diameter
range would have an upper limit of 1.4362E-10 m. Such a limit is predicted by the magnetic moment for
the ideal particle sizes and is well beyond that of a single particle with the proper mass. If such a size were
to be measured it would not necessarily identify this particular construction, as something about gases
makes them larger than UT calculates they should be. Unfortunately, a magnetic moment value for a helion
composed of three separate particles is not easily determined, as it depends greatly on the way the particles
combine. Shortly, you will be shown a proton/neutron combination, and it will provide some proof as to
why the helion best fits the data for a single particle.
A triton nucleus seems very similar to a helion nucleus, with the only differences being the mass and
magnetic moment. UT views them as very different, just as the SM does, but with differences that are more
in line with those between the proton and neutron rather than with proton and neutron quantity differences.
The biggest difference between helium 3 and tritium is that tritium only has one electron attached. That is
why the SM lumps it with hydrogen even though the triton is a little heavier than the helion. Even though
the core is a single particle that is heavier than the helion, it is the triton’s larger magnetic moment that
causes it to be smaller in diameter. It has only one electron attached because the core is a single positive
particle that is smaller than the helion.
Since only spin-1/2 particles and nuclei have been examined so far, what can be determined about a
spin-1 nucleus such as the deuteron? Magnetic moment values only rely on the small radius r of a particle or
particles; the overall size of the particle can be any value up to the ideal R value given in Chapter 3.
Since single particles are tori with certain sizes, if we stick two of them together the magnetic moment will
be the resultant magnetic moment of the physical combination of both particles. The actual deuteron
magnetic moment value is 4.331E-27 J/T, which is much smaller than that of either the proton or neutron. It
only seems logical that the magnetic moments subtract to produce the measured value. Other logical aspects
of the construction that fit with a classical point of view are that they combine in such a way as to keep the
spin-1 attribute, meaning they both spin in the same direction, and that there is a change in mass by the
ejection of W2 wave in one form or another. Even though there is the possibility that both particles could
lose mass, it is almost certain that it is only the less stable neutron that ejects mass.
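The subtraction argument can be illustrated with the standard measured moments. The proton, neutron, and deuteron values below are the CODATA figures (an assumption on my part); adding the proton and neutron moments, the neutron's being negative, lands near, though not exactly on, the deuteron value quoted above.

```python
# Proton and neutron magnetic moments partially cancel (the neutron's is negative).
MU_PROTON   =  1.41060679736e-26   # J/T, CODATA value (assumed)
MU_NEUTRON  = -9.6623651e-27       # J/T, CODATA value (assumed)
MU_DEUTERON =  4.330735e-27        # J/T, CODATA value, for comparison (assumed)

combined = MU_PROTON + MU_NEUTRON
print(f"proton + neutron:  {combined:.4e} J/T")      # ~4.444e-27 J/T
print(f"measured deuteron: {MU_DEUTERON:.4e} J/T")   # ~4.331e-27 J/T
```

The simple sum leaves a residual of a few percent; the point being illustrated is only that a subtraction of this kind gets into the right range.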
Physical geometry dictates that the deuteron’s construction consists of a proton and a reduced mass
neutron with both in particular orientations. Actual orientation is dependent on the type of configuration.
The most logical construction suggests that the proton be oriented upright and the neutron oriented upside
down with respect to a magnetic field, so that they spin in the same direction. Tori could spin in either
direction at the cross section connection, but that arrangement is less likely. The reason is that the spin might sometimes be
measured as 1/2 and sometimes as 1, depending on the severity of the destructive interactions.
There is no condition where two particles can have the exact same spin and magnetic moment
without annihilating, unless a magnetic field controls the orientation, so only two possibilities exist for how
the proton and neutron combine. Possibility one is that the two particles overlap, with one torus fitting inside the
other, presenting their mass tori as one nested unit with slightly different cross-sectional radii. They will
then rotate together without any space between them and the secondary curvature can still be perfectly
aligned. Possibility two is that the tori connect edge to edge, and rotate around each other to compensate for
the cross-section mismatch. There are several ways in which the two can attach edge to edge, but for
simplicity it will be assumed that it is a direct charge shell to charge shell connection with the tori oriented
in the same plane. Remember, as shown earlier, a side by side edge connection is much less likely.
Figures 9.1A and 9.1B show two overlapping and nested configurations. The first is where one torus
completely surrounds the other when presenting their secondary spins. The second version is where one
torus lies just outside the other at all times, also with secondary spins controlling the positions. It is less
troublesome to have one torus completely isolated from the other, as in Figure 9.1B; otherwise charge shell
problems could occur. Either way the magnetic spins overlap. It is impossible to rule out either construction
at this stage, so it is a toss-up for picking which way to go with nuclei constructions.
Figure 9.1A
Figure 9.1B
[Please note: none of the figures in this chapter are to scale in any of the dimensions shown.]
One deuteron construction that was rejected early in the search for how to create nuclei with spin-1/2
particles of various sizes was the one where both particles lose mass. In this scenario, each particle can have
any mass necessary to achieve the total, while also producing the other factors like magnetic moment and
size. This unfixed mass for the particles gives only a range of values, not a single set. It made no sense to
pursue such constructions that had no hope of determining specific values. Due to the fact that there are
many other possible ways to construct nuclei, it seems foolhardy to present any other ideas about how it
might be done unless information is acquired that supports a different approach. For the time being, it will
be assumed that the construction of the deuteron is like that of Figure 9.1A, a proton plus a neutron where
the proton is unchanged in mass, and the explanations that follow are based on that assumption. Figure 9.1B
allows for charged particles to overlap without crossing charge shells during rotation, so that one will be
reserved for nuclei with more than one charged particle.
When creating a deuteron, only the lowered mass of the neutron produces values that allow the
preferred construction of 9.1A, which isn’t possible with a standard neutron mass and a reduced proton
mass. Also, the matching of spin radii seems to explain why part of the W2 wave is ejected from a neutron.
Now we have a physical explanation for binding energy that makes sense. What remains to be answered is
why this particular value of mass for the new neutron is needed. A somewhat larger or smaller neutron mass
would still have worked, producing similar conditions. Determining why all particles have their specific
masses is likely the key to understanding atomic construction and answering this question. The most likely
explanation for the deuteron is that the stable proton supports a particular mass of neutron to fit inside it and
then stabilizes it. The mass would be entirely unstable on its own, so that is why there is no evidence for a
particle of that particular mass. Fitting the reduced mass neutron outside of the proton would produce a
slightly larger size, and is a viable option, especially as it does not require the neutral torus to cross through
the charge shell of the proton.
All of the atoms we are looking at are gases. The behavior of gases must surely be a function of the
motion of the atoms, which is how they achieve sizes that are bigger than they should be. Several types of atom motions
are possible, but the two most easily imagined are rotational imbalances between components associated
with the edge to edge configuration, or pumping motions associated with both nested and edge to edge
configurations. The rotational imbalance would be caused by the smaller particle being unable to acquire the
proper size to remain in perfect rotational synchronization with the larger one. It would have to rotate
around the larger particle at a velocity that was the result of the difference in magnetic moment radii times
the speed of light. The pumping action would be due to the inability of the two components to completely
find a stable attachment size. They would cycle between two limits that in the end would give them the
appearance of spheres of changing size, both getting bigger or smaller at the same time. Both of these
motions would create a force of specific energy, which would be measured as gas pressure. While these two
proposals aren’t the only possibilities, they are the simplest.
Now that we know a little more about how particles can combine, let’s look at the electron and
proton when making a hydrogen atom and see what we can learn. Two possibilities exist for combining
electrons and protons, one is the nested configuration like the one used to create a deuteron nucleus, and the
other is an edge to edge linkage. Unfortunately, a multitude of subcategories fall within these two
construction methods. It would make sense just to have the two become the same size, but the magnetic
moment tells us that they cannot. The next step is to eliminate all except the most straightforward and
simple one from each category to pursue. It rarely pays to make unproven judgments, but that was what was
done in this case. With the lack of detailed information about atoms, it was necessary to make a lot of
judgment calls about how to proceed. The following paragraphs deal only with what was considered to be
the two best possibilities from both categories.
We will start by listing some well known details about the hydrogen atom. A most important item is
the Rydberg value for hydrogen of 1.0967758E+7 1/m. Next, the limits of the magnetic moment for the electron
must fall between the lower limit of 9.27400915E-24 J/T, which is equal to the electron magneton, and the
upper limit of 2.94964567E-20 J/T, where the electron's R and r radii become equal. Finally, the atomic
radius of 1H has been measured to be about 2.5E-11 m.
A Rydberg change indicates that one of the constants of the electron changes slightly. It can’t be
related to the proton, since the Rydberg constant for the proton is 2.01494465523354E+10. If the mass of
the electron changed to 9.104424E-31 kg then the Rydberg equation works. This change does not affect the
value of alpha, only the Rydberg constant. The difference noted earlier between a proton plus electron mass
and a hydrogen atom mass was at most a 2.39426E-35 kg mass difference. This is not enough to account for
the reduced electron mass, so we must assume a reduction in the electron mass that makes it a new particle,
and an increase in the proton mass, which also makes it a new particle.
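For reference, the 9.104424E-31 kg figure coincides numerically with the conventional reduced-mass value for the electron in hydrogen, and the same correction reproduces the hydrogen Rydberg value quoted above. The sketch below uses CODATA inputs, which are my assumed values; whether that correction is interpreted as an actual change in the electron's mass is the question the text is raising.

```python
# Reduced mass of the electron in hydrogen and the corresponding Rydberg value.
M_ELECTRON = 9.1093837015e-31    # kg, CODATA (assumed)
M_PROTON   = 1.67262192369e-27   # kg, CODATA (assumed)
R_INF      = 1.0973731568160e7   # 1/m, Rydberg constant for infinite nuclear mass (assumed)

mu  = M_ELECTRON * M_PROTON / (M_ELECTRON + M_PROTON)
r_h = R_INF * mu / M_ELECTRON

print(f"reduced mass: {mu:.6e} kg")    # close to the 9.104424E-31 kg quoted above
print(f"R_H: {r_h:.7e} 1/m")           # ~1.0967758e+7 1/m, matching the value quoted above
```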
There is another reason to believe that the change in mass is the correct assumption; the atom size
hints that the proton is forced to its larger ideal size. Calculating all the sizes brings us to an edge to edge
radius for the hydrogen atom of approximately 2.3951E-11 m, which is only marginally larger than the
2.3925E-11 m radius of the increased mass proton alone. While this is not exactly the 2.5E-11 m
measurement, it is relatively close.
What has not yet been taken into account is the magnetic moment. In a typical edge to edge
configuration the proton must have a bigger magnetic moment radius than the electron. Its value must be
greater than 1H by an amount equal to the electron’s magnetic moment. A nested configuration like the
deuteron where the electron R radius minus the magnetic moment radius equals the proton R radius plus its
magnetic moment radius is not achievable, as the electron R and r radii become equal first. Sizes for the two
particles are naturally close by the time this limit is reached, and are far from the measured size.
What about the magnetic moment of the increased mass proton? Using the anomaly ratio listed on
webelements.com, the value is only off by a small amount. The WebElements listing is 2.7928456 nuclear
magnetons, which is just another way of saying its magnetic moment anomaly if it were a single proton, but
the calculated value indicates that it should be 2.7928448. Since no tolerance is given on the
webelements.com site, there is no way to know if this number falls within an acceptable range. The only
thing we can say for sure is that the electron does not contribute to the total magnetic moment if the mass
changes take place.
When taken as a whole, this information about mass, size and magnetic moment effectively
eliminates an edge to edge configuration, unless it can be shown that the electron loses its magnetic moment
when some mass is lost, which is very hard to believe. According to the Rydberg value, the electron has to
retain its normal particle status to achieve the given number, so nothing strange is happening with it. That
leaves us with the conclusion that magnetic moments can be shielded just like charge. Only one possibility
comes to mind about how that might be accomplished; the wave responsible for magnetism does not
penetrate the charge shell. This means that the electron is fully contained within the proton charge shell.
An alternative view is that the electron and proton are so loosely attached that they do not have an
effect on each other, at least as far as the magnetic moment is concerned. What is measured for hydrogen is
just the contribution of the altered proton. The magnetic moment for the altered electron would probably
just be its magneton value, since it is being assisted physically by the altered proton. There is no way to
know for sure what happens with the altered electron, since it would have to be separated from the altered
proton and would then return to normal before it could be measured. This scenario seems more likely to be
able to build diatomic atoms.
Since hydrogen is a gas with a large molecular radius, it can be postulated that the electron spins
about the proton giving it a reason to be unapproachable. Now that this type of shielding has been
postulated, other reasons for the behavior of gases can be considered. Having two equally possible
alternatives for the atomic construction of single-atom hydrogen makes it more difficult to pick a direction.
A factor that may help in making the decision is ionization. If electrons are easily stripped from atoms—not
just hydrogen—then their shielding within the nucleus’s charge shell is a less likely prospect. This also
precludes the mass reduction scenario alluded to above, as lone particles do not exist with reduced masses.
UT does not adhere to the large number of electrons supposed to exist within atoms, so having a relative
few means that they must be accessible for ionization purposes. This suggests that loose binding of
electrons to nuclei, such that their magnetic moments do not correspond with the overall atom value, is
probably the correct construction.
One final consideration about how particles look when they combine involves the charge shell. If the
shell retains its full size as a particle reduces from a perfect torus down to the size dictated by the magnetic
moment anomaly, it would appear something like the picture in Figure 9.2. This proposed construction
would shift the charge sphere so far to one side for a particle like the proton that it would become like one
of those lopsided balls that bounce strangely when they are dropped. It wouldn’t be hard to imagine how an
offset shape like this could combine with an electron to produce an assembly with a motion that could be
interpreted as gas pressure. For now, there are still too many possible ways to build atoms. What we need
are a few more examples to try to solve and see if that helps fortify our decision making.
Figure 9.2
Let’s jump to the regular 4He helium nucleus and see if we can figure out anything about its
structure. The only reason to assume that the Standard Model is not correct with its idea of a two-proton,
two-neutron nucleus is the fact that the mass is not equal to those four particles. Nor is it equal to the mass
of two deuterons. It was already shown that a spin-1/2 nucleus such as the proton has an electron attached to
it with little or no mass discrepancy in the proton. If other spin-1/2 particles are shown to exist, which the
power curve table suggests they do, there is no reason to expect that they require any mass loss either; the
same should hold for other spin types. An atom such
as helium 4 should be constructed from particles that add together to achieve the mass total, but without
some type of magnetic moment measurement for the individual components, exact descriptions cannot be
made as to nuclei makeup, only suppositions.
Helium has two attached electrons and a spin of 0, which means that its nucleus has canceling
particle spins. Assuming that two particles with the same charge can exist within nuclei, Figure 9.3 shows
the simplest construction. It is possible to have two particles of the same type if they are oriented 180° from
each other, and they have slightly different masses. Identical antiparticles would annihilate unless they were
oriented with spins going in the same direction, but in that case the overall nucleus spin would be 1 and not
0, so we know this is not the case.
These suppositions are derived assuming a nested configuration. An edge to edge construction
would require opposite orientation of the spins to be able to rotate together, and would produce a spin 1
condition. This assumes that no charge shells exist to prevent attachment in the first place. Edge to
edge seems rather iffy unless it can be shown that two deuteron-like pairs, except with spin zero, can exist
within the nucleus and also produce spin zero when combined.
Unequal size particle pairs would show spin 1/2 if only one of the two could be annihilated, which
would tell us a lot about the construction of helium. This spin-1/2 result is seen with hydrogen, which we
know is composed of two different sized particles. 4He appears not to be composed like this simply because
it has spin 0. It might still be composed of four particles, but they would have to come in pairs of equal size
and two of the four would have to be annihilated simultaneously. This assumes, of course, that the energy
released is equal to that of all four particles. If it weren't, then there would have to be a pair of spin-0 particles
left over from the initial annihilation event, and evidence of that should be seen.
[It must be pointed out that my understanding of how spin is measured may not be correct, which
would mean that my beliefs about atom construction could be totally wrong. Also, my speculation about
how particles that are attached together behave during destructive collisions may be just as wrong. That
would mean that four independently attached particles could present as one single particle of equivalent mass.
Either of these possibilities could negate much of the information included in this chapter.]
The top construction pick for helium is shown in Figure 9.3 where the nucleus is simply two
identical overlapping particles rotated 180° from each other. The nuclei particles would be neutral ones but
with spins opposite that of a neutron. Alternatively, but not shown, there could be one positive particle and
one neutral particle with spin the same as a neutron. For the neutral particle pair, there is only one electron
attached to each particle; therefore, the electrons also have to be rotated 180° from each other. Each side can
be viewed as a particle/antiparticle pair rotating in the same direction, so this configuration is very stable.
Any rotation from an external perspective is equal for the electrons, as they will rotate at the same rate being
attached to equally sized nuclei particles. Because the rotations are opposite within the core of the assembly,
there will be no magnetic moment and the spins will cancel. The cancelling of magnetic moments and spins
is also true for the electron pair. For the positive and neutral pair nucleus, there would be two electrons
attached to the positive particle at 180°, but that construction would look almost identical.
There is a small chance that during destructive collisions, only one of the two nuclei particles will be
destroyed, leaving a single particle with one or two electrons attached. The likelihood that it will be two
electrons is small since the one attached to the destroyed particle is turned upside down from the way it
would normally attach to the nuclei particle, and would most likely be ejected before it had time to right
itself. Although the occurrence probability is low and the half-life likely to be short, especially since a
nucleus of half the mass is not listed anywhere, a destructive scenario such as this with a short lifespan
should be detectable with our current level of technology. It would verify whether or not the two-particle
nucleus construction is correct. On the other hand, in the positive and neutral pair scenario the two electrons
should be oriented in the same direction, as they are both attached to the positive particle, and that
construction should be just as easily verified by experiment.
Figure 9.3
(The two particles in the nucleus are shown as separate and unconnected tori for clarity.)
Size is a key indicator of construction, and diatomic helium—two atoms of helium attached to each
other—has been measured with a radius of approximately 3E-10 meters, which is about ten times bigger
than the covalent radius of single helium. UT builds diatomic helium with about twice the diameter of
monatomic helium. One would think that a combination of mass, magnetic moment and adjusted electron
size that fits the 3E-10 meter radius is needed, but there is more to it than that. First, helium is a gas and
when it is at room temperature the vibration of the atom greatly affects its perceived size. It would make
sense to rely only on the accuracy of atom sizes when they are measured at cryogenic temperatures, i.e.,
when they are in a liquid or solid state. None of the web locations have information on covalent radii listed
in this manner. Without knowing if gases are measured at a particular temperature, any size given is suspect
and only a range of values can be used.
Figure 9.4
As a solid, diatomic atoms should appear when pairs of single atoms attach without the nuclei
touching but with electrons as connectors. With only a slight shifting in the location of the electrons, it is
possible to attach two helium atoms together. The construction of each half of the atom is like that of Figure
9.3. The new construct for a connected pair would look like Figure 9.4, which shows how the two would
shift electrons. Although it is far from conclusive, this is what diatomic helium might look like using the
basic set of ideas presented in this chapter.
So how does ultrawave-constructed regular and diatomic helium stack up size-wise against the SM
proposed sizes for their covalent radii? It depends on what value you use. Different sources propose
anywhere from 3.2E-11 to 4.9E-11 m for the single atom radius, and from 9.3E-11 m up to 3E-10 m for the
diatomic radius. Unfortunately, the only way to achieve a size determination is to propose a magnetic
moment for the nuclei particles, thereby producing a calculation for a size that matches what is observed. A
value range of 3.776313E-27 J/T to 2.466164E-27 J/T is needed for the magnetic moment of each particle
to produce a measured size range for single atom helium of 3.2E-11 to 4.9E-11 meters. Assuming that the
helium is in a solid or liquid state, this puts the diatomic radius at twice these values, or 6.4E-11 to 9.8E-11
meters. Because we have not taken into account temperature or any other considerations about why gases
might be different, it is still too early to panic about any discrepancy, especially since the charge shell of an
ideal particle may always be retained. As far as UT is concerned, all atoms should be measured as close to
absolute zero as possible to put all elements on equal footing. If they are all solids, then the size differences
between atoms should depend only on their mass and magnetic moment values.
The Standard Model view for 4He construction is that of two protons and two neutrons in the
nucleus. This is possible under UT, because as was noted earlier, neutrons are antiparticles to protons. Since
neutrons are less stable than protons, and protons have a high secondary anomaly motion, they will both
likely do some adjusting to match each other’s diameters and form slightly different stable particles, or at
least the neutron would. If this size and mass matching is true, then it is a good bet that the proton and
neutron could pair together to create four tori that overlap. They will be equal mass pairs that rotate in
opposite directions. The combination would look similar to the red and yellow tori of Figure 9.3; however,
there would be two pairs and one pair would be slightly bigger than the other. Having a pair of these
overlapping tori would still produce a cancelled spin for 4He, as long as one pair can still annihilate the
other pair even though they are slightly different sizes. The only problems would be determining how far
apart to make the masses, determining how to combine the protons into one charge shell, and figuring out
what happens to the lost mass. These are not easy problems to overcome.
What is needed is a simple method for determining whether nested or edge-to-edge is the correct
configuration to use in any given situation. Nested particles give a size that is closest to the NIST rms
charge radius of the deuteron, and so that seems the proper design to follow. On the other hand, the side by
side edge linked configuration may better answer some of the questions about the underlying mechanism for
the behavior of gases and seems to fit hydrogen better. The most likely explanation is that nesting occurs
with most nuclei components, and that edge linking may only apply to electrons or electron-like particles. It
could also indicate that a large size differential applies to the linked particles. In any event, each example
will need to be thoroughly examined before a final determination can be made.
Unfortunately, as you are now painfully aware, the possibilities for creating atomic matter are too
numerous to list. Without a great deal more information it is unlikely that a definitive answer can be given
as to how atomic construction occurs. Only those who are privy to the specific details and results of the
pertinent experimental data will be able to solve the puzzle of exactly how atoms are constructed, as well as
how many there really are. It could very well be that what we consider isotopes are actually completely
separate atoms, and that some supposed atoms are just combinations of several lower level nuclei. It should
be evident that the traditional view that an atom is created from a proton and neutron core surrounded by an
electron cloud is probably not correct, or at least not in most cases. The quantum theory view that quarks
create particles, or that mysterious forces exist as fields taking on whatever properties are necessary to make
the atomic system work, is not tenable in some cases.
You might be scratching your head at this point wondering what the point of this chapter has been,
since nothing concrete was proven. The point is that QT has implied that there are no physical answers that
explain the behavior of particles and atoms, which is why its unusual ideas have proliferated. By merely
applying a new construction technique to particles using a high level velocity to explain energy conversion,
it is possible to have so many physical construction possibilities that it is difficult to pick a single one for
any particular situation. You could also say that the point is that we now have a physically viable alternative
to the near mystical features of the current quantum mechanical system.
Chapter 10
Redefinition of Particle Equations;
Fixing Constant Constraints
“It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If
it doesn't agree with experiment, it's wrong.” Richard Feynman
In the introduction, it was pointed out that p=mv was an equivalent explanation to Einstein’s E=mc2
equation, and worked better for defining particles. This equation change causes a need for redefinition of
energy when dealing with particles. All of the equations that follow represent some aspect of that
redefinition. There is no need to feel intimidated by their inclusion because you need not solve them, and
the independent symbols are easily understood. Everything of importance will be explained as it is
encountered. This chapter is merely an attempt to help you understand how ultrawave defined units fit just
as well, if not better, into existing equation structures than do the Standard Model units.
When mathematics was first incorporated into the ultrawave equations for particles, it was assumed
that Einstein and everyone else who followed him were right, and that fitting the units to their equations was
the proper course of action. It became obvious right away that Einstein’s equation for particle energy had to
be wrong, but it took a while to accept that Planck was also wrong. Even though particle surface area was
equal to his constant number in the SI unit system, the units had to be forced to work out for energy. It took
some time to see that using other unit systems would make the energy equations work out wrong. Finally, it
became obvious that Einstein was wrong, and that therefore Planck should be wrong as well.
When you square a number like c, it gives all kinds of answers depending on the units used. If you
give a unit of one to light speed, then squaring it produces an equation that makes E=mc2 the same as p=mv
(when v=c), except, of course, for the units. It should be obvious that this is a problem, because a velocity
equal to a limit cannot have an additional velocity applied to it. Once this absurdity becomes clear, the
entire unit fitting that normally takes place and produces problems then becomes unnecessary. Many of the
equations that once required convoluted units, to fit the type of energy that the SM claimed
was being generated, now simplify. Someone who had never heard of Einstein or Planck would likely see the
ultrawave equations as simple, straightforward, logical, self-consistent, and self-sufficient. Everyone
who now believes in Einstein and Planck must learn to see them that way too.
Particles are based on physical relationships in UT; therefore the biggest change to particle constants
has to do with what their units become when fitting them into this mold. There are actually two different
ways to look at the changes that occur. The first has to do with treating the mass number of a particle as a
spin increment and giving it units of seconds. This creates an atmosphere where particle mass constants
require the change from kg to s to determine the size and shape of the particle. The second way leaves mass
in place for all calculations that relate to force and energy transfer. This gives two values that are the same
in UT but with different units.
Table 10A

Constant or Combination          SM Units              UT Units
Mass                             Kilograms (kg)        kg
Spin Increment (f)               N/A                   s (seconds)
Length                           Meter (m)             m
Time                             Seconds (s)           s
Electric Current                 Ampere (A)            A
Electron Energy (e)              A·s                   A·s
Torus Surface Area (AT)          N/A                   m²
Planck Constant (h)              kg·m²/s               N/A
Reduced Planck Constant (ħ)      h/2pi (kg·m²/s)       kg·m²/s
Spin                             kg·m²/s               kg·m²/s
Magnetic Constant (μ0)           kg·m/(A²·s²)          Null
Electric Constant (ε0)           A²·s⁴/(kg·m³)         1/m/s
Fine Structure Constant (α)      Null                  A²·s/m
Electron Magneton (μB)           A·m²                  A·m²
Electron Mag. Moment (μe)        A·m²                  A·m²
Rydberg Constant (R∞)            1/m                   A⁴·s²/m³
Von Klitzing Constant (Rk)       kg·m²/(A²·s³)         m²/(A²·s)

N/A = "not applicable".
Table 10A is a list of the constants and combinations of those constants that include the changes
mentioned. As you can see, the units for many of the basic fundamental constants when determined in UT
do not correlate with the Standard Model derived ones. What should be noted first is that the Standard
Model assumes that spin (ρ) and Planck's constant (h) have the same units. In UT, they have nothing in
common, because the value associated with h is actually the torus surface area AT. Ultrawave theory will
produce similar units to the Standard Model when applying it to macroscopic matter, as long as the kg units
for the mass number are retained, but in determining particle sizes mass is not needed. Forming particles
requires that the frequency of spin rotation be taken into account. In this vein, it is necessary to use the
electron spin increment fe, which is numerically equal to the electron mass me.
There are a lot of SM equations that can be written so that the mass number appears more than once
in an equation and will cancel out, making it unclear whether seconds or kilograms are the desired units.
Sometimes it is obvious which it is, but other times it is not. Changing the way particles are defined, by
giving them a physical size and shape, makes the concept of energy in the particle realm different from what
is currently believed. All units in UT equations work out in the macroscopic world as long as you don’t
include the kg units in anything electrical or magnetic, and don’t include the spin increment (seconds) in
any equations related to mechanical force or momentum transfer.
Although treating 2pi as just a number in the Standard Model can produce correct numerical results,
it is at the cost of requiring additional units for terms that do not require them in ultrawave theory. The
additional unit is a per-meter attachment to at least two equations in the SM. We can see why the SM equations work using 2pi because in UT the equations can be rearranged to make 2pi equivalent to me·Cc/Re; anywhere 2pi shows up, it can therefore be substituted by those quantities and units, but only when that is appropriate for that particular type of equation. Some of the Standard Model constants were only discovered by chance,
which means that there are no hard and fast rules about what the value stands for other than what it appears
to be, which in this case is the number 2 times the value pi. The value pi itself says something about how the
universe is made, and fits well with ultrawave rotations. Because ultrawaves always travel in circular paths,
they can be related in many ways through the use of the number pi, hence the SM got lucky by this
coincidence of nature.
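As a quick numerical illustration of the claim that 2pi is equivalent to me·Cc/Re, the ratio can be checked directly. This is a minimal sketch in Python; the values for the spin increment fe, the ultrawave velocity Cc, and the electron torus radius Re are taken from Chapter 12 of this book and are assumptions of ultrawave theory, not Standard Model quantities.

import math
fe = 9.109e-31        # electron spin increment in seconds (numerically the electron mass)
Cc = 8.93591552e16    # ultrawave velocity in m/s (value implied by the book's figures)
Re = 1.2955e-14       # electron torus radius in m (Chapter 12 value)
print(fe * Cc / Re)   # ~6.283, i.e. 2*pi
print(2 * math.pi)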
The use of pi brings up an interesting detail about the reduced Planck constant. Since the reduced
Planck constant is h/2pi, substituting UT terms gives the equation:
ħ = h/2pi = 2pi·me·Cc·re / 2pi = me·Cc·re
This leads to the conclusion that what was thought to be h/2pi is actually:
L = mvr
Now it is easier to see that Einstein was defining, for all mass, an energy that was simply the momentum E=mv, and
then Planck came along and defined the energy of the spin-1/2 mass specifically as L= mvr.
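The numerical coincidence being described is easy to verify. The sketch below assumes the Cc and re values given in Chapter 12 (UT quantities, not accepted SM ones) and compares me·Cc·re with the accepted value of the reduced Planck constant.

me = 9.109e-31        # kg (or seconds, when read as the spin increment)
Cc = 8.93591552e16    # m/s, ultrawave velocity (book's value)
re = 1.2955e-21       # m, spin cross-section radius (book's value)
hbar = 6.62607015e-34 / (2 * 3.141592653589793)
print(me * Cc * re)   # ~1.054e-34
print(hbar)           # ~1.0546e-34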
There are many ways to calculate items that may give the correct answer, but the terms are
inappropriate for that type of equation. A lot of effort was made to calculate constants in different ways
using both UT and the SM units, and it was discovered that the SM is very sensitive to this type of constants
substitution, which changes units. UT always comes out the same no matter what way is used to get the
correct numerical answer, as long as you don’t try mixing frequency and mass in any particular equation.
This feature of UT alone should be evidence of its correctness.
An important equation of interest to atomic physics describes the fine structure constant alpha (α).
Alpha can be considered a measure of the electromagnetic interaction, and deals with the energy differential
between spectral lines of hydrogen. As the energy scale increases, the value of alpha also increases, but only
the primary level needs addressing for purposes of the equations used in this book. The SM equation for
fine structure uses the magnetic constant μ0 (mu-sub-zero):
α = μ0·c·e2 / 2h
This form of the equation uses elements that indicate that the h we are looking at should be the one for AT
with units of square meters. The SM would not calculate alpha in the following manner, because they do not
expect these particular elements to fit together with the correct terms:
α = 1E-7·c·e2 / (fx·Cc·rx)
This is a good time to examine why the value of μ0 was substituted for by just 1E-7, and why μ0
isn't equal to just 4pi. It seems that μ0 should be equal to 4pi, since we have discovered that 4pi = 2mv/R, so
why add the extra 1E-7 factor? The only reason that makes sense is the one that concerns its origin. The
magnetic constant was defined in terms of macroscopic interactions and is measured in Henries per meter.
The current definition of the ampere, which is the key to understanding the magnetic constant, reads:
The ampere is that constant current which, if maintained in two straight parallel conductors
of infinite length, of negligible circular cross section, and placed 1 meter apart in vacuum,
would produce between these conductors a force equal to 2E-7 Newton per meter of
length.
The effect of this definition is to fix the magnetic constant at exactly 2pi x 2E-7 (that is, 4pi x 1E-7) H/m. Now it becomes clear
just what occurred. The magnetic constant was adjusted so that it applied to particles with a value of
Henries being in line with Newtons. If a new unit were to be applied that was in line with micro-Newtons
instead of Newtons then the 2E-7 factor would disappear. An updated version of the UT equation would be:
 = 2·fx·Cc / Rx
UT units would be kilograms per second if mass were used, but it is actually dimensionless because a spin
increment with units of seconds is what determines the relationship between torus components. This is quite
different from the Standard Model that requires all four major units: kilograms, meters, seconds and
amperes for μ0.
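For reference, the force figure quoted in the ampere definition can be reproduced from the SM form of the magnetic constant. This sketch uses the standard force-per-length formula for two parallel wires and is included only to show where the 2E-7 factor comes from.

import math
mu0 = 4 * math.pi * 1e-7   # SM magnetic constant, H/m
I1 = I2 = 1.0              # amperes
d = 1.0                    # separation in meters
F_per_L = mu0 * I1 * I2 / (2 * math.pi * d)
print(F_per_L)             # 2e-07 N/m, matching the definition quoted above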
In UT, h is redefined; the new value is determined by substituting for h with AT = 4pi2·r·R, which pertains to the size of any particle surface. The UT form of the base energy fine structure equation, which is derived from the SM equation, is actually:
α = 4pi·1E-7·c·e2 / (4pi·fe·Cc·re)
which reduces to:
α = 3.35103E-16·e2 / (me·re)
The 4pi portions cancel, leaving just the 1E-7 part. In the end c over Cc reduces to just a number and the units cancel, therefore we are left with units of A2·s/m. The only thing that was not included was a set of units for the magnetic constant, which has been shown to be unneeded.
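Both forms of the fine structure equation can be checked numerically. The sketch below uses standard CODATA-style values for the SM form, and the book's Cc and re values (assumptions of UT) for the substituted form; both land on the familiar 7.297E-3.

import math
c, e, h = 299792458.0, 1.602176634e-19, 6.62607015e-34
mu0 = 4 * math.pi * 1e-7
alpha_sm = mu0 * c * e**2 / (2 * h)          # SM form of the fine structure constant
fe = 9.109e-31       # s, electron spin increment (UT assumption, numerically = me)
Cc = 8.93591552e16   # m/s, ultrawave velocity (book's value)
re = 1.2955e-21      # m, spin cross-section radius (book's value)
alpha_ut = 1e-7 * c * e**2 / (fe * Cc * re)  # UT-substituted form
print(alpha_sm, alpha_ut)                    # both ~7.297e-3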
How do we reconcile changing the units for alpha and the magnetic constant? First, let’s assume that
the lack of units for μ0 is correct, and those derived by the SM are based on false assumptions. There are
many ways to calculate alpha in UT, but in the end they all produce the A2·s / m units. If we were to use me
instead of fe, which is essentially what the SM does then we get units of A2·s2/(kg·m). Normally, alpha is
considered a dimensionless quantity in the Standard Model, independent of the units used. When the units
prescribed by the SM are substituted for in certain equations, such as those dealing with the Bohr magneton,
we end up with units of A2·s2/(kg·m), which are the same units that UT gets.
We can rewrite the equation for alpha to completely eliminate μ0, after having explained the origin
of the 1E-7 factor. The units that replace the SM’s lack of ones that include Amperes restate the equation for
alpha as:
α = c·e2 / (me·Cc·re) = 7.297352568E+04 A2·s / m
This equation still gives the units that are needed, but it does it without the 1E-7 multiplier. A new name
should be given to this value to show that it is the changed alpha. Only the unitless portion of the equation
(1E-7) is needed to make the equation match the SM version, as long as it is understood that this requires
that amperes be replaced by a term that is reduced by the same 1E-7 multiplier. Regardless of how one
decides to use UT to get the answer, using  or not, either way produces a cancellation that allows the
correct units to be achieved. It should be noted here that the ratio of the torus radii for the electron is equal
to 1E-7, which is the actual reason that the value for alpha ended up with the value we see. If the ratio of the
radii for some other particle is used in the equation then the value of alpha changes; what doesn't change is
the base value 7.297E+4.
A constant that is derived from the magnetic constant is the electric constant ε0. The equation for the electric constant according to the SM is:
ε0 = 1/(μ0·c2)
The accepted units for ε0 are Farads per meter, which result from combining the SM's μ0 units with the m2/s2 of Einstein's c2. Substituting the UT constant Cc for c2, and the dimensionless 2·f·v/r for μ0, the UT equation becomes:
ε0 = 1/(2·fe·Cc2/re)
(units are 1/(m/s), or s/m)
Continuing with the assumption that the UT-derived equations are at least as correct as the Standard
Model ones are believed to be, let’s see if there is an equation we can use to prove it. An equation that
shows the relationship of both ε0 and μ0 to c is the reverse of the one for the electric constant:
c2 = 1/(μ0·ε0)
The Standard Model unit sets are 1/[((kg·m2/(A2·s2))/m)·((A2·s4/(kg·m2))/m)], which cancels out to final
units of meters squared per second squared. Everything works in the Standard Model, but it seems to be a
contrivance. Rather than just Henries or Farads for the magnetic and electric constants, it is necessary to
insert a per meter unit at the end of each one to make the overall combination come out right.
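As a numerical sanity check of the SM relation, the accepted SI values do reproduce c squared. This sketch checks only the numbers and says nothing about the UT units question.

import math
mu0 = 4 * math.pi * 1e-7     # H/m
eps0 = 8.8541878128e-12      # F/m
c = 299792458.0
print(1 / (mu0 * eps0))      # ~8.98755e16
print(c**2)                  # ~8.98755e16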
In the UT equations above, where Cc is substituted for c2, μ0 has no units, and ε0 has units of 1/(m/s), or s/m. When substituting UT units in the equation above, it becomes:
Cc = 1/(μ0·ε0) = 1/(null·s/m) = m/s
Since the UT units are correct, these equations and their units are consistent with treating the Cc value as a
velocity rather than a velocity squared, as was stated from the beginning of the book.
So are there other changes that need to be made to the constants units as dictated by UT? Yes, in
addition to the equations already shown, there are the Rydberg constant (R∞), and the Von Klitzing constant
(RK) equations. The Rydberg constant is a constant that makes use of the fine structure, and it should help in
our quest to determine if UT can supply good units. The Standard Model uses the accepted equation:
R = 2·me·c / 2h
(SM units are per meter, or 1/m)
Substituting UT constants for those above, we now have the equation:
R = 2·fe·c / 2AT
The same equation in units only form is:
R = (A2·s / m)2·s·m/s / m2 = A4·s2 / m3
It seems evident that the Rydberg constant relates to particle values only, so fe was substituted for me in the
equation. UT units do not match the SM units, but it is still a great result, with the Rydberg constant units
increasing over those of the fine structure constant units by a more reasonable A2·s/m2, instead of just a one
per meter increase.
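The SM form of the Rydberg equation, with the alpha-squared factor written out as above, checks out numerically with accepted values. This sketch verifies only the number, not the UT unit assignment.

alpha = 7.2973525693e-3
me = 9.1093837015e-31
c = 299792458.0
h = 6.62607015e-34
R_inf = alpha**2 * me * c / (2 * h)
print(R_inf)   # ~1.0974e7 per meter, the accepted Rydberg constant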
The Von Klitzing constant is somewhat like the inverse of the Rydberg constant in the SM, because
it reverses the position of h in the equation. It demonstrates the quantized Hall Effect. This constant looks
simple but has units that are much more complicated than the Rydberg constant, at least from the SM
viewpoint. The accepted equation is simply:
RK = h/e2
(units are ohms, or kg·m2 / (A2·s3))
Substituting for h, the UT equation uses the AT constant with units of m2 and in units only form is:
RK = m2 / (A2·s2)
The Von Klitzing constant now looks more like the inverse of the Rydberg constant as would be expected
from the way the constants are used, and with a reduction in terms that seems more appropriate to the
equation.
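The Von Klitzing value itself is quick to confirm from the accepted h and e; again, this checks the number only.

h = 6.62607015e-34
e = 1.602176634e-19
print(h / e**2)   # ~25812.8 ohms, the accepted von Klitzing constant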
It seems clear from these examples that the constants that were derived with the c2 units of m2/s2
need to be revised to the Cc units of m/s. Most notably, the non-destructive single particle unit of rest mass
energy should be that of momentum p = m·v with units of kg·m/s. All destructive interactions between
particles will be the result of angular momentum transfer described by ρ = 1/2·m·v·r with units of kg·m2/s.
The use of a time unit instead of mass unit appears to be applicable when determining particle size and
shape and for anything related to electricity and magnetism of single particles. While many equations now
replace me with fe, the change should only be applied as indicated above with the exception of the Rydberg
and Von Klitzing constants, which are for bulk materials. One must remember that the equations containing
this replacement are not to be used for determining physical interactions between single particles.
Many of the items listed in Table 1 of the appendix are for the macroscopic behavior of collections
of matter particles as atoms and therefore do not require updating. The ones that do need updating are the
ones that are derived from the introduction of m2/s2 and the use of mass where it is not related to force. We
should look at the subatomic realm not as being different from the macroscopic world, as we have been led
to believe, but very much the same. There are many more features that are alike rather than different; it is
just the 2D nature of the atomic world that seems abnormal from our perspective. Once the changes to the
units are examined, it seems like a much friendlier and simpler place.
Before ending the chapter, you should be aware that there is a relationship between the fundamental
constants of nature—this refers only to the four that were used in this chapter. The relationship is that the constants must fall into a particular range of values at the same time. What this implies is that refining even one
of them also helps refine the remaining ones. The reason is obvious when the values are examined when in
use. For example, smaller values of me require smaller values for e and h as well, especially when solving
sets of equations that use all of them at once.
Before being able to examine the reasons why the values are connected, we first need to look at the
basics. A few of the constants used in our calculations have already been set to exact values. The list
includes the speed of light in a vacuum, c, equal to 299792458 meters per second, and, conversely, the meter,
which is the distance traveled in 1/299792458 of a second at the speed of light. This implies that the second
is also a fixed quantity. The definition of the second is the duration of 9.19263177E+9 periods of the
radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium
133 atom at a rest temperature of 0 K. It is important that the unit of seconds be a fixed quantity, just like
the other two values, even if their definitions seem somewhat arbitrary. We have already seen the definition
for Amperes. The other two fixed values are the magnetic and electric constants.
The Standard Model equations that can be used simultaneously to determine constant variability are
all well known: the fine structure equation and the Rydberg constant equation, which we have just
examined, and the Bohr magneton equation, which is:
μB = e·h / (4pi·me)
The UT equation is:
μB = pi·re2·I
The electron magneton (Bohr magneton) has units of A·m2.
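The SM Bohr magneton equation can be evaluated directly from accepted values. The UT form pi·re2·I is not evaluated here because the book does not give a numerical value for the current I.

import math
e = 1.602176634e-19
h = 6.62607015e-34
me = 9.1093837015e-31
mu_B = e * h / (4 * math.pi * me)
print(mu_B)   # ~9.274e-24 A·m^2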
The first step is to take the three equations just mentioned and set them to their median values then
vary one of the constants to see what values the other constants must have in order to retain that median
value. Later, the equations must be set to their maximum or minimum values and the process repeated. This
is done for the full range of standard deviations for the constants until one or another of the constants
reaches its limit.
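The following is a minimal sketch of the kind of exercise described here, under the assumption that the SM Rydberg relation is used as the constraint: hold R∞ and alpha fixed at nominal values, vary me, and solve for the h that keeps the relation satisfied. The proportional co-movement of the constants falls out immediately.

alpha = 7.2973525693e-3
c = 299792458.0
R_inf = 10973731.568          # 1/m, held at its nominal value
me_nom = 9.1093837015e-31
for scale in (1 - 1e-9, 1.0, 1 + 1e-9):   # vary me by +/- one part per billion
    me = me_nom * scale
    h = alpha**2 * me * c / (2 * R_inf)   # h required to keep R_inf fixed
    print(scale, h)                       # h shifts by the same fractional amount as me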
The problem turns out to be which constant to use and why. When looking at the fundamental
constants as a set, and using the range of standard deviation associated with each, it becomes evident that
not all of the ranges are of the same magnitude. The one with the smallest magnitude of change will
obviously be controlled by the magnitude of the remainder. The following list shows the range of standard
deviations in ascending order, as well as whether the constant is factored or otherwise manipulated, causing
the value to increase or decrease.
e  ±3.5E-27
r  ±8.3E-31
me  ±4.0E-38
h  ±2.9E-42
e squared ±1.225E-53
(SM = Tn)
Planck’s constant has by far the smallest natural standard deviation, but because the electrostatic charge
e is squared in the equation that determines the fine structure, the equations become sensitive to e at nearly
as slow a rate. Electron mass changes are less influential in affecting the Rydberg value and do not directly
affect Alpha, thereby allowing the full range of values of me to be determined quickly. It is best to use mass
when possible, but r is actually the least sensitive number and works too.
Compared to the NIST values, the discrepancies with UT values are not great and are of so little
significance that they do not warrant further study; however, the interesting part of the subject remains.
What is the likelihood that all of the constants will have values that fall toward either one or the other
extremes, but not both simultaneously? As proof of just how little variation is possible between the values
when viewed as a set, ratios of the values were compared. When viewed as a percentage of the standard
deviation, the deviation from a linear arrangement was less than 2%. An easily drawn conclusion is that the
2% is actually zero, and that the relationships are completely linear.
Why is linearity significant? The main reason is that it shows how a physical aspect to particle
construction is reinforced. Linearity is not considered pertinent in the SM if particles are made of quarks on
the one hand (protons and neutrons), and something totally different on the other (electrons). Linearity
becomes important when all constants depend on the value of any particle’s mass. For example, the 2006
Codata nominal value for the electron mass was adjusted downward from the 2002 Codata value. If linearity
is truly applicable, the adjustment would force all the other nominal constant values downward, and that is
exactly what was seen. When adjusting the values from the 2006 Codata to the 2010 Codata, the values all
moved together in the upward direction.
The final reason, which is implied from a linear relationship, is the fixing of our universe with a
particular set of related values. These values control all aspects of its existence, and cannot be changed on a
whim. Their relationships have an effect on something called the anthropic principle, which will be
discussed further in a later chapter.
Chapter 11
What are Photons?
“Thus I arrived at the following general idea which has guided my researches: for
matter, just as much as for radiation, in particular light, we must introduce at one and
the same time the corpuscle concept and the wave concept. In other words, in both
cases we must assume the existence of corpuscles accompanied by waves. But
corpuscles and waves cannot be independent, since, according to Bohr, they are
complementary to each other; consequently it must be possible to establish a certain
parallelism between the motion of a corpuscle and the propagation of the wave
which is associated with it.” Louis de Broglie
Even though he helped originate the idea of quantum light, Einstein found some aspects of quantum theory
counterintuitive and could never bring himself to fully accept them. Even Einstein’s own theories of general
and special relativity contain aspects that are not completely understandable in a common sense sort of way.
The only physical construction that answers the question of why both of these views of reality seem so
counterintuitive is to explain them using quantum string-waves.
Light is composed of photons and photons show many signs of being waves, such as the production
of electric currents within our eyes, which we perceive as images. On the other hand, photons also behave
like particles in that they can transfer momentum. Different frequencies of photonic energy can do things as
diverse as creating matter particles, destroying matter particles, applying force, heating food and in general
doing some things that defy the normal logic of what either simple waves or particles alone can do. The dual
wave-particle nature of light has confused so many scientists in the past that it is no wonder so many strange
ideas about it have proliferated throughout the years.
Electromagnetic radiation consists of everything from ultra-short wavelength gamma rays to very
long wavelength radio waves. Measured frequencies for photons range from about 3 Hz to 10E+23 Hz, with
visible light being only a small portion of the 10E+14 to 10E+15 Hz segment. If photon wavelength is in some
way directly associated with ultrawave velocity then a physical size can be associated with an
electromagnetic wave. For a 3 Hz wave it would be an amplitude of ±0.5 light year, and for 10E+23 Hz it
would be equivalent to a ±7.152E-9 meter amplitude. Light also has a second component with a wave
amplitude that can be associated with the c velocity, which should remain constant. This means that at a
frequency of 2.99792458E+16 Hz the amplitudes of both of these wave components would be equal.
Only the amplitude changes when electromagnetic radiation is created, never the speed, so this
should be a hint as to how the creation of electromagnetic radiation must be accomplished. It should be
possible to determine how different wavelengths of light are produced and why only certain wavelengths
are emitted or absorbed by different atoms. If sizes of atoms can be determined then so can the wavelengths
of light that interact with them. Photon/atom interactions must be related to the variable amplitude portion
(ultrawave portion) of the electromagnetic wave propagation.
It is merely conjecturing what photons look like and how they are created, but in this new theory of
matter and energy they must be mostly made of ultrawaves. Unfortunately, to formulate a physical
description of photons requires knowing details that can never be observed; they can only be surmised from
measurement data. One detail necessary to understanding photons is how vibrating materials affect the
production of photons of particular frequencies. The fields of microwave and radio wave production are
based on this property of matter and photon interaction. This intimate relationship between matter and
energy is the basis for many theories, not just Einstein’s E=mc2, but also his quantum particle view of the
photon that was an evolution of Planck’s quanta-like analysis of photonic energies.
Light’s odd behavior caused many strange ideas to be proposed in the early twentieth century. Some
of these ideas caused quantum and classical theories to butt heads during that time period, and kept an
ongoing debate between the proponents of these two factions, with Einstein on the classical side and Niels
Bohr on the quantum side. The conflict came to a head with a thought experiment by Einstein, Podolsky and
Rosen (called the EPR experiment) to show the incompatibility between quantum theory and the concept of
locality as dictated by relativity. It was based on an observation that the phase relationship of pairs of
photons traveling in different directions could be shown to have an effect on each other when the phase of
either was measured. Einstein, Podolsky and Rosen surmised that since the photons would be traveling in
opposite directions at light speed, this property could be utilized in an effort to send a signal that would
effectively be faster than light, which was wholly unacceptable to them.
Although no clear explanation was ever proposed in the 1930s when the conundrum arose as to how
quantum behavior allowed non-locality, it was gradually accepted that it did just that, defying relativity in
the process. The photons used for the thought experiment are not just any random photons, they are
specially connected ones. QT calls this association of influence between distant photons, or any particles for
that matter, entanglement. By distant, what is meant is any possible distance from the size of an atom to the
size of the universe, but in reality it has only been proven at the relatively short distances inherent to a line
of sight application, and the further the distance the more it tends to break down. Without knowing the
exact details of a particular experiment, it is not possible to determine how or why the photons are
entangled, but that is not important for the thought experiment. What is important is that the entanglement is
the reason for the non-local behavior.
Because Einstein and his cohorts believed that there was nothing faster than the speed of light, their
argument was that this interpretation of quantum reality, as it relates to this type of measurement, had to be
wrong, because it violated the concept of mutual exclusion. Mutual exclusivity is the proposition that any
form of matter or energy cannot show both a wave and particle behavior simultaneously. Einstein and
company should have taken the advice of Sherlock Holmes more seriously, which roughly states: “When all
possibilities have been eliminated, then whatever is left, no matter how improbable, must be the truth.” The
truth is that the speed of the signal is indeed faster than light.
Ultrawaves produce entangled pairs of photons that exist in part as attached to a single W2o or W2s
wave, which are just two different descriptions for a wave traveling at the Cc velocity. Pairs of oppositely
propagating photons can be attached to—or more likely created with—the same wave and hence influence
each other if disturbed. Either photon can collapse and affect what becomes of the other one by severing the
wave into two parts. Clearly there is a limitation to how far it will go before degenerating and letting the
photons lose their interdependence. At present, it would be hard to guess how far that might be. Suffice it to
say that the distance will be short if anything physical is between them, and may be rather tenuous even in
the relative emptiness of space.
An aspect of early quantum theory that was puzzling to both sides in the quantum-classical conflict
is contained within the Copenhagen interpretation, established by Niels Bohr’s group, which resided in
Copenhagen at the time. It states that by choosing either a wave or a particle picture of the universe, an
experimenting scientist disturbs the “pristine nature” of matter and causes it to become either one or the
other. Their claim was that favoring one aspect over another creates a limitation of what can be learned
about the essence of nature. The limitations are expressed in Heisenberg’s uncertainty relations, Bohr’s
complementarity and the statistical interpretation of Schrödinger’s wave function. Together these three
ideas form the basis of the entire interpretation for the physical meaning behind quantum mechanics.
Ultrawave theory dismisses the idea that there is a difference between wave and particle so that the either/or
concept is eliminated.
One wavelike property of a photon is called phase. The phase of a photon can be compared directly
to the magnetic moment of a particle with the orientation in one of two configurations. Those configurations
are either right side up or upside down with respect to a horizontal magnetic plane. In the absence of a
magnetic field the orientation of particles can be in any direction, which is the same for the emission of
photons. For photons, if a transparent or semi-transparent material is polarized, meaning its atoms are
aligned in a particular way, it will tend to allow only a certain range of phase orientations to pass. This is
why no single transparent but polarized material absorbs light completely; it takes two or more of these
materials orientated at equally spaced angles to stop all of light’s possible phase orientations.
The wave nature of light is demonstrable when looking at the properties of the resonance band. This
resonance is in a straight line from the light’s source, except when a material object is placed partially in the
path of the resonance band. The object can be one that partially impedes the resonance band or a slit in an
object that is slightly smaller or larger than the resonance band. This leads to some interesting effects for
wave interference when light goes through screens or shutters or other slotted or regularly spaced openings.
An easy experiment that anyone can do is to go outside on a sunny day and look at the clarity of shadows.
Objects that are close to the ground have shadows that are much better delineated than those that are further
from the ground, such as large trees. This blurring of shadows clearly indicates only a partial interaction of
photons with matter.
A famous double-slit experiment typifies the unusual behavior of light and shows something real yet
logically improbable without an explanation that only a theory like ultrawaves can provide. Two small slits
were placed in a thin material. Light was directed through a small opening, and from there through the two
slits and onto a screen. The projected light produced a series of light and dark bands on the screen,
indicating wave interference. Knowing all we do about waves, this makes perfect sense. What didn’t make
sense was what happened if just one photon at a time was allowed to pass through one or the other of the
slits. Instead of the photons falling into one of two line-of-sight spots on the screen, they fell within the
same interference bands as before. This kind of behavior makes no sense from a wave perspective, because
it takes more than one wave to induce the interference. Quantum theorists assert that the photons somehow
know the measurements being performed, as well as the past and present conditions that permit them to
perform this strange feat. UT asserts that this is a ridiculous notion, and that the interaction of the matter
within the object containing the slits, the existence of two slits, as well as the individual photons themselves
are what actually produces the interference, and not some unknown “intelligence” that is associated with the
photons.
Later experiments produced other strange results and yet other unusual explanations for light’s
behavior. Wave interference, as well as action at a distance, is inherent not only to light but also to
subatomic matter, which shows an interference pattern just like photons. The conclusion of real world
experiments was that light can travel in all possible paths to its destination. Multi-path light was used as the
ultimate answer to the EPR paradox. A multi-path concept of the entire quantum world, which seems to
explain the mystery to those willing to accept the farfetched ideas involved, has been adopted by many if
not most physicists.
That isn’t the end of the story for quantum theorists. The multi-path view eventually morphed into
the multiple universe theory. The “Many Worlds” theory, as it is usually referred to, states that all possible
universes exist, ones where you were never born, or many where Ross Perot became president of the United
States. Theorists claim that if the universe is only one possibility among an all encompassing multitude then
every possibility should exist at every moment in time. In other words, at every moment in time every
possible path is taken. It was necessary to take the idea one step further and make the claim that all of the
paths followed by light affect each other to the extent that the interference seen in the double slit experiment
is explained by the interactions of at least a portion of those multitudes.
Only when the unique ideas associated with ultrawaves are applied to photons, and not just to
particles, can all of the perturbations of light be explained without the need to introduce strange ideas.
Action at a distance, time travel, prescient knowledge, multi-path travel, and multiple universes are absurd
explanations for photon behavior. Real possibilities for how photons can be made to behave as strangely as
they do become physically tenable with the ultrawave velocity as part of the explanation.
Answers to simple questions such as the determination of the size limits to which the slits from the
double slit experiment continue to produce interference patterns with single photons should prove whether
or not ultrawaves can supply all explanations. For each frequency of photon, the size of the slits can be
altered to discover if the interference subsides with both upper and lower-size limits, which must surely be
true. The results of the experiments should give us an amplitude value for each of those frequencies. The
experiment should be performed with both polarized and non-polarized light to show whether there is a
difference in amplitude between the electric and magnetic portions of the photon, which should also be true.
Something that is taken for granted, yet has never been sufficiently explained, is the wave amplitude
of a photon. Think about it, if a photon is limited to traveling at the approximate ultimate speed of 300
million m/s then how can it have different amplitudes that are not faster than c? Any other wave that can be
described or shown in some physical form has an amplitude component that is combined with its linear
propagation to give an instantaneous velocity greater than its linear velocity. A photon must have some
velocity of motion for its amplitude, or there would be no red shifting of the frequency away from the light
source or blue shifting of it when moving toward the light source. Frequency shifting can almost be
considered direct evidence that something must move faster than the 300,000 km/s light speed limit. It was
this realization that photon frequency and amplitude had no effect on photon velocity that caused Einstein to
invent the idea of constant c, which was then put into his special relativity theory.
Now that we have fully examined the wave nature of photons, let’s move on to their particle nature.
One attribute of a photon’s particle behavior is the suggestion that it transfers force from one atom or
particle to another. It does so by imparting a momentum to the receiving particle. It is also possible to
convert photonic energy directly into electrical energy using certain materials. Since the human eye
converts photons to electricity, it would appear as if photons are being converted directly into electrons.
This leads to the profound conclusion that photons are really just a new type of W1/W2 wave combination.
Not only does this make sense from the standpoint of how to convert mass to energy, but it also explains
how to transfer energy from one particle to another.
Photons composed of W2 string-waves can be used to explain why irradiating atoms with light can
make them bigger. Some part of an atom would have more material added to it, making it grow to its
maximum semi-stable limit. Different atoms have different energy states that are susceptible to absorbing
light of different frequencies. Changes in atom sizes are dependent on what parts of the atoms are being
altered; primarily it should be electrons. Once irradiation of atoms ceases, re-radiation of the photons occurs
and the atoms return to normal after a particular amount of time that varies for each type of atom. The
length of time depends on several factors including the energy content of the photons expelled. It seems that
many if not all atoms are capable of expelling photons of at least one frequency.
It was shown in the first half of the book that the incremental unit of photonic energy, or Planck’s
constant divided by 2pi, relates to the surface area of the torus of any spin-1/2 particle. It is also well
documented that two photons with an energy content of half that of a particular particle can be combined to
produce that particle; it is therefore likely that the entire particle is being converted back and forth. The
mass of the particle must be hidden when it is converted into photons, much the same way that it is for
neutrinos, or how charge is hidden in the neutron. If photons are a new form of W wave propagation
associated with particle surface area, then what do they look like?
Photon configuration has to satisfy three distinct characteristics that are typical of light. The first is
its relationship to certain materials that have the ability to polarize light or in effect materials that possess a
certain orientation to their atoms. The second is the ability of photons to appear to overlap their spatial
coordinates. And third, the photon is a spin-1 particle and must have a shape configuration that produces
this spin value. Are there ultrawave configurations that satisfy all three of these requirements? The answer
is yes, there are several, but only one of them can be correct. The possibilities must be narrowed down to
determine which configuration best fits these characteristics.
Normal macroscopic circular or spherical objects are usually solid and have distributed mass, so no
matter how far you get from the center, the mass concentration stays the same. For a particle made of
ultrawaves this is not true, it is more like a steel hoop surrounded by a soap bubble as we have seen with the
matter-creating fermions. The simplest way to calculate mass is to imagine that it is concentrated at a single
distance from a locus—a center of rotation. Using spin-1/2 particle ideas, it seems that the most sensible
answer for producing a photon’s spin momentum is likely to be through two circular, spherical, or other
such curvilinear motions.
Both sides of Figure 11.1 show two waves circling about each other, which produces spin 1 for
either of the following conditions. The first is when the diameter of rotation is equal to that of a spin-1/2
particle and where both waves are mass carrying waves. The second is when the diameter of rotation is
twice that of a spin-1/2 particle and only one wave is a mass carrier. The right side of the figure shows
opposite spin rotation, which indicates that anti-photons are created as easily as standard photons. One
limitation of this construction is that the two waves are not at right angles to each other. Since this
configuration does not provide all three conditions, it cannot be the complete picture for photon production.
Figure 11.1
Figure 11.2 shows a pair of waves traveling at right angles to one another, which is a necessary
feature for the electric and magnetic portions of the photon. Although this configuration matches the right
angle condition, it has a limitation. The limitation is that the waves must be single, non-rotating entities that
require some mechanism for making them change direction before the second half of the normal curved
motion that all W waves exhibit has begun. Furthermore, the wave is now traveling at roughly a 90° angle
from the brane where it makes its usual rotation. In other words, it makes a single point intersection with the
brane. The good news is that the full mass will be applied producing the spin-1 condition if both waves
carry half the mass, or even if only one wave is a full mass carrier.
Figure 11.2
We know from measurements that the electrical and magnetic portions of the photon must be
oriented at right angles to each other. A configuration like this relies on the magnetic wave being
perpendicular to the mass wave at all times, whether or not their motion is linear or curvilinear. It is
necessary for both waves to switch locations about a fixed line in space, with each position of intersection
equal to half the amplitude of the wave. A simple change in motion such as this could be the answer to how
the electron, or any spin-1/2 particle for that matter, can change from a stationary particle into a moving
photon, or pair of photons, and back again with relative ease.
The real photon waveform is probably some combination of the two ideas shown in Figures 11.1 and
11.2. There is clearly some sort of collision needed to cause the transfer of the W2 wave creating an
electron to change direction and hence become a photon. The problem with the photon waves shown—
based on these simple ideas—is that lower frequencies (longer wavelengths) should produce more impact force due to the change
in axial momentum radius, which is not what is observed. The actual experimental results show shorter
wavelengths producing more force. Combining both types of motions in such a way as to avoid this problem
is paramount in determining the correct photon construction.
Two possibilities come to mind for how higher photon frequency (shorter wavelength) equals higher force exchange.
First, in normal 3-D matter interactions, the angle of deflection mirrors the amount of force transferred from
one object to another. It may be that a similar transfer takes place at the particle level that is the result of the
equivalence to the angle of deflection, and hence actual force transferred. Second, but much less likely, it
may be that most of the reason for different frequencies of light being created is that different masses of W2
waves produce them. This would be a side effect of atoms being created by masses that were not in
particular units, so that the photons were not truly electrons but represented particles with a range of mass
values. If the structure of atoms can be fully defined, the structure may define photons as a side effect.
Finding such a relationship would be the ultimate two-for-one bargain.
If Figure 11.1 is reworked to show wave propagation parallel to the direction of motion then the
waveform is slightly different. It becomes more parabolic in shape as it maintains a constant forward
velocity, while at the same time trying to perform its natural curving motion. It is not possible to show this
type of configuration on a 2D page, so your imagination will have to suffice. What is expected is that the
intersection of the normal W3 type wave will be turned 90° from the brane, and after each half wave (half
circle) it flips orientation and completes a half circle in the opposite direction. Why it flips is not clear, but
the reorientation must also apply to the magnetic portion, so merely by knowing that reversal of both
electric and magnetic features occur, we can reason that direction flipping is the only answer.
The most obvious answer to achieving wave reorientation is by utilizing the circular wave motion. If
the W2 wave departs from the brane, when it returns it will be oriented 180° from the way it left. Maybe it
is the re-intersecting of the waves with the brane after this 180° reorientation that causes them to leave the
brane in the opposite direction. Whatever the reason for flipping directions, parallel wave and brane
propagation is necessary to keep the waves together to achieve the constant c velocity for light.
Until the true nature of the W waves is completely determined, there is just not enough evidence to
completely define the changes necessary to show how particles change to photons. If photon construction
were like that of Figures 11.1 or 11.2, each photon should have a different velocity, or a higher force with
longer wavelengths, but we know that this is not the case. Overcoming problems like these requires complete
understanding of 2D wave propagation. It will also be necessary to understand what it is about the
string/brane interaction that makes the W wave want to curve in the opposite direction once it reaches a
certain point in its travels.
Recent experiments indicate that light can be slowed to a crawl. Without exact knowledge of the
experiment it is impossible to determine if this is true, or if there are other misconceptions at work that only
make it appear that it is true. If we believe that light can truly be slowed down then there must be an aspect
to its linear propagation that is affected by external matter and energy that alters its velocity. One such
change might be a compression of the 90° electric to magnetic orientation. As the angle between the two
components is decreased, so is the velocity. Alternatively, the portion of the wave combination that is
attached to the W1 wave somehow controls the velocity of the W1 wave rather than the other way around.
There are a few details of the experiment that we do know. First, we must realize that it is photons in
the form of a laser beam that are slowing the target photons. Second, determining how photons travel
through materials is also a key consideration. We know that it is the speed of photons through materials that
cause apparent bending of the light rays. Both of these considerations when taken together present a limited
number of possibilities for composing a physical system of photon motions. There are also only a few ways
in which the photons can travel through the target material, which puts further limits on a physical system.
Let’s consider for a moment how the speed of light has been proven to slow down to various
velocities depending on the material through which it is traveling. Let’s also consider the possibility that
this is a wrong assumption. What if the path the light is taking is such that the distance traveled equals the
delay measured, so that the photons are not slowed down, but rather have to take a longer path? This
presents the possibility that the path can be altered so that it takes a very long time for a single photon to
make it completely through a material. The fact that other photons are being used to purportedly slow the
target photons may indicate that it is they who are occupying the available paths, hence keeping the target
photons locked into a motion loop that prevents them from making it to their logical destination. Until we
know for certain how photons travel through matter, we cannot be sure whether they can be slowed down,
or if they are merely redirected, which alters the time they spend within matter.
Light in a vacuum travels at the same speed regardless of frequency, or with red shifting or blue
shifting of those frequencies. This causes a lot of head scratching by the average person who finds it hard to
grasp the concept that a person’s motion does not add or subtract from light’s speed. Because light is
generated by vibrating waves that are attached to a medium from which they get their forward motion, and because that medium can travel at only one velocity, light cannot travel any faster or slower than the medium itself. There is a change that occurs when an object that emits light is in motion relative to this medium: the red or blue shifting that changes the frequency of the light. In effect, the red or blue
shifting of light is the same as changing its frequency. So, even though light is limited in its propagation
velocity due to being controlled by brane velocity, it is altered in some manner by the motion of the
observer or the emitter, and that change is seen as a change in frequency.
A photon moves with respect to the 2D brane to which it is attached, but can exist outside that brane
to some extent. What happens when an object that emits photons moves? Somehow it changes some aspect
of the brane/wave interaction to make the photon have a shorter or longer wavelength from an observer’s
perspective. This is true even if the interaction of the string with the brane remains unchanged. The human
brain is not suited to understanding what happens to the wave when it is not residing on the brane that
controls its motion, so we tend to think of it in terms of our 3D expectations and cannot fully visualize the
true conditions of its motion. We must accept the idea that the motion of an emitter or observer will only
change those aspects of the photon that affect its frequency, and rely on the fact that the velocity from brane
to brane cannot be altered.
Whatever form a photon has, it must fit within the W1/W3 speed-controlled framework of 3E+8
m/s. The frequency of vibration or propagation of W1 waves, or possibly W3 type waves, is always equal to
the speed of light. This immediately allows the fastest speed of all detectable wave interactions, of which
both W1 and/or W3 waves are always a part, to be no more than the speed of light. Therefore, it follows that
all electromagnetic radiation, which is the manifestation of these primary wave interactions, only travels at
that speed. A controlled speed of 3E+8 m/s tells us that the ultrawave portion of photons travel with an
orientation that allows an overall photon motion that is parallel to the propagation of the W1 wave. All other
motions can be in any direction, but only at a velocity that is equal to Cc.
It seems reasonable to believe that Einstein would be thrilled to learn that the strange ideas that
effervesced from the early view of the quantum have been redefined to the point where a classical view of
matter and energy, although with a quantum basis, is once again the norm. His distaste for quantum weirdness, which I share, can now be seen as a justified reaction to a misdirected belief in things more bizarre than the actual
reality. He would also be surprised that his relativity theories are very accurate in how they fit with the
string/brane scenario of ultrawaves. His insight was right on the mark, even if he did not fully understand
why. There is no doubt that if Einstein had discovered ultrawaves before his death he would have reworked
his relativity theories to describe light and gravity in quantum terms that made sense of the observations, not
only to him, but to all who have puzzled over the true nature of light.
Chapter 12
Proof by the Numbers
“Someone told me that each equation I included in the book would halve the sales.”
Stephen Hawking (Speaking about his book “A Brief History of Time”)
[If you are not interested in the mathematics behind ultrawaves then skip Chapter 12 and move on to the
second half of the book, which includes speculations that are not based on any mathematics, except where
included. If you have skipped everything else and come directly here, I urge you to go back and read the rest
anyway; it will help put everything into the proper perspective.]
Two conditions with respect to interaction with a brane determine what type of ultrawave is being defined
and what it will do. The first is when it is attached at only one point on the brane. In this configuration, it
will appear as a locus point on the brane, which is totally undetectable by any three-dimensional means, but
may still have effects on 3D objects. It is unclear at present what 3D effects may occur, other than
facilitating the creation of a W3 wave, or the conversion of matter to photons. The second condition is when
it is attached completely with all parts resting on the brane; it will appear as a rotating circle traveling at the
Cc velocity about a locus point, with that locus point being one of the single-point waves mentioned above.
As the brane advances, the ultrawave not only rotates at the Cc velocity along the surface of the brane, it
must also accompany the brane in its forward c velocity advance. This is the combination that is being
called a W3 wave structure. Because the W3 then rotates about another locus point, the overall structure of a
spin-1/2 particle becomes a torus. For simplicity, a torus shape is called a W4 wave structure. These
conditions apply to matter particles only, other particle types will have other conditions that must be met.
Once ultrawaves are defined, it is possible to show that E=mc2 is a coincidental outcome of having
an ultrawave constructed reality and should never have been applied to particle definitions. First, the motion
of an ultrawave is circular, therefore it should fit angular momentum equations, and it does. Being 2D, it
imparts its momentum in a quantum manner. It follows the simple equation used for spinning matter:
 = 1/2m·v·r
In this case v is now defined as the ultrawave velocity called Cc, and the radius r is the radius of the spin
cross-section. For some reason only half the mass is used in the spin momentum instead of the full mass that
we would see with macroscopic matter. It is assumed that the other half of the mass does not exist within
the torus, and instead may be what constitutes the particle’s charge. The UT equation for an electron is:
 = 1/2me·Cc·re = 5.2728581 kg·m2·s-1
The maximum possible kinetic energy of a particle is independent of ultrawaves and is equal to:
Ke = 1/2·m·v2
(v cannot be greater than c)
Einstein’s equation for rest mass energy is then equal to twice this value. If we consider that the equation for
momentum uses Cc instead of c then we have the coincidental arrangement that:
E = m·c2 ≡ p = m·Cc
Therefore Einstein’s E=mc2 has the wrong units, and it is not quite numerically equal to the momentum.
Linear momentum works to give the same answer in other unit systems, E=mc2 on the other hand does not.
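The "not quite numerically equal" claim is easy to see; the sketch below compares the two products for the electron, with Cc taken at the book's value (a UT assumption).

me = 9.109e-31
c = 299792458.0
Cc = 8.93591552e16    # m/s, the book's ultrawave velocity
print(me * c**2)      # ~8.19e-14, Einstein's rest energy number
print(me * Cc)        # ~8.14e-14, the UT momentum number; close but not identical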
There is a coincidence in the SI unit system that allows the definition of meters, kilograms and
seconds to cause Planck’s constant to also have this same sort of coincidental numerical equivalence. It can
have the same value in both systems but with two totally different sets of units. The first relates to Einstein’s
equation, the second fits with the ultrawave equations. The coincidence between the UT number and the
Planck number can be explained by understanding that the second was fixed by using a change in electron
motion, and from there the meter and the speed of light were fixed based on the second; plus the fact that
the Cc velocity wave is nearly a factor of the c velocity wave. All the seeming coincidences are then the
result of the definition of seconds and the power factor. Actually, it isn’t quite that simple; it is first
necessary to fully define particles, which will then better explain the Planck constant coincidence.
Because of these coincidences, several interesting interdependences exist that are not at first
recognized as important. The first one is spin frequency, which one would think could have any value. If
you consider that the size of the W3 wave is a factor of the W1 frequency and that the W4 wave is a lower
factor of the W2 frequency, then it isn’t too difficult to understand what happens. When determining the
size of the W4 wave, the equations relating velocities determine which particle constant is being factored. It
so happens that the mass is the first number that reveals itself as a spin frequency increment. This produces
the following equation describing how to provide a physical description for any spin-1/2 particle. We will
start with the smallest particle, the electron, and determine its overall torus radius. The basic equation that
works for all particles is:
R = f·v/2pi
Where f is the spin frequency increment with units of seconds—equal to the mass number—and v is the
velocity of the ultrawave. The equation for an electron radius can then be rewritten:
Re = fe·Cc/2pi = 1.2955E-14m
If the standard c velocity is squared and used instead, the value comes out slightly higher:
Re = fe·c2/2pi = 1.303E-14m
This slight difference in values is enough to explain why the SM has not been able to discover the Cc
velocity. The problem with using a c velocity and squaring it is that there are other considerations that may
not be apparent as yet, not to mention the fact that the units will be wrong. Once the other particle feature
sizes have been determined it will become clearer why the Cc velocity and equations are correct.
Continuing with particle sizing, it is well known that the spin of all spin-1/2 particles (rho) is
5.2728581E-35 kg·m2/s. It should be easy to determine what radius produces this value. The equation is the
simple one for spin angular momentum, but with the positions of the radius and spin reversed:
rx = ρ / (½·mx·v)
Where v remains the ultrawave velocity and mx represents any spin-1/2 particle’s mass. The equation for an
electron becomes:
re = ρ/(½·me·Cc) = 1.2955E-21 m
If Cc were exactly a factor of c then re would be equal to Tn (the natural unit of time). The misnomer
“natural unit of time” occurs because of the unit definitions currently applied by the SM, as explained in
Chapter 10.
Now that we have supplied physical criteria to the electron, what happens when these same ideas are
applied to the proton and neutron? The same two equations used to determine R and r using the proton and
neutron values produce results based on mass values of 1.672621776E-27 kg and 1.674927351E-27 kg
respectively, but with the mass values converted one-to-one to frequency units of seconds to determine R:
Rp = fp·Cc/2pi = 2.37879E-11 m
rp = ρ/(½·mp·Cc) = 7.05569E-25 m
and:
Rn = fn·Cc/2pi = 2.382E-11 m
rn = ρ/(½·mn·Cc) = 7.046E-25 m
It appears that mass numbers alone account for increasing overall radii with reducing spin radii.
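The same two equations can be run for all three particles in a few lines of Python; this is only a numerical sketch, with ρ and the masses taken from the text and Cc the assumed value used above:

import math
Cc  = 8.93591552e16   # assumed ultrawave velocity, m/s
rho = 5.2728581e-35   # spin angular momentum of any spin-1/2 particle, kg*m^2/s
masses = {"electron": 9.10938215e-31,   # mass number doubles as the frequency increment in seconds
          "proton":   1.672621776e-27,
          "neutron":  1.674927351e-27}
for name, m in masses.items():
    R = m * Cc / (2 * math.pi)   # overall torus radius, m
    r = rho / (0.5 * m * Cc)     # spin (cross-section) radius, m
    print(f"{name:8s}  R = {R:.5e} m   r = {r:.5e} m")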
What the increase in one radius versus the decrease in the other implies is a fixed surface area for the
torus. (Although there is no firm proof other than what is calculated for matter particles, it is likely in the
nature of 2D objects to manifest constant surface areas, fixed velocities, et cetera, before
and after combining into 3D objects.) Calculations for the surface area of a torus applied to all three
particles give tori with a single value of:
4pi2·rx·Rx = 6.6261E-34 m2
The values for all three particles are calculated using complementary numbers and so are all equal to the
Planck constant number. The SI unit system is the only one where this coincidence occurs. If one were to
use units other than the SI ones, then the number would not equal the Planck constant number, or else it
would have a different exponent—as, for example, when using the cgs system.
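A short continuation of the sketch above makes the point directly; because the mass cancels between r and R, the product 4pi2·r·R comes out the same for every spin-1/2 particle, and with these SI inputs it lands on the Planck constant number:

import math
Cc  = 8.93591552e16   # assumed ultrawave velocity, m/s
rho = 5.2728581e-35   # spin-1/2 angular momentum, kg*m^2/s
for name, m in [("electron", 9.10938215e-31),
                ("proton",   1.672621776e-27),
                ("neutron",  1.674927351e-27)]:
    R = m * Cc / (2 * math.pi)
    r = rho / (0.5 * m * Cc)
    area = 4 * math.pi**2 * r * R   # torus surface area, m^2; algebraically equal to 4*pi*rho
    print(f"{name:8s}  4*pi^2*r*R = {area:.4e}")   # ~6.626E-34 in each case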
This coincidence is the reason that anyone familiar with the SM thinks that this is a mistake in UT.
What they must understand is that it is Einstein’s units that are a problem when switching between unit
systems. It is here that the coincidental unit definitions mentioned earlier can be shown as exact or merely
close, as in the SI system. For instance, when calculating the electron rho value using inches rather
than meters the values are not at all close to what the SM expects:
Seconds: 9.109E-31 (this number will not change between the conversions since it represents the spin
increment and is independent of the units)
Velocity c: 299792458·39.37 = 1.1803E+10 in/s
Velocity Cc: 8.93591552E+16·39.37 = 3.51807E+18 in/s
Radius R: 1.2955E-14·39.37 = 5.1005E-13 in
Radius r: 1.2955E-21·39.37 = 5.1005E-20 in
If one were to square c in inches then the number is 1.39310809E+20, which is quite different from what is
shown above. When substituting different units, equivalent answers are obtained for any UT equation used
in this book.
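The inch conversion above is easy to script; the sketch below simply repeats the arithmetic, with 39.37 inches per meter as used in the text and the same assumed Cc value:

IN_PER_M = 39.37       # inches per meter, as used above
c   = 2.99792458e8     # m/s
Cc  = 8.93591552e16    # assumed ultrawave velocity, m/s
R_e = 1.2955e-14       # ideal electron overall radius, m
r_e = 1.2955e-21       # ideal electron spin radius, m
print(f"c in in/s     : {c * IN_PER_M:.4e}")        # ~1.1803E+10
print(f"Cc in in/s    : {Cc * IN_PER_M:.5e}")       # ~3.51807E+18
print(f"R in inches   : {R_e * IN_PER_M:.4e}")      # ~5.100E-13
print(f"r in inches   : {r_e * IN_PER_M:.4e}")      # ~5.100E-20
print(f"(c in in/s)^2 : {(c * IN_PER_M)**2:.4e}")   # ~1.3931E+20, nothing like the Cc figure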
There are two equations that provide equivalent values for h; they are:
h = 4pi2·rx·Rx
(as shown above)
and:
h = 2pi·fx·Cc·rx
Both of these equations give units of meters squared. These units do not match the units of kg·m2/s that are
normally associated with h, but they are nevertheless correct. There are ways to manipulate the ultrawave
equations to produce the expected units, but that would be a mistake. If Einstein’s equation is a
misapplication, Planck’s intention for h must also be wrong, since it has nothing to do with surface area. We
must give the surface area a different name, especially since it has a different value from h in all but the SI
system of units. AT seems appropriate for the area of a particle torus, therefore the first equation from above
should really be written for the electron:
AT = 4pi2·re·Re
What is called the reduced Planck constant, h/2pi, now has significance in UT. It reduces a number with no
direct physical significance to an actual, significant energy value. The following equation leads to this significant
value, which is the minimum energy of electromagnetic interactions.
L = mx·v·rx
Although most electromagnetic interactions are the result of electron movement, or photons of electron
mass equivalence, the equation provides the same answer using any mass. What is interesting is that this
equation shows that the particle as a whole is involved in the interaction and not just the spin. This makes
sense because the particles—usually electrons—are attached to other particles within atoms, which transfers
energy based on the particle as a whole.
When a particle interacts destructively with another matter particle, it transfers its momentum
through its spin radius. It does so through the equation ρ = 1/2·m·v·r, which describes the momentum in a rotating
torus cross section. For any spin-1/2 particle the value is always the same, since r is derived from m, and v is always
constant. From this perspective it is clear that rho is the only value we need to associate with energy
transfer; therefore Planck's constant can be done away with and replaced by AT. The reduced Planck
constant, on the other hand, can be shown as equivalent to L = m·v·r, which has the correct units. If anything,
the reduced Planck constant should have been the true Planck constant from the beginning.
You may think that all of this could be viewed as a manipulation of the units and equations to make
everything come out as desired, but what follows should not be possible if that were true. Now that we have
physical shapes and sizes for spin-1/2 particles, the remainder of the particle features can be calculated. The
first is the equation for the magnetic moment and/or magneton. Magnetic calculations for ultrawave-created
particles use the exact equation currently being used by proponents of the ring theory of the electron, since
that is what spin-1/2 particles turned out to be. These equations are based on known macroscopic principles
determined from a century of work with electromagnetic rings. Beginning with the Bohr magneton:
μB = pi·re2·I
(using spin radius r rather than overall torus radius R)
where,
I = ω·e/2pi
(2pi is just the number reducing e to relate to r)
and where,
ω = v/r = Cc/re
(rotation frequency using the r radius)
Calculating the same value for the proton magneton (nuclear magneton):
μN = pi·rp2·I
(where the current is derived from ω = Cc/rp)
Both equations give the expected measured value of the appropriate magneton. This applies to any spin-1/2
particle you can name. It is nearly impossible that this is also just a coincidence.
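Those magneton claims can be checked numerically. The sketch below is mine, not the book's; it uses the ideal spin radii derived earlier, the elementary charge, and the assumed Cc value, and it reproduces the accepted Bohr and nuclear magnetons to within roughly 0.01 percent (the residual presumably reflecting rounding in the assumed Cc):

import math
Cc = 8.93591552e16     # assumed ultrawave velocity, m/s
e  = 1.602176565e-19   # elementary charge, C
def ring_magneton(r):
    """Ring model: mu = pi*r^2*I with I = omega*e/(2*pi) and omega = Cc/r."""
    omega = Cc / r                        # rotation frequency of the charge, 1/s
    current = omega * e / (2 * math.pi)   # effective ring current
    return math.pi * r**2 * current       # simplifies to r*Cc*e/2
r_e = 1.295531888767e-21   # ideal electron spin radius, m
r_p = 7.055687186680e-25   # ideal proton spin radius, m
print(f"Bohr magneton    ~ {ring_magneton(r_e):.5e}")   # ~9.274E-24
print(f"nuclear magneton ~ {ring_magneton(r_p):.5e}")   # ~5.051E-27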
Now let’s go back to the charge wave proposal of earlier chapters. It was stated that the charge wave
is attached to the torus and travels perpendicular to the torus propagation. The only evidence for it traveling
at the Cc velocity is its appearance in equations relating Rx and Cc simultaneously. These equations include
the one that determines the current, I. It is easy to see by the terms used that the ratio of electrostatic force to
magnetic force is then the same as the ratio R/r, which shall be called r*. A comparison chart of ideal
versus actual values of r* for the electron, proton, and neutron is given in Chapter 3. The value generally
increases as the mass of each spin-1/2 particle increases. It is not clear at this time if this ratio has any
bearing on total particle charge behavior, since the charge shell changes size based on the magnetic moment
value, but it definitely has an effect on the magnetic attraction of particles, atoms, and hence molecules due
to an r radius increase for increased magnetic moment anomaly.
Clearly, the ratio of electrostatic to magnetic force dictates that the magnetic force constant be
merely a ratio:
μ0 = 2·fx·Cc/Rx·1E-7 = 12.566E-7
(unitless)
The 1E-7 contribution to the equation is the result of the definition of the Ampere to relate to Henries,
which are unequally sized units. It is due to the ideal (not actual) electron r* ratio being equal to 1E+7.
The mass number me is substituted for by the frequency component of the physical particle in
determining the magnetic constant. The magnetic constant would have units of kg/s if me were used, but it is
related to the spin increment having units of seconds, so all of the units cancel; therefore μ0 is just a number.
Ignoring the mass is the only logical path to follow, since magnetism only has to do with spin and does not
seem to be associated with mass content in any way.
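The μ0 ratio is a one-line check; note that because R = f·Cc/2pi, the expression 2·f·Cc/R reduces to 4pi for any particle, which is why the familiar 4pi·1E-7 value appears. The values below are assumptions carried over from the sketches above:

f_e = 9.10938215e-31        # electron spin frequency increment, s
R_e = 1.295531888767e-14    # ideal electron overall radius, m
Cc  = 8.93591552e16         # assumed ultrawave velocity, m/s
mu0 = 2 * f_e * Cc / R_e * 1e-7
print(f"mu_0 ~ {mu0:.5e}")  # ~1.2566E-6, i.e. 12.566E-7 as quoted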
The equations above for determining particle sizes produce ideal values that assume perfect form
exists with the particle torus. It turns out that this is not the case; there is a secondary twisting motion that
must be accounted for that affects the size of particles, as was shown in Chapter 8. Departure from ideal
form is seen as magnetic moment anomalies. An anomaly exists when the values of r and R depart from
perfect and begin to approach each other in value. For an electron, the values of r and R are very close to the
ideal values. The difference is only a 1.00115965218111 times increase for re and a proportional decrease
for Re. Proton and neutron anomalies are much greater, but not prohibitively so; they fit well within the
physical constraint paradigm already established for the torus shape.
Since all particles have the same spin—which also produces the magneton—the spin radius must
remain intact during the secondary curved motion of the torus as shown in Figures 8.1 through 8.3. The
retention of the spin is achieved by keeping it oriented perpendicular to the particle’s locus point, which is a
very natural expectation of the behavior of 2D ultrawaves. Overall frequency, represented by the mass
number, is also retained because the secondary spin must be in synchronization with it for the particle to
remain stable, and is also a normal expectation. All further particle values should now be determined based
on the correction needed to produce the secondary curvature from the magnetic moment anomaly, and these
corrections will change the actual particle sizes.
There is a small possibility that a correction is needed as the anomaly increases due to a greater
angle of incidence between the secondary motion and its radius as compared to the W3 and its radius. It has
not been accounted for in any of the included equations, because the interaction of 2D components seems to
belie such a need, as is seen with constant photon velocity. However, at some point in the future when
measurements are accurate enough, it can be determined with certainty if any corrections to the radii
calculated by magnetic moment anomalies are needed.
New sizes for the electron, proton and neutron respectively, can be calculated by multiplying or
dividing the ideal values by the magnetic moment anomaly:
re = 1.295531888767E-21 · 1.0011596521811 = 1.297034255147E-21 m
Re = 1.295531888767E-14 / 1.0011596521811 = 1.294031262592E-14 m
re* = Re / re = 0.997684723789858E+7
and for the proton;
rp = 7.055687186680E-25 · 2.792847356 = 1.970545730691E-24 m
Rp = 2.378794340517E-11 / 2.792847356 = 8.517452037132E-12 m
rp* = Rp / rp = 4.32238232509714E+12
and finally for the neutron;
rn = 7.045974880773E-25 · 1.915679693 = 1.3497830996E-24 m
Rn = 2.38207331592E-11 / 1.915679693 = 1.2434611719E-11 m
rn* = Rn / rn = 9.233682133E+12
This is obviously true for all spin-1/2 particles; therefore if some atomic nuclei are spin-1/2 particles, as
proposed in Chapter 6, their sizes will follow this same formula. The radii can also be calculated by
inverting the magnetic moment equation and using the measured moments.
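Applying the anomaly corrections is simple arithmetic; here is a sketch that reproduces the adjusted sizes listed above (the anomaly factors and ideal radii are the ones quoted in the text):

ideal = {
    # name: (ideal spin radius r in m, ideal overall radius R in m, magnetic moment anomaly)
    "electron": (1.295531888767e-21, 1.295531888767e-14, 1.0011596521811),
    "proton":   (7.055687186680e-25, 2.378794340517e-11, 2.792847356),
    "neutron":  (7.045974880773e-25, 2.38207331592e-11,  1.915679693),
}
for name, (r, R, anomaly) in ideal.items():
    r_adj = r * anomaly    # spin radius grows with the anomaly
    R_adj = R / anomaly    # overall radius shrinks by the same factor
    print(f"{name:8s}  r = {r_adj:.6e} m   R = {R_adj:.6e} m   r* = {R_adj / r_adj:.4e}")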
There are a couple of unusual coincidences that occur in the SI system as a result of placing a
physical size onto particles; not to mention our newfound relation between frequency and mass. First, for an
ideal electron the circumference of the ideal particle is equal to:
Circei = 2pi·Re = 8.187E-14 m
Unfortunately, this also has the same number as Einstein's equivalent energy, which does not exist as
such in UT, but if it did, it would be the maximum particle momentum with units of kg·m/s. The actual
circumference of the electron is equal to the adjusted radius times 2pi; therefore, any confusion can be
eliminated:
Circea = 2pi·Re(adj) = 8.1776E-14 m
The other unusual circumstance has to do with the specific charge. In the SM, it is merely the charge
divided by the mass:
e/me = 1.75882E+11 C/kg
In UT however, the numerical value of specific charge actually equals the ideal current:
I = ω·e/2pi = 1.75882E+11 C/s
Again, the actual current is determined by the actual r value and will be slightly less than this ideal value.
It seems that the reason Coulomb's force is so large is that the charge is determined by the speed
of the W2 wave, which is so much greater than anyone could have imagined. In any event, either equation
related to charge shows a physical relationship between the charge and the particle size. The SM does not
currently place a fixed size on particles, but it has remained a goal. UT fulfills that goal and supplies
answers in a physical way concerning electrostatic and magnetic effects on particles.
An important aspect of particles is their mass. Mass is something not clearly understood in the SM.
Ultrawaves are very much like cosmic strings in that they have an inherent associated mass content. The
total length of W2 wave containing mass can be calculated as equal to the spin circumference times the
number of times it circulates in the overall mass/charge circumference of the torus’s hoop. Using the
electron, proton and neutron mass values respectively, from the 2010 NIST Codata information, the
equations for the total length of W2 wave can be reduced to:
LW2e = 2pi·Rie·Cc/c = 2.426310E-5 meters
LW2p = 2pi·Rip·Cc/c = 4.455076E-2 meters
LW2n = 2pi·Rin·Cc/c = 4.461217E-2 meters
When calculating the number of seconds that the overall torus takes to rotate and the number of seconds the
spin takes to rotate, the two numbers are related by their relative velocities, which are c and Cc. The
overall string length works out to be the same as the ideal torus circumference times Cc divided by c.
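The three string lengths can be reproduced the same way; this sketch multiplies each ideal torus circumference by the assumed Cc over c ratio:

import math
c  = 2.99792458e8     # m/s
Cc = 8.93591552e16    # assumed ultrawave velocity, m/s
ideal_R = {"electron": 1.295531888767e-14,
           "proton":   2.378794340517e-11,
           "neutron":  2.38207331592e-11}
for name, R_i in ideal_R.items():
    L_W2 = 2 * math.pi * R_i * Cc / c   # total length of mass-carrying W2 wave, m
    print(f"{name:8s}  L_W2 = {L_W2:.5e} m")
# electron ~2.426E-5 m, proton ~4.455E-2 m, neutron ~4.461E-2 m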
[The following information is related to specific calculations given in earlier chapters, but is not in
sequential order with respect to those chapters.]
The equations from Chapter 10 are straightforward and were used to determine the range of
variations in values for the constants me, e and h. Two of the equations were shown already, those for the
fine structure constant and the Rydberg constant. The other two equations that were used represented the
Bohr magneton in terms of e, and the electron spin radius in terms of the electron spin increment (mass-equivalent frequency number):
μB = re·Cc·e/2
re = AT/(2pi·fe·Cc)
In Chapter 9 we discovered that particles can combine either by overlapping their spatial coordinates
and connecting outer edge to inner edge, i.e. nesting, or possibly by overlapping their magnetic moment
radii edge to edge. Nesting appears to be the mechanism involved in the creation of a deuteron by a proton
and neutron. It first requires that the neutron lose some mass, after which the two particles must match
secondary spins to travel in lockstep. To achieve this matching of spins simply requires fixing the frequency
by using an equation that combines the ratio of magnetic moment radii and the ratio of mass frequencies to
equal a value of one, or simply:
ADJpn = Rp/Rn·rn/rp = 1
Also, the range of values that satisfy this equation must be determined when the mass is fixed and the
magnetic moment radius is altered to fit the equation for AT, which was AT = 4pi2·Rx·rx. And, the final
limitation is for p and n to always be separated by the deuteron’s magnetic moment value d. Solving
these equations simultaneously reveals that a convergence occurs at a single point that has a value for the
proton’s R radius that is much smaller than the normal R value. The same equation set is used for the
neutron and produces a similarly small value for Rn.
Calculating a probable size for the two-particle deuteron nucleus required making assumptions. The
first assumption was the neutron mass change proposed in Chapter 9, which is also the Standard Model
view of how the combination should occur. The second assumption was that the proton and neutron
conform to specific overall radii that fit between their maximum or “ideal” sizes, and minimum allowable
sizes where their magnetic moment radii equal their R radii. This is the point at which the donut shape of a
particle has no discernable hole in the middle. The third assumption isn’t really an assumption but a
postulate, the resultant magnetic moment is positive, therefore the proton’s positive magnetic moment value
must always be bigger than the neutron’s negative magnetic moment value by 4.331E-27 J·s. Subtraction of
magnetic moments is a reasonable assumption considering that the proton magnetic moment minus the
neutron magnetic moment is 4.44E-27 J·s, which is already fairly close to the measured deuteron value.
The size of the deuteron charge radius was measured in 1998 to be 1.97535E-15 m [A. Huber et al.,
Phys. Rev. Lett. 80 (1998) 468], which is smaller even than that of the electron. Acquiring such a small size
can only be accomplished by having the magnetic moment anomaly of the particle or particles that comprise
the deuteron be large. A size range is easily defined by characterizing a two-particle system in either
the edge-linked configuration or the nested configuration. The maximum size is for two edge-linked
particles of equal mass and ideal form, and the minimum is for a nested pair at maximum anomaly condition
when r* equals 1—Rx equals rx. A restraint on the upper limit of this range is applied when the positive
magnetic moment value is taken into account. The maximum size is reached using a reduced mass neutron’s
ideal size and a proton size corresponding to a magnetic moment radius equal to that of the reduced mass
neutron plus that of the deuteron. The lower limit remains that for the proton when Rp equals rp. Proton and
neutron particle values respectively are at maximum:
           Particle Mass          Magnetic Moment          Mag. Mom. Radius       R radius (adj.)
Proton:    1.672621637E-27 kg     9.38653704661E-27 J·s    1.303717797E-24 m      1.287397454E-11 m
Neutron:   1.670961453E-27 kg     5.05580190056E-27 J·s    0.702212039E-24 m      2.390165475E-11 m
Since either of these values is much larger than the measured value, an edge to edge construction can be
ruled out if using the maximum or ideal size for the neutron.
At the other extreme, the minimum is reached at the following values:
           Particle Mass          Magnetic Moment          Mag. Mom. Radius       R radius (adj.)
Proton:    1.672621637E-27 kg     2.94721868E-22 J·s       4.096831550E-18 m      4.096831550E-18 m
Neutron:   1.670961453E-27 kg     2.949454821E-20 J·s      4.096831076E-18 m      4.096832023E-18 m
These values are just as far away numerically from the measured value as the maximums, but in the
opposite direction. In real terms, a scale model would show that they are much closer physically being in the
smaller direction, but the fact that the actual value falls in the middle numerically may be significant.
The first line of attack in determining the deuteron size was to determine if there were any special
values that fit a logical physical system that fell between the extremes. One configuration existed, a nested
configuration. The special configuration occurs when the neutron R radius minus its adjusted spin radius
equals the proton R radius plus its adjusted spin radius, while still retaining the correct positive magnetic
moment value. Physically speaking, the neutron torus fits just outside the proton torus and they touch.
Frequency matching occurs and allows the two particles to rotate in unison. This condition should have no
special meaning for an edge-linked pair of particles, but evidence not yet presented will show that similar
construction of spins may have some bearing on how edge-linked particles behave.
For the proton and neutron respectively, the special values that produce a deuteron are:
           Particle Mass          Magnetic Moment          Mag. Mom. Radius       R Radius (adjusted)
Proton:    1.672621638E-27 kg     1.235068435E-22 J·s      1.715442995E-20 m      9.784078398E-16 m
Neutron:   1.670961453E-27 kg     1.235045127E-22 J·s      1.715382644E-20 m      9.784421480E-16 m
Magnetic moment anomalies for this configuration are now of the scale 2.445E+4. There are no rules for
what magnetic moment anomalies should be, only that there are no single particles with such large
anomalies, at least as far as the Standard Model is concerned. It is different for UT, which has particles with
anomalies as big as 3.337E+2 for 205 Thallium. An important thing to note about these particle size
relationships is that their r* ratios are now only about 5.7E+4, or about 2-1/3 times their anomalies, which is
considerably less than the 3.42E+13 r* value for an ideal neutron.
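The nesting condition itself is easy to verify from the table above: the neutron's adjusted R radius minus its magnetic moment radius should equal the proton's adjusted R radius plus its magnetic moment radius. A minimal check, using the tabulated numbers:

# Deuteron nesting check: the neutron torus sits just outside the proton torus.
R_p, r_p = 9.784078398e-16, 1.715442995e-20   # proton adjusted R and moment radius, m
R_n, r_n = 9.784421480e-16, 1.715382644e-20   # neutron adjusted R and moment radius, m
inner_edge_n = R_n - r_n   # inner edge of the neutron torus
outer_edge_p = R_p + r_p   # outer edge of the proton torus
print(f"neutron inner edge: {inner_edge_n:.9e} m")
print(f"proton outer edge : {outer_edge_p:.9e} m")
print(f"difference        : {inner_edge_n - outer_edge_p:.1e} m")   # ~1E-22 m or less; they touch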
The above nested particle construction has a radius of approximately 9.78442E-16 m, which is about
half of the measured value of 1.97535E-15 m for a deuteron. Either the particles can somehow link edge to
edge and use this nested set of values, or just as likely they force a shift in the AT constant to twice its
normal value. Such a condition automatically gives a spin of 1. That a nested value is extremely close to
half of the expected value was surprising. Considering the starting size of the components, the very fact that
the sizes shown above were the only special set of values in the whole range of possibilities makes
coincidence unlikely.
There were no considerations made for adjusting the overlapping or non-overlapping conditions with
respect to the actual size or shape of the torus, or to the size of the charge shell. There were also no
temperature considerations that could be a factor when dealing with any measured value. Just because the
calculated particle sizes when attached by their outer edges (edge-linked) are very close to the measured
size doesn’t mean we should assume that particular configuration is correct. It probably isn’t. We should
look at as many nuclei as possible to see if their values follow any kind of trend. Proposing the use of UT in
probing the atomic world seems the only logical path to follow at this time.
Combining an electron and a proton to create hydrogen also has a straightforward solution for a
nested configuration. Particle values that fit the nested configuration are for the proton and electron
respectively:
           Mass                    Magnetic Moment          Mag. Mom. Radius        R Radius (adj.)
Proton:    1.6726211417E-27 kg     1.8308604451E-22 J·s     2.5429245695E-20 m      6.600285730E-16 m
Electron:  9.109382145E-31 kg      1.8307193845E-22 J·s     2.5427286472E-20 m      6.600794295E-16 m
The listed CODATA charge radius for 1H is 0.8545 femtometers, which is almost exactly four
orders of magnitude smaller than that of a free proton as calculated by UT methods. The nested radius is 0.66
femtometers, which is even closer to the measurement than the calculated deuteron was to its measurement.
The spin for hydrogen, however, is not 1, so the radii should end up equal. In all particle combinations some
compensation may be necessary such as the angle of incidence issue mentioned in Chapter 11. No solution
has yet shown itself for how to determine the effect; only fitting to empirical data is possible at this time.
Finding that the calculated size of nuclei follows a linear reduction in size from the measured values would
at least give a correction factor, even if the adjustment mechanism were to continue to remain hidden.
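The same touch test can be applied to the hydrogen values tabulated above, with the electron torus sitting just outside the proton torus; again this is only a numerical check of the quoted figures:

R_p, r_p = 6.600285730e-16, 2.5429245695e-20   # proton adjusted R and moment radius, m
R_e, r_e = 6.600794295e-16, 2.5427286472e-20   # electron adjusted R and moment radius, m
print(f"electron inner edge: {R_e - r_e:.9e} m")
print(f"proton outer edge  : {R_p + r_p:.9e} m")   # the two agree to within rounding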
Because magnetic moments and not masses are the only factors in these calculations, the Rydberg
value for hydrogen has no influence in determining particle mass, but it may have some relevance to other
features. We may be able to use this value in determining the physical size or structure of atoms like
hydrogen; ones with a spin-1/2 character. It is easy to imagine that being able to so easily determine the size
and shape of particles would also allow determining the size and shape of atoms.
In Chapter 11, photons were determined to be converted particles, and for the most part they should
be electrons. The size of a photon converted from a single electron and without spin reconstruction could
have a volume as large as the volume of an electron torus:
PV = pi·re2·2pi·Re
= 3pi x (1.2970E-21)2 x 1.2940E-14 = 2.05157E-55 m3
The volume of an electron, on the other hand, assuming that a charge shell exists around it, is equal to the
volume of a sphere:
EV = 4/3·pi·Re3
= 1.333pi x (1.29403E-14)3 = 9.07659E-42 m3
Therefore, the minimum number of photons that would fit in the same volume as an electron would be:
EV / PV = 9.07659E-42 m3 / 2.05157E-55 m3 = 4.4242E+13
Even though this is a large number, and doesn’t take into account the possibility that an electron produces
two equal-mass photons, there is a limit to how many photons one can pack into a certain volume of space.
The power output, in total quantity of photons emitted from a laser for example, would then be limited by the
spatial volume of the output tube, and not just by the number of atoms used to produce the photons.
Other factors to be examined, which were not included in the main body of Chapters 3 through 9, are
the Compton wavelength and the de Broglie wavelength. The Compton wavelength reflects the maximum
energy a photon can have that still allows a measurement to be made when determining the location of a
particle. In the SM, Compton wavelength equations are based on particle masses and the h that represents
particle energy, or:
λC = h / (m·c)
Unbeknownst to current practitioners, it is actually accomplished by relating it to the particle size. In UT
form, h will be substituted for by AT, or its equivalent:
AT = 2pi·fx·Cc·rx
The electron’s Compton equation substituting frequency for mass and AT for h:
Ce = 2pi·fe·Cc·re / (fe·c)
(units are meters)
Clearly, the spin radius times 2pi is the circumference of the torus cross section, and Cc divided by c
represents the number of cycles of a waveform. What about using the other equation for AT? If we substitute
the other unit set equivalent to h, then the equation becomes:
λCe = 4pi2·re·Re / (fe·c)
(units are still meters)
Because the equation shown above is based on the spin radius, its value does not change as the particle’s
magnetic moment anomaly changes. If it were based on the magnetic moment radius then it would change.
These equations make sense of Compton’s intent. If the purpose is to define the minimum size a
photon can have before it can disrupt the actual particle it is intended to illuminate, then it succeeds. Not
only is the torus completely described, the speed of propagation and the frequency are included. However,
what did Compton actually achieve? Since there are two constants with only one variable in the SM
equation, the variable is the controlling factor. In effect, it would seem that Compton has equated photon
frequency with particle spin frequency. From a SM view this may only be interesting, but from an ultrawave
perspective, this should tell us something about the physical construction of photons.
We know that the Compton wavelength for an electron is 2.42631E-12 m; therefore it is simple to
equate it with an ultrawave length. An equivalent equation for λCe using just lengths and not mass frequency is:
λCe = 2pi·rie·Cc/c = 2.42631E-12 m
Basically, this is the spin circumference multiplied by the factor that results from dividing Cc by c. For
an electron, this value is roughly 94 times bigger than its torus diameter of 2.59106E-14 m. For a proton,
however, the Compton wavelength is about 12,906 times smaller than its torus diameter of 1.7055E-11 m:
λCp = 2pi·rip·Cc/c = 1.32141E-15 m
What you should notice, though, is that the cross section of the torus becomes the controlling factor in
the equations. The relationship reveals something remarkable, which is simply that photon frequency is
proportional to the particle torus cross section and not to the overall torus diameter. A consequence of this
proportionality is that, for a photon to interact with a particle or atom, its wavelength must be at least as
small as the circumference of the particle's spin cross section times the Cc over c ratio. The Cc over
c ratio is likely just as important to photon construction as it is to the cross
sections of lone particles, or even particles within atoms. This physical connectedness should help in
determining both the construction of photons and the construction of atomic matter.
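Both Compton values can be reproduced with the ideal spin radii and the assumed Cc used in the earlier sketches:

import math
c  = 2.99792458e8     # m/s
Cc = 8.93591552e16    # assumed ultrawave velocity, m/s
r_ie = 1.295531888767e-21   # ideal electron spin radius, m
r_ip = 7.055687186680e-25   # ideal proton spin radius, m
lam_e = 2 * math.pi * r_ie * Cc / c   # electron Compton wavelength, m
lam_p = 2 * math.pi * r_ip * Cc / c   # proton Compton wavelength, m
print(f"electron: {lam_e:.5e} m")     # ~2.4263E-12 m
print(f"proton  : {lam_p:.5e} m")     # ~1.3214E-15 m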
What about de Broglie’s contribution; can we learn anything about what he discovered? As a matter
of fact we can. The de Broglie equation has the form:
λ = h / (m·v)
(which equals the Compton wavelength when v = c)
The UT equivalent equation for an electron is then:
λ = 2pi·re·fe·Cc / (fe·v) = 2pi·re·Cc/v
This is the same as the Compton equation, except v is substituted for c, which then implies that the
wavelength becomes infinite as the velocity falls to zero.
To apply this equation to other particles is straightforward; for a proton the value is:
λ = 2pi·rp·fp·Cc / (fp·v) = 2pi·rp·Cc/v
Comparing the electron and proton equations, it is easy to see that the relationship is the same as the
Compton one: it compares the sizes of the spin radii. The only difference is that a velocity other than c is
the primary driver of the final value.
The SM frequency associated with the de Broglie equation is the apparent inverse relationship:
f = mx·c2 / h
(units are 1/s)
However, if Cc is substituted for c2 the UT equation is:
f = fx·Cc / (2pi·fx·Cc·rx) = 1 / (2pi·rx)
(here the units are 1/m)
Either de Broglie found a relationship that is not really a frequency, or a simple UT substitution of Cc for c2
cannot be performed on this equation.
To make an accurate determination we need to look at the de Broglie frequency equation in its
relativistic form:
f = mx·c2/h · 1/(1-v2/c2)0.5
The de Broglie equation shows time and length contractions for particles traveling through space as a
function of mass. In UT, the frequency is used instead of the mass, which makes more sense. Since the only
change between the relativistic and non-relativistic versions in UT does not affect the units, is there another
way to formulate the equation that works? Yes, another velocity unit needs to be added to the numerator. UT's
version of the above equation then becomes:
f = c2 / (2pi·Cc·rx) · 1/(1-v2/c2)0.5
or possibly;
f = c·(c+v) / (2pi·Cc·rx) · 1/(1-v2/c2)0.5 (if a larger value is needed).
Both the de Broglie and the UT forms of the relativistic portion of the equation now have the same units and
give approximately the same answers. The relationships of c to v and of c to Cc are what we are concerned
with here, not v to Cc.
When making the calculations for frequency, a comparison of the de Broglie and UT values shows
some interesting differences. First, the starting frequency for the SM version of the de Broglie equation
using a velocity of 1m/s is 1.23555E+20 Hz, and with a velocity equal to c-1m/s it is 1.51271E+24 Hz. If
the first UT equation above is used that contains c2 then the values are just 1.000056 times those of the
SM’s values. If the second equation above is used that contains c·(c+v) then the values start with the same
factor of 1.000056 and progress to a factor of 1.333639 at 100000000m/s, and finally to a factor of
2.000112 for a c-1m/s velocity. The frequency differences between the SM and UT equations are small
enough that they may lie beyond what can be measured, so determining which version of the UT equation
might be correct may not be possible at this time.
It seems that what we have learned about de Broglie is that he expanded on Compton’s work to
include a frequency determination, as well as the effects of relativity on the motion of particles through
space, which is why his first equation is equal to Compton’s when v is equal to c. His only mistake may
have been to have an intermediate equation that is shown as independent of the relativistic features related
to velocity when in fact they cannot be separated as he indicated, especially if c·(c+v) is the correct usage
in the UT equations. We have also learned that the existence of the Cc velocity is important to the study
of relativistic endeavors when calculating the motions of matter particles through space, and not just to the
study of particle construction or particle force generation.
Interlude
Most of the information contained within these later chapters is purely speculation based on the findings
given in the preceding chapters. It is an attempt to reconcile a wave-constructed microcosm with the
apparent solid macrocosm we observe around us. Because it has been proven that something resembling
stringwaves creates particles, these waves must also create everything about the universe as a whole. The
intention of this part of the book is to attempt explanations for the existence of our universe, its large-scale
structures, basic forces, and some of its matter to spacetime relationships based on the concepts of ultrawave
theory. Many of the subjects discussed are just as mysterious today as when they were first discovered.
Speculating on their causes is nothing new, but with the additional insight that UT provides the information
in the following chapters may stimulate some new ideas about these old subjects.
The attempt was made in the first part of the book to make it clear that just because all of the
numbers seem to fit a particular pattern doesn’t mean that the pattern is correct. In the end everything put
forth to you about particle and force creation may have a slightly different explanation that was not even
considered. However, it is the mathematics that fits, and it can be used without regard to the explanations.
Just like quantum theory, if ultrawave theory makes problems easier to solve, and if it can make
some predictions that can be tested, then it is a worthwhile development. UT is most undoubtedly one of
those developments, especially since it can be applied to the universe as a whole, which will be explained in
this second part of the book. Because these ideas are merely suppositions at this point, there are no hard
mathematical proofs supplied. Some of the statements made in these later chapters are just opinion, and it
would not be surprising to learn that some of them are wrong. Experimentation, observations, and time will
determine how many are correct.
Chapter 13
Creation
“Incidentally, disturbance from cosmic background radiation is something we have
all experienced. Tune your television to any channel it doesn't receive, and about 1
percent of the dancing static you see is accounted for by this ancient remnant of the
Big Bang. The next time you complain that there is nothing on, remember that you
can always watch the birth of the universe.” Bill Bryson (from the pre-digital TV era)
One of the major puzzles that concerned cosmologists in the middle part of the twentieth century was why
the universe appeared to be so uniform. Extrapolating backward to the original “big bang,” the universe
would have needed to be more nearly uniform than light speed interactions are able to produce. Clearly,
something more was needed to explain the extreme smoothness. In 1980 a theory was introduced by Alan
Guth to explain this phenomenon and to fix the Standard Model. The theory was referred to as “inflation”
and used the concept of negative vacuum energy as the propulsive force. Supposedly, the negative vacuum
energy would cause faster than normal expansion and thereby spread out the material universe artificially.
Although there is no evidence indicating that anything like negative vacuum energy exists, the inflation idea
was a good one. It provides a query point for asking what other mechanism may be responsible for creating
such a rapid expansion. One of the three main influences leading to the discovery of the Cc wave speed was
this need for rapid expansion of the infant universe.
According to the Standard Model, the big bang unleashed the entire universe from a hot and dense
singularity. The idea of a dimensionless point being able to contain all the matter in the universe is nearly
impossible to believe, but pretend for a moment that it is perfectly credible, provided there is just a little
physical extent involved. The creation mechanism could then be described as being like
a ball of tightly wound string, its layers separated by thin sheets, or the whole thing as being a
"protoparticle". The idea of the existence of such a particle first appeared in the work of Georges Lemaître,
a Belgian mathematician and Catholic priest, in 1931. He called it the primeval atom.
Genesis of the big bang can be imagined as something bad happening to the protoparticle, causing it
to become unstable and to break apart. If the rotational velocity of the protoparticle is set equal to Cc, the
radial fracture velocity must then be equal to c. Destruction of the proto-particle creates tiny 2-dimensional
bits and pieces of string, and expanding 2-dimensional sheets that will interact with each other on larger and
larger scales as the entirety expands. In effect, the original tightly-packed 4-dimensional object lost
cohesion and created a loosely-packed 3D universe plus a time component; the time component being a
measure of the interaction of the 2D components.
There are many reasons imaginable for why the protoparticle exploded. It may have happened
during contact with another protoparticle, if there are other such objects in existence somewhere. Another
reason might be that the velocity of the protoparticle increased or decreased due to a change in its
surroundings, which caused it to break apart. It could have been caused by something as simple as a flaw in
the original structure. Whatever the reason, the early rapid expansion of our universe can easily be
explained by using the W2 wave speed.
As was described earlier, the W2 wave can be thought of as a vibrating string wrapped around a
spinning disc. Being similar to the W2s charge wave of a particle, if the W2 wave were fixed at the poles of
a sphere, after completing one rotation, the next rotation would then cause a precession of one W1
wavelength in latitude and one W2 wavelength in longitude. This would cause the disc/string combination
to create a globe with discernible poles and a precessing spin. It would look similar to a spinning basketball
on someone’s finger. Two differences would be that it would slowly roll end over end while turning into the
spin, and most importantly, would be increasing the size of the sphere at each lap of the outermost wave.
Each revolution of the W2 wave would see an increase in diameter, determined by the W2 wave amplitude.
If during the initial period of creation, the W2 wave were not coupled to a W1 wave, then the W2
waveform would totally dominate the expansion rate. It would spiral outward at a much greater rate, even if
both wave types were generated at the same instant. The reason for this is the assumption of a small initial
diameter for the protoparticle. This means that the small diameter would allow the W2 wave to travel in a
circle in such a short time that it would be forced to overlap itself if something disturbed its usual motion,
creating a spiraling pinwheel. The offset distance would depend on whatever controlled the overlap of the
waves; wave amplitude, vibration, or possibly whatever it was that caused the fracture to occur. It would be
able to grow to a large diameter before the W1 wave moving at the speed of light could catch up. If a large
number of W2 waves were involved in the spiraling pattern, they would produce a sphere that the linear W1
waves could fill evenly. A necessity rather than a construct, this scenario gives the desired effect of smooth,
rapid, and uniform expansion.
At the end of the initial inflation, the W2 wave sphere is growing at a much slower rate
than the speed of light by the time the first W1 waves catch the W2 boundary. At the point where the two
waveforms collide, the W1 waves rebound and severe mixing occurs between the two wave types. There are
still many more W1 waves following the first wave so that the boundary is pushed forward at roughly light
speed. The outward expansion is then controlled by the W1 waves. Gravity does not enter the picture until
3D objects have been created. The continuous production of overlapping W1 and W2 waves produced as
decoupled waves comprises the volume Einstein called spacetime. When also including all of the coupled
waves that exist as matter it is called the Universe.
The only major difference between UT and all previous theories of universal origin is that the
maximum speed within the universe is Cc rather than c; all other differences are minor in comparison.
Although this is a simple statement, it has far-reaching implications. The most important of these is the
incorporation of this velocity into all aspects of the creation of our material reality. Without this aspect of
matter, humanity is limited to the creation of other concepts, such as the multi-dimensional world of
superstring theory. A single wave with Cc velocity is obviously much easier to handle, both from a
mathematical and mental visualization standpoint, than are multiple physical dimensions greater than three,
or multiply interacting universes, as current theories suggest.
It is not clear what the initial size of the creating mechanism for the W1 and W2 waves might be.
For now let’s assume that the frequency of the W1 wave is equal to the electron mass, which is the smallest
frequency known. To calculate the time required for the W1 and W2 waves to be at an equal radius from
the center, the equation for a spiral was used for the W2 wave and a simple linear time-distance relationship
was used for the W1 wave. When the resulting radii and time intervals of the two different waves coincide,
that will be the limit of inflation. The resulting radius at which the coincidence occurs is 2.730924E-22
meters.
The value of the radius was determined by equating the frequency times the speed of light with the
radius of a spiral, which is equal to the total length of the W2 wave divided by pi times the number of
rotations minus the initial radius:
c·f = LW2/((pi·N)-Ri)
The assumption was made that the W2 wave has a spiral frequency spacing equal to that of the W1 wave.
Obviously, the intercept radius will be either smaller or larger if this is not true. The reason this equation is
not in the first part of the book is that both the frequency and the initial radius are undetermined, therefore it
makes no sense to treat it other than as an exercise in how to go about determining a possible inflation
scenario. It is not important what the initial radius is, or if the spiral frequency equals the W1 frequency,
because the matter distribution is automatically smoothed just by the very act of having to fill the void
created by the W2 spiraling waves.
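As a purely illustrative exercise, and only under assumptions of my own choosing (an Archimedean spiral wound at Cc, a hypothetical turn spacing s, and no coupling to the W1 wave), the intercept idea can be sketched numerically; changing the spacing moves the intercept radius, which is exactly the point made above about the undetermined quantities:

import math
c  = 2.99792458e8     # m/s, speed of the linear W1 front
Cc = 8.93591552e16    # m/s, assumed speed at which the W2 wave winds its spiral
s  = 2.88e-30         # m, hypothetical spacing between successive spiral turns
# An Archimedean spiral r = s*theta/(2*pi) has arc length ~ s*theta^2/(4*pi) after many
# turns.  Winding at speed Cc gives r_spiral(t) = sqrt(s*Cc*t/pi); setting this equal to
# the linear front c*t gives the intercept time and radius below.
t_meet = s * Cc / (math.pi * c**2)
r_meet = c * t_meet
print(f"intercept radius ~ {r_meet:.3e} m")   # ~2.7E-22 m for this assumed spacing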
What if the spiraling motions of the W2 waves are how they always behave? Could it be that what
had been assumed as resolved about the shape of a particle looking like a torus may not be accurate after
all? This latest development sounds interesting, but there seems to be no way to create a consistent magnetic
moment using a spiraling W2 wave, unless it could be made to loop back and begin again, as the old 8-track
tapes used to do. This is a very unlikely scenario, but demonstrates further just how many possibilities there
are for particle construction. Having so many possibilities highlights the difficulty in having to evaluate a
lot of wrong ones to arrive at the truth.
It would be great to calculate how many W2 waves can be squeezed into the inflation sphere to see
if it is enough to supply all the visible matter in the universe. Sadly there is not enough information to make
an attempt at describing the size and shape of the W2 wave, or to have any hope of determining how long
the creating mechanism was in action. Without some hard evidence regarding how long the waves were
being manufactured based on a specific starting diameter, or the W2 wave size, there can be no progress in
this area. Assumptions can be made that are then fitted to the observations, but that merely emphasizes the
details picked for the assumptions and provides no new insights.
The size of the Standard Model inflation sphere—the actual shape does not have to be spherical—is
ambiguous and has been listed as anywhere from the size of an atom to that of a softball, which is about the
range of c versus Cc, making me think that the atom size is probably closer to reality. If we make an
assumption about what would happen if the rebounding of the W1 waves traveled back to the creating
protoparticle, then it can be imagined that the compacting W1 waves may have halted further production of
both the W1 and W2 waves. If this assumption is true, the total amount of wave production can be
determined by pinning down the W2 wave amplitude and the starting diameter of the protoparticle. As of
now, the only thing that can be done is to try some numbers and see if it creates what we see today when we
look out into the cosmos.
Because particles are made from waves, the idea of a super-hot, super-dense singularity is not a
reality for the originator of the big bang. Instead, a 4-D protoparticle that is beyond the realm of things such
as temperature is the most likely possibility for the creating mechanism. The very definition of temperature
is the rate of vibration of atoms. The Standard Model predicts the temperature during creation to be in the
billions of degrees (temperature-scale independent). If we were to consider the vibration of the waves
generated by the protoparticle as an indication of temperature, then the frequency of the W2 wave should
have the necessary vibration rate to account for the proposed temperature.
It seems nearly certain that the expanding ball idea presented earlier is a real possibility for how our
universe actually behaves. The rate of expansion of a universe created in this manner depends on the speed,
wavelength and amplitude of the waves that create it. Even though the W1 waves can expand at the speed of
light, they must have been partially constrained by the initial impact with the W2 waves at the end of the
inflation period and so immediately lost some impetus. The final expansion rate may have been limited by
the Cc velocity in some as yet unidentified way.
If the expansion rate of the Universe were only controlled by the W2 waves, it would be necessary
for the wavelength and possibly even the amplitude of the W2 waves to increase incrementally if they
continued to travel around a circumference while still being able to produce a large expansion rate. If the
wavelength or amplitude can not change, then a continual slowdown represented by the continuously
growing circumference and fixed W2 wave speed would limit the universe to a very small size. Decoupling
and spreading the W2 waves as W2o waves appears to be a necessity to prevent this condition. The
multidirectional movement of W2o waves allows spacetime to grow in relation to the more reasonable W1
speed, while letting the W2-type waves remain consistent as far as frequency and amplitude are concerned.
The wavelength and speed of any decoupled W1 wave is unknown; therefore, the rate of expansion
of the universe is also unknown. However, it should be slower than, or at least no faster than the speed of
light. Gravity may or may not have a continual drag on the expansion rate. (The idea of inconstant gravity
will be dealt with in a later chapter.) Using the two criteria of W1 expansion and gravitational drag, we can
easily see that either our universe will expand forever at an increasingly slower rate based on gravitational
influence, or if gravity is not always a factor, it will continue to expand at a constant rate determined by the
W1o wave speed.
The recent evidence of a speed-up of universal expansion attributed to dark energy is contrary to the
UT view of expansion. If it is determined without a doubt that this is the case then the UT view is incorrect.
It is very possible, however, that the existence of dark energy will be shown as a misinterpretation of events
based on the misguided view of the SM. Once you finish reading the next few chapters, you will have
gained a different perspective about how our universe could operate.
So far, all of the focus has been on the origins of the Universe; its future can also be fairly
clearly defined in UT. If gravity is found not to be constant, but to be a relatively local phenomenon, then the
future portends an initial slowing followed by a constant-velocity expansion. This assumes that the nature of
space, time and matter does not interfere with this expansion. There are several scenarios that would change
this view to be more pessimistic. The first is that there may be a size limitation to the universe that allows
the W1o’s to decouple so completely from the W2o’s that it puts an end to spacetime as a medium. Our
universe could disappear when the waves reach a particular time/distance boundary. Fortunately, the
likelihood is that it would take an extraordinarily large amount of time, hopefully billions or trillions of
times longer than the universe has already existed, to reach such a state.
Another somewhat pessimistic view is based on the nature of space being composed of strings and
branes, which can therefore only expand to the point where the branes become perfectly flat at all locations.
This would result in a behavior somewhat similar to a phase change for matter, such as water turning to ice.
All branes would lock into place after a rapid slowdown and the strings would be holding them together in a
kind of mesh of components. The process would be rather bizarre looking to a 3-dimensional observer, as
space twisted into a solid 4-dimensional fixed volume that froze time into the fourth fixed dimension.
To determine the shape of our universe, it is necessary to know if the protoparticle responsible for its
existence accelerated or decelerated with respect to the W waves being generated. The universe may have a
line of creation rather than a point of creation, which would produce shapes such as a cone, or a torus, or for
that matter any number of odd shapes. To the average person the shape of the universe can be considered a
sphere, since that is the shape that is observed when one looks out at the universe from the Earth on a dark
night. Since the human eye requires a certain amount of light to activate the receptors, there is a limitation
to the distance light is visible, meaning that people can only see what is within a sphere of a certain
distance. With telescopes that distance is much greater than the naked eye, but is still limited to the point in
time when light can be seen at all. We will probably never know for sure what the actual shape is, but then
what does it really matter except to satisfy our curiosity?
With UT as a guide, it is possible to predict how structures such as galaxies, galaxy
clusters, and superclusters are formed. The initial expansion of the decomposing protoparticle will combine
W1 and W2 waves into the atomic component waves called W3’s and W4’s. These matter forming waves
quickly form particles, which are so energetic that the first objects to form, electrons, protons, and
everything in between, keep breaking themselves apart until the space between the components reaches a
specific distance. When that distance is reached, the components are able to settle into a stable formation,
which quickly permits the formation of the simplest atoms, hydrogen and helium. This is well known
because these two elements comprise most of the matter in intergalactic space in a mass ratio of about 3/4 to
1/4, respectively. The close proximity of the particles, atoms and free waves at this point creates
concentrated pockets of gravity in an evenly distributed pattern around the entire cosmos. Quickly these
gravity pockets will collect enough atoms to begin the formation of the first stars. Before that time, the close
proximity of particles and free waves does not permit the atoms to combine easily. It only makes sense that
the very high concentration of material available to form stars would ensure that the first stars would be
extremely large. A high percentage of all matter could have been locked up inside these first stars.
All matter particles are alike in that their spin characteristics have been predetermined. That is, they
all have inherent spins because of the way they were created. If these spacetime entities were created from
waves with arbitrary spins, there would be the same amount of antimatter as matter. These two particle
types would destroy each other continually as each reformed from the basic wave constituents, making any
solid objects nearly impossible. The great preponderance of matter in one spin orientation for each particle type
bolsters the conclusion that the universe must have been created with a distinctive spin preference that each
particle type (mass increment) falls into. No rotational bias means no material objects and no us. Since we
exist, we also know that this bias must exist. Particle physicists typically refer to this type of self-consistent
but lopsided condition as asymmetry. Usually, physicists prefer symmetry and want things to fall into a
symmetric pattern of operation. What this natural asymmetry of particle spin allows is the creation of
electrostatic charge, which is what draws all matter particles together to make atoms.
Particle spin direction must be such that it favors production of proton spin in one direction and
electron spin in the other. Evidently, the spin of the neutron is the same as that of the electron instead of the
proton. This implies that some neutral particles could spin the same direction as the proton. For the most
part only the existence of electrons and protons as antiparticles allows the formation of hydrogen atoms. It is
these differences between all antimatter particles that allow atoms to interact. If only one spin direction
were permitted, nothing would cause the production of electrons instead of positrons, and we would not
exist.
Because of the rotation of the creating protoparticle and the waves it produces, the first stars will
have a predetermined but slow spin, otherwise why would they need to spin at all? If the Universe were
truly as smooth as is believed then the rotations between material components would cancel out and no spin
should exist. Since the first stars to form are massive stars, they will quickly reach the supernova stage and
begin the process of matter redistribution. Black holes (BHs) will be formed during the supernova events
that signal the end of the short life cycle of these massive stars. Because of the extreme change in diameter,
the act of concentrating the remaining material into a BH will make the slow spin of the parent star become
a fast spin for the BH child. The large size of these stars should ensure that they all form extremely massive
BHs. The proximity of these BHs to the high concentration of matter still present will cause most of them to
grow enormously massive, and in the process would be observed as quasars from our distant past.
The many supernova blasts from the initial star formation push matter away from each star's
center and toward the matter being expelled from other blasts. Uniformity of the universe is now subject to
more radical changes than could otherwise have been achieved through regular star creation. In essence, the
first supernovas provided the first real steps toward structural organization in the early universe. The
concentrations of material, now including heavier elements created by the supernovas, begin new stellar
formation. Fewer enormous stars are created, and many smaller ones begin to appear. The mixing of the
remaining material causes the newly formed stars to have faster spin rates with more varied directivity, so
that all subsequently formed BHs will twist and warp spacetime in all directions, hiding the original spin
direction of the universe from view.
Presumably, the large bubble structures seen on the scale of hundreds of millions of light years are
the remnants of the initial star formation. The pushing of material and new stars toward each other by the
first supernova explosions is surely the initial step in making the first galaxies, which by default would lie
on the edges of these bubbles. Eventually these bubbles may collapse back onto their originating BH
remnants, leaving very little matter left that is still capable of supporting life in the universe. The “Great
Attractor,” which appears to be pulling on our local group of galaxies, may be a super-massive BH begun
by a first generation BH.
Some questions about our universe’s origins are: What are the learnable characteristics of the object
that created the expansion of a wave-constructed universe? Was the object a W4 wave or some primordial
version of a W4? How big was it? What was the initial spin rate for the universe? Does the size of the
largest superclusters put a limit on the size of the protoparticle, or merely the size of the first stars? Now
let’s consider the implications of the answers, even if we do not yet know them. If some form of a W4
created our universe, how long did it take to stop producing waves? The very presumption that a
protoparticle existed presents the possibility of other universes being created in the same manner by other
such protoparticles. The only thing we can never know is what is happening outside our particular universe,
if anything. An educated guess would be that there are many coexisting universes, and possibly some
overlapping ones, and that the only distinction between them is the speed of their individual waves.
One possibility that is sure to exist if there are multiple universes is a mirror image or antimatter
dominant one. The consequences of an interaction between opposite ones with the same wavelengths would
be devastating, as the particles would all annihilate continuously, until some semblance of stability could be
achieved, or it may be that no stability could ever be reached. Luckily, the likelihood of having two such
universes in close proximity is remote; we should be safe from such a fate.
As you can see, the idea of the decomposition of a protoparticle is the most likely explanation for the
universe’s existence. Somehow we have ended up with different waveforms that have the wonderful ability
to create matter. Whatever the reason for our existence, whether accident or design, it will take a long time
and many observations to determine with any degree of certainty whether or not there is a reason. Curiosity
about why and how we arrived here as a species on a world full of similar species will always be a factor in
our behavior. With the fragility of our planet’s biosphere, which we are just now beginning to understand,
the hope is that the human race will be around long enough to find out at least one of the answers to the
questions posed above.
Chapter 14
Electricity and Magnetism
“We believe that electricity exists, because the electric company keeps sending us
bills for it, but we cannot figure out how it travels inside wires.” Dave Barry
Light is included within the total package known as quantum electrodynamics, which lumps it together with electricity and magnetism, but the three are not the same phenomenon. Electricity has long been known to be the result of the
movement of electrons as particles, which is very different from spin-1 photon wave propagation. It is likely
that magnetism is something wholly different from either of them. Merely the fact that they are all created
with the same waves has allowed mathematical manipulation to include them all.
It had long been suspected that the dislocation of electrons due to specific orientation of atoms
within conducting materials was what allowed the electrons to jump from one atom to the next with relative
ease, and that may very well be the case. With an extrapolation to include nuclei particles other than the
proton and neutron, as well as the knowledge of particle behavior that ultrawave theory provides, new
insight into this phenomenon of electricity should be expected. For one thing, the much larger size of
protons compared to what was previously believed makes it easier to visualize how electrons can move
between atoms composed of particles at least as big as protons at cryogenic temperatures.
Even though some materials show good conductance, there is still some need for electrons to jump
gaps, which is measured as the resistance of a material. Pressure from incoming electrons forces the gap
jumping, just as flushing water from the tank to the bowl in a commode causes the water to exit the bowl
until it again reaches equilibrium. In effect, the electrons are bumped out of position and replaced by new
ones. Although there are other ways in which electrons can travel within matter, this is the most logical
possibility for how it is accomplished.
Conductivity—the measure of electron flow—is sensitive to temperature, and temperature is a
measure of the rate of vibration of atoms. As temperature goes up, the ability of electrons to move freely
becomes hindered so that resistance values go up. The total absence of any temperature component allows
most conductive materials to show little or no resistance to electricity. This absence of resistance to
electrical conduction is called superconductivity. The superconducting nature of some elements and compounds occurs when their resistance essentially falls to zero and the conductivity becomes a maximum based on the total number of electrons admitted per unit area. This condition usually takes place only at temperatures within a few degrees of absolute zero, but with certain compounds it can occur at more than 70 kelvins above absolute zero.
Quantum theory does not have a simple explanation for how matter can allow superconductivity
regardless of the temperature factor. The explanation for how matter creates superconductivity using
ultrawave principles is very straightforward and depends on electron configurations. UT proposes two
different ways in which this can be done, with a subgroup of two explanations of what may happen on a
detailed level. Both explanations involve the configuration of the matter itself. When the atoms of a material
are arranged in a geometrical pattern or lattice, and the pattern is tight enough and ordered in the proper
manner, a natural pipeline is formed that is in synchronization with the vibration of the electrons that will be
responsible for the superconductivity. This does not mean that all electrons in a material are displaced; on
the contrary, only a small group will be. In fact, it may be that many of the original electrons making up the
atoms are responsible for the actual formation of the pipeline and must remain in place for it to occur.
Usually the electron vibration in a material is high because of the high temperatures of everyday
existence. The frigid cold of Antarctica compared to absolute zero is like the staggering difference between
room temperature and a burning house. Cooling the atoms down to very low temperatures is the only way to
produce the desired synchronization. As each electron is dropped into this synchronized area, it is propelled
effortlessly along the pipeline. The first configuration, then, is when a single element’s atoms slow
sufficiently and open a pathway that is large enough for an electron to travel through, as its size must be
smaller than the opening through which it is traveling. The second configuration deals with multiple
element compounds that have their atoms aligned in rows to produce similar openings that extend through
the entire material. The different materials tend to dampen each other’s motions, causing stability at a higher
temperature than single elements. Superconductivity seems to depend solely upon this ability to stabilize
elemental vibrations into specific spacing patterns.
So far it has been assumed that electrons in their natural state are required for superconductivity,
which can be considered the first subgroup explanation. The second subgroup explanation allows for the
possibility that the electrons change form when the material vibration falls below a certain threshold level.
The vibration rate of the parent material and the electrons themselves could reach a resonance state that
imparted just the amount of energy necessary to cause the electrons to be turned into photons. The photons
are then guided by the magnetic fields within the lattice configuration of the material. For a photon to be
controlled by a magnetic field, it must be configured as indicated in Chapter 11, with its magnetic field at
right angles to its electric field. Traveling unimpeded through the tunnels created by the atomic nuclei, they
can never leave the material until the material is grounded and the photons are again turned back into
electrons. The velocity of the flow should be much greater if photons are responsible for electric flow.
Two things can be inferred from the above supposition. The first is that measurement of the flow
velocity should tell us if electrons or photons comprise the superconducting stream, and the second is that
each element or compound would have its own vibration rate and hence its own photon frequency. Both of
these should be measurable. The first one is easy because photons travel at light speed, whereas electrons
are significantly slower. The second one is more difficult and depends on being able to measure the
frequency of the photons while still trapped in the parent matter. This experiment may not be possible at our
current level of technology. Still, if we can measure the velocity of superconductivity then we should have a
better understanding of the processes involved.
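As a rough illustration of why the flow-velocity measurement is decisive, the sketch below compares transit times across a one-meter superconducting sample for a photon stream moving at light speed and for electrons moving at an assumed transport speed; both the sample length and the electron speed are hypothetical placeholders, not measured values.

    # Transit-time comparison for a hypothetical 1 m superconducting sample.
    # The electron transport speed below is an assumed placeholder value,
    # not a measured quantity; only the photon speed is a known constant.
    C = 299_792_458.0          # speed of light, m/s
    V_ELECTRON = 1.0e6         # assumed electron transport speed, m/s (hypothetical)
    LENGTH = 1.0               # sample length, m

    t_photon = LENGTH / C
    t_electron = LENGTH / V_ELECTRON

    print(f"photon transit time:   {t_photon:.3e} s")
    print(f"electron transit time: {t_electron:.3e} s")
    print(f"ratio (electron/photon): {t_electron / t_photon:.1f}")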
When temperatures approach absolute zero, another phenomenon can take place: many atoms of the
same element combine to form something that behaves as if it were one large atom. This state of matter is
called a Bose-Einstein condensate, after Satyendra Nath Bose and Albert Einstein, who jointly developed the
theory of its existence. Under UT, this is not a surprising outcome for matter cooled to such a low
temperature. The stability of a particle torus can be enhanced by cooling its vibrations so that a very large
torus might be built in increments of the individual nuclei of the element. It also makes sense that neutral
particles would be the most likely candidates for producing this state of matter since the electrostatic force
would not interfere with the combining process.
Although some compounds become superconducting at somewhat higher temperatures than others
do, the temperatures are still below that which is conducive to everyday use. The challenge is to understand
the parameters of the pipeline effect and how it can be controlled. Materials that can handle a large flow of
electricity and remain stable at a normal Earth temperature range will transform the entire energy supply
system. Many new inventions will appear that were not possible before without great cost and/or
complication.
A Josephson junction represents another temperature-related electrical condition that makes perfect
sense when viewed from the ultrawave perspective. With a Josephson junction, a small electrical flow can
occur even through a normal insulator. Because all atomic particles are waves, the ability of the waves to
create situations where they can move their resonance bands from one location to another is not unexpected.
It may appear that an electron tunnels through a barrier, when in actuality it “sees” no barrier at all, just a
collection of other wave junctions that affect it in many ways, including allowing it to move its resonance
band beyond the point that would normally be considered the barrier. There are limits to electrons being
able to do this sort of traveling, which include the thickness of the barrier and the temperatures involved.
We could also fall back on the idea that some electrons can temporarily turn into photons, which
may find it easier to cross insulating barriers, but such conjecture is not necessary at this time. The point in
mentioning it is to recognize that there is more than one way to look at how effects like these can occur.
Prior to the introduction of ultrawaves, these effects were explained only by strange ideas like the
Heisenberg uncertainty principle, or the Copenhagen interpretation concerning wave/particle duality.
Magnetism is related to electricity in that each will produce the other when the motion of either is
initiated. For example, if a wire is moved across a permanent magnet, then a current will be produced in the
wire; conversely, if a current is applied to a wire, it will create a magnetic field until the current is stopped.
This is only possible for materials like iron, which can become magnetic but do not have to retain the
magnetism, or copper, which is an electrical conductor. Magnetism is related to the orientation of particles
within a material where the orientations of large numbers of particle tori are all in parallel planes. Since
electrons have the highest magnetic moment, they are the particles most easily affected by it.
It seems evident that electrostatic charge from the W2s waves and the magnetic moment of the W3
disc must interact in some way, since the W2s wave is attached in some manner to the torus of a particle.
Since only half the mass is indicated as existing within the torus, the remainder probably lies within the
charge shell. If some or all of the W2s wave loops can be deflected by the electro-magnetic interaction in
their travels around the circumference of the particle then they can essentially be ejected from the particle as
a much longer single loop. The resulting loop will then follow the same circular pattern that all the W waves
seem to follow until it returns to the torus from which it left. The greater the number of loops of charge wave ejected from the torus, the farther reaching the charge becomes.
The magnetic field, on the other hand, relies only on the number of particles involved in the addition of magnetic field lines, and hence the strength of the field. The field lines themselves may consist of the charge loops mentioned above, or they may merely be the 2D membranes that the magnetic moment and mass spin are generated from as they are pulled into the torus and then exit again, creating a much larger torus shape. It is not clear at this time if the waves combine into large waves, or if they just add numerically
when creating stronger fields. More research needs to be performed to determine which is correct.
When you ask any scientist what a magnetic field is, he or she cannot tell you. A field can be said to
exist without regard to the matter in the vicinity according to the SM, which is not possible in UT. There is
no common sense reasoning behind the idea of such a field in the SM, for any spatial area can exhibit the
properties of a field for no apparent reason other than its location. If you lay a strong bar magnet on a table
and then strew some iron filings about the table, they will line up in arcs that emanate from the two ends of
the magnet, showing the direction of the field lines. At least with ultrawaves, the “field” can be shown as
being composed of something, namely some form of W2 waves or membranes that define a set of
boundaries around matter particles.
The interesting thing about magnetism is that it is a local phenomenon. It follows the inverse square law the same as gravity, which means that doubling the distance from the center of the field reduces the force to one quarter of its previous value. (This is an indication that the area of the enclosing surface is involved in the calculation.) Conceivably its reach could extend forever, but in
actuality there must be a limit based on the total number of atoms involved in creating the field. There
would also be a limit to the strength of the field when considering the volume of space taken up by that
number of atoms. The only way to increase the field would be to decrease the space between atoms or
particles, such as what happens during the creation of a neutron star or BH.
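A minimal sketch of the inverse-square behavior just described: the relative field strength at a distance r scales as (r0/r)^2, so doubling the distance leaves one quarter of the strength. The reference strength and the distances used are arbitrary illustrative numbers.

    # Inverse-square falloff: relative strength versus distance.
    # Reference values are arbitrary and for illustration only.
    r0 = 1.0          # reference distance (arbitrary units)
    s0 = 1.0          # field strength at the reference distance (arbitrary units)

    for r in (1.0, 2.0, 4.0, 8.0):
        strength = s0 * (r0 / r) ** 2
        print(f"r = {r:4.1f}  relative strength = {strength:.4f}")
    # Doubling r each step cuts the strength to one quarter of its previous value.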
The most interesting thing about magnetism possibly being created by W2s waves is that it may be a
simple way to test for direct evidence of the propagation velocity of the W2-type waves traveling at the Cc
speed. If an experiment can be devised—it should be relatively straightforward—in such a way as to be able
to detect the speed of advancement of the magnetic field, it would be possible to determine its speed as
being c, or greater than c. All that has to be measured is the reaction time between a rotated magnetic field
and an object within that field. The UT proposal that magnetism is created by W2 waves can be tested and immediately proven false if the velocity turns out to be that of light. If UT is correct, then the linear speed of propagation of the field will be anywhere from Cc to approximately Cc divided by 2π. It seems as if
something like this should have been done already, but if the velocity is expected to be that of light then it
may not have been done yet. If it has been done, a result that seems as if it is an instantaneous reaction may
already be known. If the velocity is greater than c then it will take some refinements of the process to
determine the actual propagation velocity.
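To get a feel for the timing resolution such an experiment would need, the sketch below compares the propagation delay over a laboratory baseline for a field advancing at c and at Cc. The value used for Cc (about 8.99E15 m/s) is inferred from the ratio quoted in the gravity chapter, where 2E10 times c is said to be roughly 667 times Cc; both the baseline length and that inference are assumptions made only for illustration.

    # Propagation delay over a lab baseline for two candidate field speeds.
    # Cc is an assumed value inferred from the ~667:1 ratio quoted in Chapter 16.
    C = 299_792_458.0            # speed of light, m/s
    CC = 2.0e10 * C / 667.0      # inferred Cc, roughly 8.99e15 m/s (assumption)
    BASELINE = 10.0              # detector separation in meters (illustrative)

    delay_c = BASELINE / C
    delay_cc = BASELINE / CC

    print(f"delay at c : {delay_c:.3e} s")    # ~3.3e-8 s
    print(f"delay at Cc: {delay_cc:.3e} s")   # ~1.1e-15 s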
Chapter 15
Space and Time
"In the theory of gravity, you can't really separate the structure of space and time
from the particles which are associated with the force of gravity [such as gravitons].
The notion of a string is inseparable from the space and time in which it moves."
Michael Green (co-inventor of superstring theory)
It seems evident to anyone familiar with relativity theory that space and time are related in some manner, or
else Einstein’s equations would not work as they do. Time dilation effects become significant when large
velocities through space occur. Presumably, it doesn’t matter if it is a slow acceleration over a long time
period or quick acceleration over a short time period to achieve a particular speed; they both produce equal
amounts of dilation per total time if the average velocity is the same. A physical contraction in the direction
of motion is another aspect to the time-dilation/velocity interchange. It was actually Lorentz who first
quantified this behavior; hence you will frequently hear about Lorentz contraction when relativity is being
used. To determine how space and time are related, it is necessary to determine what they are.
Time is obviously a rate of change of something, but what? Could it be space? If that is the case then
what is space? Space seems to be the most empty of places, but you can’t deny that it has extent; therefore it
must be made of something. For example, you can’t see any of the noble gases, but we know they are there.
They have properties that can be measured, such as pressure and mass. Does space have properties that can
be measured? Sure, they just aren’t ones that we are familiar with in our everyday existence, and so cannot
be fully conceptualized by the human brain. For simplification purposes, it will be asserted that time is the
rate of change of space being traveled through, so how can it be proven?
The only property of space that is easy to measure is the gravitational effect due to the presence of
matter. Einstein asserted that space was an object that could be curved, which is what gravity actually is
from his viewpoint. It seems more relevant to think of it as concentrated rather than curved, but even that is
probably not the true nature of the beast. It is likely that no one is truly capable of understanding it at our
current level of intellectual consciousness. It will likely take concepts that do not yet exist in normal human
experience to grasp it fully. Nevertheless, this apparent curvature or concentration of space can be measured
as gravitational acceleration.
There are actually more ways to look at gravity other than as a simple concentration of space. The
next chapter goes into one possible physical aspect of it in detail, but for the time being let’s pretend that
space is a simple object. We must first set some ground rules even if they are just proposals, so let us
assume that the following items are true. Branes are what give space its physical extent. Ultrawave strings
are what concentrate space as mass, which is revealed to us as gravity. And finally, time is just the rate
of interaction between the strings and branes. It is easy to make statements without having solid evidence to
back them up. Since there is no simple physical evidence, some pretty good logic and reasoning based on
existing, albeit more complicated, evidence is needed to justify these postulates.
Light travels at the same speed regardless of frequency, or the red shifting or blue shifting of that
frequency. Einstein gave this matter a lot of consideration, which he then transformed into his theory of
special relativity. Unfortunately, the result was to eliminate an absolute frame of reference, which causes a
lot of consternation to the average person who finds it hard to grasp the concept that each point in space and
time does not exist in the same way as his own. For the most part those differences are so small that they have no discernible effect on everyday life. When those differences are great, however, the effects that are
produced can boggle the mind.
The equivalence of photon propagation velocity regardless of frequency provides a clue that some
sort of medium is involved. It is as if photons cross each minimum section of space at the same rate
regardless of how compressed or expanded that space happens to be. Space may not be a medium like the
proposed Aether of yore, which would have simpler measurable effects, but spacetime does have some
physical characteristics. As the quote to this chapter implies, matter and space can be thought of as linked in
such a way as to never be able to completely separate the two. And, since light is at least partly made of
matter waves, it also cannot be completely disassociated from space.
An example that highlights how space and time are related can be found with the so-called “twin”
paradox. This is the result of what happens when one twin stays here on Earth and the other goes out in a
spaceship that accelerates to nearly light speed in a great arc and then decelerates, returning after about a
year. When the twin in the spaceship returns to Earth he finds that his twin, who stayed behind, along with
everyone else, has aged many more years than he. The conundrum is that any signals transferred between
the two locations while the spaceship is traveling show that time is running slower in both directions and
not faster in one and slower in the other. This kind of oddity is what makes relativity so difficult for the
average person.
It is easily shown that the velocity, and not the acceleration, is what matters in these types of cases. For example, suppose one spaceship accelerates at a gravity equivalent to Earth’s through empty space, while another produces the same acceleration but does so against the pull of a star, which leaves it stationary. How is the type of acceleration relevant when comparing the time being measured on the two spaceships? For all intents and purposes it isn’t; since the accelerations are the same, they have no effect on time compared to each other. The time measured on the stationary ship should be the same as that measured on Earth, all other
variables being ignored. Time measured on the moving ship will depend on what its average velocity is
compared to the stationary spaceship and the star by which it is being held in place. It is the velocities relative to space, and not relative to each other, that are important; this typifies the difference between general and special relativity.
Does acceleration have anything at all to do with time dilation? Yes, there is a difference between
differing gravitational accelerations. If two clocks are placed one at the bottom and one at the top of a tall
tower, the time rates for the two will be slightly different, due to the distance of the clocks from the Earth’s
center of gravitational pull. The good news is that at the modest differences likely to be encountered by
humans there is so little difference in time rates that they can nearly all be ignored. It is possible that near
extremely massive objects or very concentrated matter that the effects can become important, but these
effects will be overshadowed by other concerns. Is there a difference between artificial and natural
gravitation? There shouldn’t be, but it would be an important finding if such a difference were found.
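For scale, the standard weak-field estimate of the rate difference between two clocks separated by a height h is g·h/c² per unit time. The sketch below applies it to a hypothetical 300 m tower; the height is an arbitrary example chosen only to show how small the effect is.

    # Weak-field gravitational time-dilation estimate for two clocks
    # separated by height h: fractional rate difference ~ g*h/c**2.
    # The 300 m tower height is an arbitrary illustrative value.
    G_ACCEL = 9.81               # surface gravity, m/s^2
    C = 299_792_458.0            # speed of light, m/s
    H = 300.0                    # height difference, m
    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    fractional = G_ACCEL * H / C**2
    print(f"fractional rate difference: {fractional:.3e}")
    print(f"offset per year: {fractional * SECONDS_PER_YEAR * 1e6:.3f} microseconds")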
Matter also affects time dilation if it is spinning, through an effect known as frame dragging. Essentially, space is pulled around to a certain extent by the motion of the spinning mass. At least one experiment has tested this effect; its results, published in 2011, confirmed it. It would have been utterly astounding to learn that it was proven wrong, as that outcome would have forced a change to our views of space and time. Actually, what was surprising for me was to learn the effect was not greater than anticipated; it was slightly below prediction.
Einstein’s theories of constant light speed and relative time are difficult to understand, because they
don’t fit with how we normally view the world. It is difficult to know how many physicists over the years
have completely understood these somewhat contradictory ideas, but it probably isn’t all that many. In the
latter half of the last century scientists made a big deal out of proving that relativity was correct by showing
that a clock traveling at a few hundred miles an hour on a jet experienced a different rate of time passage
than one on the Earth’s surface. A difference of only a fraction of a microsecond exists at these differences
in velocity, but it is enough to measure, and sometimes results are more important than understanding.
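The size of that jet-clock effect is easy to estimate with the low-velocity expansion of the special-relativity factor, gamma − 1 ≈ v²/(2c²). The airspeed and flight duration below are illustrative round numbers, not the values from the actual experiments.

    # Rough size of velocity time dilation for a jet-speed clock,
    # using the low-velocity approximation gamma - 1 ~ v**2 / (2*c**2).
    # Airspeed and flight time are illustrative, not the historical values.
    C = 299_792_458.0
    V = 250.0                    # roughly 560 miles per hour, in m/s
    FLIGHT_TIME = 10 * 3600.0    # a ten-hour flight, in seconds

    gamma_minus_1 = V**2 / (2 * C**2)
    offset = gamma_minus_1 * FLIGHT_TIME
    print(f"fractional rate difference: {gamma_minus_1:.3e}")
    print(f"clock offset over the flight: {offset * 1e9:.1f} nanoseconds")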
Between Earth’s one-gravity field and the weightlessness experienced in space by an orbiting object, such as the International Space Station, that is traveling much faster, is there a difference? The
difference between special and general relativity will determine the answer. The final answer reveals the most about how time dilation works from both relativistic viewpoints, but it won’t be given until you’ve had the opportunity to work it out for yourself based on the following information.
Two examples will show how these various ideas are related and the implications that arise. Our first
example is on a scale we are more familiar with and should be easy to grasp. Imagine that the sun captures a
planet that is traveling in a direction opposite to that in which all the other planets revolve around it. We will
think of it as having the same orbit as Earth, but slightly tilted so that the two do not collide, plus we will
ignore any gravitational attraction between them. The Earth travels around the sun at about 66,700 miles per
hour, therefore the velocity difference between the Earth and the newly captured planet is twice that
number. Special relativity states that time dilation depends on the viewpoint, so time measured on each of
the two planets is only from its viewpoint, suggesting that each would see the other’s time as slower. If you
could have a viewpoint from the north pole of the sun the two would have equal time passage from this
vantage point, with both appearing slower than on the sun. But as the twin paradox shows, the time
measured on each planet is actually something quite different from these observations.
Ultrawave theory removes apparent time conundrums like these by showing how the relationships
should not be taken from each viewpoint, but should be seen as a complete system. An answer presents
itself by knowing what all three objects are doing relative to the fixed frame of the universe itself. If we
view the solar system from a fixed point in space that moves with the sun through the galaxy then we see
how it works. First, the sun is rotating on its axis at approximately 4187 miles per hour. Second, the Earth is
revolving around the sun in that same direction; therefore the Earth will have a time differential based on the
difference between the sun’s spin and the Earth’s orbital velocity. Third, the captured planet’s velocity is
opposite the sun’s spin and will show a time differential based on how its motion differs relative to the
galaxy as compared to the Earth and sun. All three objects will measure time as indicated above, as long as no external motions contrary to those given are introduced.
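To put numbers on how small these differentials are, the sketch below evaluates the conventional special-relativity factor for the orbital and rotational speeds quoted above; it is offered only for scale, since UT would assign the dilation to motion relative to the fixed frame rather than to pairwise viewpoints.

    # Scale of the conventional time-dilation factor for the speeds in the example.
    # Speeds are converted from the miles-per-hour figures quoted in the text.
    import math

    C = 299_792_458.0
    MPH_TO_MS = 0.44704

    for label, mph in (("sun equatorial spin", 4187.0),
                       ("Earth orbital speed", 66_700.0),
                       ("relative speed of the two planets", 133_400.0)):
        v = mph * MPH_TO_MS
        gamma_minus_1 = 1.0 / math.sqrt(1.0 - (v / C) ** 2) - 1.0
        print(f"{label:36s} v = {v:9.1f} m/s  gamma-1 = {gamma_minus_1:.3e}")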
An example of an external motion that would mitigate to some extent the measured values is if the
sun were spinning opposite to the direction of rotation of the galaxy. An even greater difference would
occur if the sun were rotating around the galaxy opposite to the galaxy’s spin. Plus, the galaxy itself has
been measured as traveling within the universe toward something called The Great Attractor and will have a
time dilation compared to it, as well as to the universe as a whole. These additional constraints point out
how the system viewpoint is the key and not the individual viewpoint. There won’t be any overall numerical difference between using relativity and UT; UT will only remove the unfortunate issue of having to explain why the
special relativity view shows all times as slower when moving away from each other and faster when
moving toward each other.
Our second example expands the stage somewhat and throws a wrench into the works. Two galaxies
of equal size are moving away from each other at constant velocity merely by the expansion of the universe.
The galaxies are oriented so that the spins of all their components coincide with each other, making them mirror-image
galaxies. Clocks will measure time the same on any two mirror image planets regardless of the galaxy in
which they reside. From the special relativity view there is a difference due to the velocity of recession, so
both will see each other’s time as slower. Because UT sees the expansion of space as a non-time-affecting event, both planets’ times are identical. Now imagine that there is a third object equidistant between
them and both galaxies are receding from it at the same velocity. What does this mean to how time is
measured on this third object? It depends on what the object is doing, and believe it or not on what the
object is doing compared to the rest of the universe. If it were moving relative to the galaxies only due to the
expansion of space then it would measure time slightly differently from the two planets whose motions are
more complex. The problem is that there is no way to communicate the relative times to each other without
introducing delays.
Supposedly, if the clock used for measuring time on this object felt a gravitational acceleration equal
to that of the two planets in the opposing galaxies then its time measurements would be the same, assuming
its motion keeps it exactly between the two at all times. It does not matter if it is a planet of equal gravity to
the other two planets, or if it is a spaceship traveling in a bisecting curve under an artificial acceleration that
is equal to that same gravity. Two caveats are that its distances to the two must remain identical, and
that it has no lateral velocity compared to the planets. In the case of a moving object we would have to
assume a curve that was equal to the radius of measurement points on the two planets, which means that the
planets would have to have north poles that faced each other. Then it would be possible to draw a straight
line through all three locations that will always remain straight as each object rotated.
Let’s pretend that the object is a spaceship and it is on a tether attached to another spaceship at the
opposite end of a circle that represents the measurement diameters traced out on the two planets. After a
time the two spaceships are allowed to travel in a gradually increasing circle, hence the straight line will
become bent in the middle. The initial velocity on each ship is such that it experiences an artificial gravity in
a radial direction, which we will designate as 1G, and is also the same for each of the two planets. A
condition of the two ships’ motion is that they remain equidistant from the two galaxies. The tether is
lengthened and the ships accelerate until the velocity is such that the 1G artificial gravity is maintained.
Assuming the ships can be resupplied with fuel down the tether, the tether is repeatedly lengthened and the
ships keep accelerating until they have a forward velocity of .9c while still experiencing only the 1G radial
acceleration. The problem to solve is what happens to time as measured on each spaceship compared to
each other and to each planet?
Again, general and special relativity give varying answers depending on the viewpoint chosen.
Special relativity is the one that shows each observer the other’s time as different from its own, always slower
since they are all receding from each other. It will still give the correct result for the velocity differences,
but they are calculated separately. Ultrawave theory on the other hand needs to define the whole system to
determine what the true rate of time dilation is for each of the components. Time rate measured on each ship
is seen the same as that of the other ship because their artificial gravities are the same and their relative
velocity is zero. Each planet sees both ships’ time as the same, with a rate of recession due to the expansion
of the universe and an additional dilation provided by the .9c velocity that has to be added on by general
relativity. The actual time dilation is much greater than it would be with just the recession. It does not
matter which direction the ships travel, either clockwise or counterclockwise around the tether point. What
is important is that the spaceships are moving through space relative to the planets. Even though the
spaceships are stationary relative to the tether and each other, they are not stationary relative to the universe
as a whole, or if you prefer, spacetime in general. Space is therefore something with a substantive nature
and cannot be disregarded when discussing relative velocities.
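For concreteness, the sketch below works out two numbers implied by this setup: the half-length of tether needed for a 1G radial acceleration at 0.9c, using the simple circular-motion relation a = v²/r, and the conventional dilation factor at that speed. Both are illustrative only; the circular-motion relation used is the non-relativistic one, included just to give a sense of scale.

    # Tether radius for 1G of radial acceleration at 0.9c, plus the
    # conventional dilation factor at that speed.  The a = v**2 / r relation
    # is the non-relativistic one, used here only for a sense of scale.
    import math

    C = 299_792_458.0
    V = 0.9 * C
    G_ACCEL = 9.81
    LIGHT_YEAR = 9.4607e15       # meters

    radius = V**2 / G_ACCEL
    gamma = 1.0 / math.sqrt(1.0 - (V / C) ** 2)

    print(f"required radius: {radius:.3e} m  (~{radius / LIGHT_YEAR:.2f} light-years)")
    print(f"dilation factor at 0.9c: {gamma:.3f}")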
So, concerning the question posed earlier about whether or not the International Space Station experiences a time difference compared to Earth, the answer depends on the sum of all of the contributing factors. Things to be taken into account for any such calculation are:
gravitational acceleration felt, whether from a mass or artificially from acceleration, and just as importantly,
the velocity relative to the universe as a whole. Both of these factors must be calculated separately. By the
way, the answer to the question is yes. And for the record, the same is true of people driving around in cars
compared to those sitting at home; it is just that the differences are too small to matter.
An astronomical event that can be used as an example to show how relativity can give a wrong
prediction is the supernova event that created the Crab Nebula. If one were to calculate the production of light from the explosion of the star that created the Crab Nebula using velocities that are a large fraction of light speed, and tried to determine how long it should take for the light to diminish over time, it should take about 200 years. Instead, observations made in 1054 AD, when the light first arrived, indicate that it took approximately one year. We know from measurements that the red-shifted, blue-shifted, and
regular light from the explosion all arrived at about the same time, but somehow the expected time delay did
not occur. Einstein’s proposal that the speed of light is constant regardless of the motion of the observer, or
in this case the motion of the emitter, may explain the fact that the frequency does not affect the velocity,
but it does not help in explaining the time discrepancy.
There are really only three possible reasons for the time discrepancy, even if there are many sub-reasons that fit within these three generalities. The first is that we only see the light from the explosion that
is coming in our direction. The second is that there is less light expelled than believed. And the final reason
is that the time delay is non-existent for some reason, possibly pertaining to photons but not matter particles.
We can see that the first two reasons have nothing to do with relativity giving a wrong prediction; only the third one does, and that is only if it does not apply to photons. So it was a little misleading to say that relativity predictions were wrong; it is just that we can’t always rely on them to provide an accurate prediction, since there are other factors that can cause observations to disagree with expectations.
Explanations for the first reason include physical processes, such as an inordinately large amount of
gas and dust created behind the initial photon-producing material. This gas and dust would hide the light from the far side of the explosion from our view. Another possibility is that photons become directional, since we know so little about how mass waves, of which light is a part, will be affected by the large accelerations produced by supernova explosions. Rings seen around recent supernovas like 1987A may be
evidence of such directionality. And finally, the compression of space may cause strange effects that we
cannot even imagine.
When the Crab Nebula’s star exploded with such force that it hurled parts of itself away at nearly
light speed, it also compressed the substance that makes up space in all directions surrounding the star. The
immediate area around the former location of the star becomes something of a space vacuum and will cause
a reverse shockwave (one collapsing onto the remainder of the star) that will help it to be compacted into
either a neutron star or a BH when the vacuum is refilled. It is unclear at present how much stretching and
compacting of space there is and how much it affects photon orientation, but both could be significant.
Think of the light output from the explosion in terms of intensity over time. If the explosion were a
normal one like we are familiar with in chemical explosions, observers would see a flash similar to what is
seen when an old style flash bulb is used at night. It would be of an expected intensity that built quickly then
faded over time until it was no longer visible. An exploding star on the other hand, may build up a photon
intensity that is much higher than expected. This intensity makes the star appear to be much brighter than it
actually is, and for a far shorter time period. We know this is possible because a supernova can outshine its
parent galaxy. Relativity theory explains part of this over-intensity with a compression of the events due to
time dilation, but it takes other factors such as space compression to handle the remainder.
Reasons for number two—less light being expelled—include the idea that the material left after the
explosion absorbs the light. We know that a lot of heavier elements are created during a supernova
explosion; therefore the energy of conversion has to come from somewhere, and it may be that photons supply a big part of it. It could also be explained if matter ejected by a supernova were supercooled due to
some unknown effect that matter experiences when it is accelerated so rapidly. Even if these arguments are
valid, they probably couldn’t account for the whole discrepancy.
Reason three, which is that time delays for photons are different than for normal matter particles, is
my favorite of the three. It might be completely wrong, but it could answer a lot of questions about
astronomical observations. Light behaves differently from other types of particles that use mass waves;
therefore it may not follow all the same rules. Because relativity equations give accurate results except
when v = c, we don’t know what happens when that condition is being approached. If we were to try to
measure time and length contractions for photons, we might find that equations based on c^2 are not correct. Relativistic equations use quantities such as (1 - v^2/c^2)^0.5. On the other hand, UT deals with the Cc velocity
and may require a similar but less complicated equation, such as 1 - v/Cc, or possibly one that is about the same, such as (1 - (v*c)/Cc)^0.5.
Speed             Accepted SM Correction     UT Correction
299792457 m/s     8.16779153766243E-05       9.99999996664359E-01
250000000 m/s     5.51900095095560E-01       9.99999997218375E-01
25000000 m/s      9.97772218382425E-01       9.99999999721838E-01
2500000 m/s       9.99977746751275E-01       9.99999999972184E-01
25000 m/s         9.99999997774700E-01       9.99999999999722E-01
2500 m/s          9.99999999965230E-01       9.99999999999972E-01
250 m/s           9.99999999999652E-01       9.99999999999997E-01
If the 1-v/Cc equation happens to be correct then the chart above shows a comparison between what
is currently believed to be the time correction factor and what might be the actual correction. The reason for
this information being in the second half of the book instead of the first is that there is no empirical basis for
using the 1-v/Cc equation; it is just a possible answer as to why light does not show the time dilation that is
expected.
The best thing about using an equation such as 1-v/Cc is that no zero occurs in the correction factor
when v is equal to c. When the velocity is equal to that of photons, the time delay correction factor for
relativistic equations rounds off to that of line one in the above chart. This is far from the unusable answer of zero that the SM equation gives. Such a delay is less than two milliseconds per year. Is this approach right? It’s
hard to say, but it does give us an alternative that is worth exploring.
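The two candidate correction factors are easy to evaluate directly. The sketch below applies the conventional factor (1 − v²/c²)^0.5 and the speculative 1 − v/Cc to the speeds listed in the chart, taking Cc as roughly 8.99E15 m/s (an assumed value, inferred from the 667:1 ratio quoted in the gravity chapter); it is offered as a way to explore the comparison rather than as a reproduction of the printed digits.

    # Conventional versus speculative UT time-correction factors.
    # Cc is taken as ~8.99e15 m/s, inferred from the 667:1 ratio quoted in Chapter 16.
    import math

    C = 299_792_458.0
    CC = 2.0e10 * C / 667.0      # assumed value of Cc

    speeds = [299_792_457.0, 250_000_000.0, 25_000_000.0,
              2_500_000.0, 25_000.0, 2_500.0, 250.0]

    print(f"{'speed (m/s)':>15s} {'SM (1-v^2/c^2)^0.5':>22s} {'UT 1-v/Cc':>20s}")
    for v in speeds:
        sm = math.sqrt(1.0 - (v / C) ** 2)
        ut = 1.0 - v / CC
        print(f"{v:15.0f} {sm:22.15e} {ut:20.15e}")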
In all likelihood, the 199-year discrepancy is due to a combination of factors. Some
are probably those mentioned above, but some may be unknown at present. What is important is that there
are still mysteries that current theory does not answer correctly. Only new ideas, such as those offered by
ultrawave theory are capable of providing explanations on the scale needed to explain space and time
adequately.
Chapter 16
Gravity
“We can lick gravity, but sometimes the paperwork is overwhelming.” Wernher von Braun
One aspect of nature that still eludes physicists is how to apply quantum theory to gravity. Combining
electro-magnetism with quantum particle behavior proved successful, but that was the end of the road. It is a
well known fact that when you try to combine quantum theory and gravity, infinities arise and the whole
process goes haywire. Einstein spent the later part of his life trying in vain to combine them. Except for
string theory and loop quantum gravity theory, it may as well be a dead issue.
Ultrawave theory proposes that magnetism and gravity are at least as close as magnetism and
electricity, and are related in the way that they are created, which is through the circular motion of the
waves that create each type of force. A close relationship between magnetism and gravity was supported
when it was proven that some BHs have extremely large magnetic fields associated with them. From an
ultrawave perspective, it is an indication that some form of matter is still present within them. It is assumed
that matter is in orbit around the BH and that certain interactions produce the magnetism, but it makes more
sense to think of the magnetic field as part of the BH, and not separate from it.
Astronomer Tom Van Flandern, creator of the organization Meta Research, published a paper in
Physics Letters A (#250:1-11 in 1998) about the speed of propagation of gravity observed in a complex
system containing neutron stars. His conclusion was that the propagation velocity was no less than 2E10
multiplied by c. This is about 667 times faster than the Cc velocity. From an ultrawave perspective there is
no reason for a velocity higher than Cc; therefore there must be a problem with the calculations unless
gravity is instantaneous, which is doubtful. The discrepancy could be due to underestimated frame dragging
effects, additional magnetic interactions, or some other item that could have skewed the data. In any event,
his determination for the speed of propagation seems much more logical to adjust, in whatever way seems
reasonable, to the much closer numerical value of Cc rather than to a ridiculously low value of c. If the
adjustment proves successful, it will be proof of the concept of gravitational attraction due to ultrawaves.
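The ratio quoted above is simple to check numerically. The sketch below divides the 2E10·c lower bound by a Cc of about 8.99E15 m/s (the value of Cc is itself an assumption here), showing that the quoted factor of roughly 667 is arithmetically consistent.

    # Checking the quoted ratio: Van Flandern's lower bound of 2e10 * c
    # against a Cc of about 8.99e15 m/s (an assumed value for Cc).
    C = 299_792_458.0
    CC = 8.98755e15              # assumed Cc, m/s

    lower_bound = 2.0e10 * C
    print(f"lower bound / Cc = {lower_bound / CC:.1f}")   # ~667
    print(f"Cc / c           = {CC / C:.3e}")             # ~3.0e7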
The mechanism that creates magnetic forces is intricately linked to the spin radius of particle tori, as
described previously. The mechanism that creates gravity is similar, but must be related in a way that
produces a much smaller interaction. An example of a much smaller size than particle spin is the amplitude
of W2 waves. If it is of a scale that is comparable to a particle’s apparent spin frequency then it could be
less than 4E-38 m, and that is just for an electron; heavier particles would have frequencies that are much smaller. Another more pertinent example is particle volume compared to particle spin cross-sectional area. It
is area that becomes important when dealing with gravity, as you will see in the equations to come.
It is not possible to reconcile ultrawaves with the curved spacetime of general relativity in a simple
and mathematically concise way, but from a Newtonian point of view it is simple to show how ultrawaves
are suitable to the task of making gravity. Newton already placed a value on the gravitational attraction of
objects, but what is needed is to redefine this value based on how W2o waves interact with the W2 waves
inside particles. The work done to produce Newton’s gravitational constant G was partially responsible for
determining the current value of Cc.
Quantization of gravity is possible from a particle standpoint if the assumption is made that the units
that determine gravity have been misapplied in a similar manner to how Einstein applied a velocity squared
to particle energy. All of the same components that were used to define particles can then be used to define
G, but with different units than the accepted ones of m^3/(kg*s^2). The equation that determines G can be
determined with particle components arranged in a form that has canceling units:
G = ½·fx·rx·c/(8·AT)
Where the factor fx (frequency) times rx (spin radius) is the mass constant for all spin-1/2 matter particles,
which is equal to 1.180149505588E-51 meter-seconds, c is the speed of light, and AT is the surface area of
spin-1/2 matter, which is also constant at 6.626E-34 square meters.
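A quick numeric check of the quoted constants is sketched below. Note that, with the stated values, fx·rx·c/(8·AT) comes out to about 6.67E-11, matching the accepted value of G, while including the leading factor of ½ as printed gives half of that; presumably the ½ pairs with a 4 rather than an 8 in the denominator, though that reading is an assumption.

    # Numeric check of the constants quoted for the G relation.
    # With these values, fx*rx*c / (8*AT) reproduces the accepted G;
    # keeping the printed leading 1/2 as well would give half that value,
    # so the 1/2 is assumed here to pair with a 4 in the denominator.
    FX_RX = 1.180149505588e-51   # mass constant fx*rx, meter-seconds
    C = 299_792_458.0            # speed of light, m/s
    AT = 6.626e-34               # spin-1/2 surface area, square meters
    G_ACCEPTED = 6.674e-11       # accepted value, m^3/(kg*s^2)

    g_calc = FX_RX * C / (8.0 * AT)
    print(f"fx*rx*c/(8*AT)      = {g_calc:.4e}")
    print(f"with the extra 1/2  = {0.5 * g_calc:.4e}")
    print(f"accepted G          = {G_ACCEPTED:.4e}")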
In the last chapter, the ideas presented treated space in such a way that it can be compared to a
simple object like a container full of fluid. Therefore, gravitational fields can be compared to a drain in a
sink where an object such as a planet resides. As space gets “sucked in” it produces a “pull” on objects that
is then interpreted as a gravitational acceleration. This may be a simplistic view, but it is close enough in its
description of gravity’s behavior that it is a good place to start in trying to understand this mysterious force.
A theory that may figure into the ultrawave concept of gravity is called “Loop Quantum Gravity”
(LQG). It is somewhat akin to string theory but has ties to Einstein’s general relativity, and its intent is to
quantize gravity. This theory is likely to have helpful contributions for understanding how ultrawaves
produce the effects of gravity, and how the evolution of the cosmos will proceed.
Although individual ultrawave properties are unknown at present, their interaction characteristics
can still be interpreted physically. Ultrawave ideas can be applied to one of Galileo’s simple ideas about
gravitation that Newton later quantified, namely that all objects fall at the same rate under gravity, regardless of
the type of matter they contain. This was the main reason Einstein was able to assume that the curvature of
space was responsible, as it treats all matter as equal. From the UT perspective, Newton’s idea is based on
the assumption that all W waves of the same type are identical. We can infer this because all electrons are
identical, as are all protons, etc., as well as all energy particles of equal frequency. Since identical wave
propagation produces identical particles, it makes sense to think that their gravitational input is the same.
Although there are many atomic elements, ultrawaves dictate that they all be created with the same type of
W waves and in that respect are identical to each other as far as gravitational attraction is concerned.
Einstein’s proposal that space is curved by the mass present within it implies two things. The first is
that space is somehow a medium. There is nothing wrong with that idea; it is just that he did not attempt to
explain what a spacetime medium is, or to define its properties, other than its relationship to gravity. The
second thing is that mass is an inherent container for gravitational force, meaning that all matter has an
inherent mechanism for producing gravity. John Archibald Wheeler summarized it like this: “Space tells matter how to move, and
matter tells space how to curve.” An alternative explanation to relativity that is more like Newton’s ideas,
but still fulfills Einstein’s implications, is that some form of W2 wave is responsible for gravitational
effects. This is much simpler to explain than relativity theory in that it resembles the mechanism that creates
particles. Basically, gravity is created almost like magnetism but on a much larger and hence weaker scale.
Gravitational force increases as objects approach each other for the simple reason that there are more
waves to interact. Large numbers of W2 waves that are in some manner associated with equal numbers of
matter particles can interact with all particles that they encounter in their circular travel radius, or, if the curvature is large, a nearly linear motion path, whichever one proves to be the correct mode of operation. More
waves can create a larger number of tugs on any individual particle; therefore, greater masses produce larger
gravitational pulls. Gravity is weak at large distances and strengthens as the distance decreases, exactly like
magnetism.
A question we have to ask ourselves is: why doesn’t gravity have north and south poles the same as magnetism? It may simply be that gravity is attractive and not repulsive in its behavior. To accomplish this feat merely requires that the direction of the wave spin be such that it creates a corkscrew effect that pulls toward the particle from which it emanates. It could be the action of a single wave or the combined effect of
multiple waves. Whatever the basic cause for attraction, it must be that merely being generated by matter is
the reason for the attractive force. We know from the equation on the previous page that it has to do with the
compression of spacetime area, because of the use of AT in the denominator.
Figure 16.1 shows a cross-section of a few waves between two objects that could be used to
represent a gravitational field. The field orientation shown in the figure is similar to a pair of magnets.
Regardless of the orientation of the object, its wave propagation looks the same in all directions, even though
only a small portion of the waves are shown, which makes it appear as if they are oriented in a particular
way. An implication of the arcing waves in Figure 16.1 is that a sidewise pull is needed to bring the two
objects together. There would be no cancellation of the forces if the pull is in the direction of curvature
regardless of the direction of motion, i.e. clockwise or counterclockwise, when equal numbers of waves are
expelled from both poles as shown. The force is merely the result of having a higher number of waves in the
direction of the object producing the waves. Omnidirectional wave propagation allows gravity to operate in
all directions, while also making it attractive and not repulsive in its nature.
Figure 16.2 shows the same two objects but with the wave motion being more of a straightforward,
line-of-sight interaction. This secondary proposal is just as easily imagined as being suited to providing a
force of the correct value, while scaling up and down with the compression or expansion of the masses
involved. Instead of north and south poles like a magnet, matter would present an external north and an
internal south pole. All gravity waves are randomly oriented so that each direction picked would show the
same repeating pattern, and since it is a single external pole it only presents an attractive force. The pull
could be either along the waves (stronger pull), or across the waves (weaker pull). Common sense tells us
that a linear configuration is more reasonable, since that concept is what we are most familiar with and can
visualize best.
Figure 16.1
Figure 16.2
As indicated in Figures 16.1 and 16.2, ultrawaves would create gravity by locally filling space with
some type of W2 waves that are rushing together with 2D branes to form matter, and with a concentration
proportional to the density of the matter. This also implies that there are waves that are projecting from the
matter that allows the process to repeat. In the projecting form they must not interact with matter; it is only
during the influx of the waves that they interact with other matter particles. In essence, this process can be
compared to 8-track tapes that have a continuous loop of tape that spools around from outside to inside.
There is another way to look at what happens with the wave/brane influx. If the branes are what are
trapped when forming particulate matter, and the waves somehow change how they present themselves, say
by changing from a gravity wave to a magnetic or electrostatic wave, then that would explain why gravity is
only attractive in its behavior. From such a perspective, gravity in the form of strings and branes is turned
into matter and energy and some of the mass/energy transmits outward in the form of photons or other
material to be absorbed by some other mass collection somewhere else. It is like a never ending cycle, at
least until so much matter collects that nothing can escape, which is what a BH is supposed to do. The
problem with that view is that nothing gets out that will then loop back in again. We know that magnetic
fields exist near BHs, so it would appear that ultrawaves get out in 2D form. However you choose to look
at it when considering ultrawaves, the idea that all matter and energy end up in another universe or translate
across spacetime through wormholes is rather ludicrous. Ultrawaves can provide a picture of gravity that is
physical in origin just like it does with matter and energy.
The idea presented in Figure 16.2 opens up a slim possibility of antigravity if the force is generated
along the wave. It requires that identical antimatter be responsible for a repulsive force to its matter
opposite, such that the spin pushes instead of pulls. Experimentally speaking, at least under the ultrawave
theory paradigm, antigravity should be easy to prove or disprove. All that it takes is a spherical container
with a large void and no electrostatic or magnetic fields present, and located in a microgravity environment.
Fill the container with a loose distribution of antimatter. If antigravity is at work, the antimatter should
collect in the center of the void area rather than on the inside surface of the sphere. The experiment is easily
defined, but would be extremely difficult to accomplish. At present, there are too many problems in getting enough antimatter into such a sphere and then releasing it from confinement to list them all here, not to mention getting it all into space, where the experiment must be performed. Unless antiparticles can be
shown to have antigravitational properties, then the idea of antigravity can probably be discounted.
If the question of antimatter were to be answered, and it were shown that antimatter did have opposite gravity as well as opposite charge, then it would answer a longstanding puzzle in astrophysics. The puzzle is: where is all of the antimatter located? Rather than annihilating with a small amount of normal matter
remaining as currently believed, it could well be that the antimatter collected into its own galaxies full of
antimatter planets. Externally they would appear identical to our own, but internally all of the spins would
be opposite to those in our galaxy. This sounds like a good proposition, but it is highly unlikely that
antimatter will behave differently, since our galaxy is already composed of oppositely spinning matter as
proven in Chapter 5.
The distance gravitational waves travel from a parent object is dependent on the size and/or density
of the parent matter. Loosely accreted gas and dust typical of a nebula has a large volume but insubstantial
density. On the other hand, neutron stars have small volumes but very high densities. As long as both
configurations contain the same mass, they will produce equal amounts of gravity, but the behavior of that
gravity will appear quite different depending on which one you are observing.
The overall behavior of gravity generated by ultrawaves separates it from how Einstein defined
gravity, and the difference can be described with the following example. Imagine a small galaxy shaped
like a rotating pancake that has one hundred large stars, one thousand medium-sized stars with a combined
mass equal to that of the large stars, and one million small stars with a combined mass equal to either of the
other two groups. The stars are all evenly distributed in space with orbits that keep
them stable and fairly close together. As a spaceship leaves the galaxy on a path that is perpendicular to the
pancake shape, instruments on board initially show that the gravity weakens based on the inverse square
law. At some point, however, the small stars will no longer exert any gravitational pull because their range
of influence has been exceeded, and the gravitational pull experienced will quickly be reduced by an easily
noticeable amount. As the spaceship continues to travel, the range limit of the medium-sized stars will
eventually be reached, and the remaining gravitational force will be reduced again. At some point, even the
largest stars will have no influence on the ship’s progress. If there is no other influence of mass around, and
even if there is no fuel left to propel the spaceship, it still continues at its present speed with no deceleration.
This is the prediction that ultrawave theory professes based on the circular motion of ultrawaves.
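As a rough numerical sketch of this thought experiment, the short Python listing below applies ordinary inverse-square gravity but drops each star population's contribution once the ship passes that population's range of influence. The star masses and the cutoff ranges are illustrative placeholders only; UT does not prescribe these particular values.

    # Departing-spaceship thought experiment: 1/r^2 gravity, except that each
    # star population stops contributing beyond a hypothetical range limit.
    # All masses and ranges are illustrative placeholders, not UT predictions.
    G = 6.67428e-11          # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30         # solar mass, kg
    LY = 9.461e15            # one light year, m

    # (number of stars, mass per star in solar masses, assumed reach in meters)
    # Each population carries the same combined mass, as in the example above.
    populations = [
        (1_000_000, 0.01,  10 * LY),     # small stars, shortest reach
        (1_000,     10.0,  100 * LY),    # medium stars
        (100,       100.0, 1000 * LY),   # large stars, longest reach
    ]

    def accel_toward_galaxy(r):
        """Net pull on the ship at distance r, treating each population
        as a point mass at the galactic center."""
        a = 0.0
        for count, m_star, reach in populations:
            if r <= reach:   # beyond its reach, a population exerts no pull at all
                a += G * count * m_star * M_SUN / r**2
        return a

    for r_ly in (1, 5, 50, 500, 5000):
        print(f"{r_ly:>5} ly out: a = {accel_toward_galaxy(r_ly * LY):.3e} m/s^2")

Each time a range limit is crossed, the printed acceleration drops by a step rather than falling smoothly, which is the signature the instruments on the departing ship would record.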
Recent astronomical evidence suggests that the expansion rate of the universe may be increasing
instead of decreasing. If this is truly the case, then our universe is in big trouble, for it will most likely
expand exponentially near the end of its life. As was explained above, a limit to gravitational influence may
be the actual explanation for this apparent speedup of galaxies. In addition to the weakening of gravity
based on distance, there is also the cutoff point to gravitational influence due to mass densities.
Astronomical observations may better be explained by the limited reach of gravity that is not able to hold
distant galaxies together, no longer slowing their relative velocities, and hence the expansion of the
Universe.
One consequence of the gravity limitation conjecture is that groups of masses that are isolated
gravitationally from other groups of masses will tend to migrate together more rapidly than would normally
be expected. Our local group of galaxies appears to be approaching an unseen mass that has been called the
Great Attractor. It is most likely a super massive BH that has consumed so much matter in its locale that
few stars are present to reveal its location. Our local group is being drawn to this BH with little or no
resistance supplied by any other galaxy groups simply by being separated from them by such large
distances.
Getting back to a solar-system scale, let's look at what happens when a large collection of atoms gets
together, such as in a star several times more massive than our sun. Dispensing with the usual details of
how a star dies, let's look at just the result of the death of such a star: a neutron star.
The lower limit of mass needed to make a neutron star is about 1.2 solar masses. To create a neutron star, all
of the atoms are compressed into a space just a few miles in diameter. Concentrating particles is
truly the same as concentrating gravity. So far, no one knows if there is an upper limit to the size of a
neutron star, but it is suspected to be about ten solar masses.
What happens to matter particles under the conditions present in a neutron star according to UT?
Simply put, the gravitational pressure created by the higher mass density prevents the particles or atoms
from keeping their spherical shape. Eventually, the flattening of the sphere into an oblate shape forces the
charge wave to tilt towards the torus. At some point the charge wave completely switches to traveling along
or within the torus, which turns it into a type of neutral particle. So, the formation of a neutron star indicates
a pressure limit has been reached for a volume of matter near the core of a star that converts all particles of
matter to neutral particles—probably neutrons, as they are the smallest neutral particle—and remains
crushed under its own gravity. The neutron star is just that part that has passed the compression limit; the
remainder of the material is ejected. The reason neutron stars have such strong gravitational pulls is that the
compressed layers allow more waves to interact over shorter distances. Once everything has been crushed
down to these limits it cannot recover easily. Some material at the surface of the neutron star can still escape
by turning back into particles that include electrons, which then turn into photons.
If we were to continue adding matter to a neutron star the pressure would continue to grow until at
some point another change would take place. Once the forces on the matter particles become as great as the
strong nuclear force, the integrity of the particle tori will be overcome. The particle tori can no longer
support the pressure and the tori will become interlocking. They will continue to be compressed until all the
tori are interlocked. If more material is added, it will continue to compress everything spatially until a point
is reached that exceeds the ability of the W2 waves to create tori and the complete dissociation of all
particle tori occurs. It is my belief that the W2 waves begin to form a single torus and rapidly pull in all the
remaining tori segments. The result is one large torus with all the available mass now contained within it.
Current theory proposes that a limit of about ten solar masses is needed to produce a BH. As far as is
known, there is no upper limit to the size of a BH. The only thing that makes BHs black is the gravitational
pull, which must be great enough to produce an event horizon that is outside the radius of the actual matter
creating the gravitation. The event horizon is the point at which light cannot escape the gravitational pull.
Because no one suspected that it was possible to compress matter to an intermediate stage beyond the
supposed neutron state, only BHs are expected to be denser than neutron stars. If we designate the
secondary interlocking neutral particle state referred to above as still constituting a neutron star, whether it
is black or not, then there may not be any true single torus BHs with masses less than about one hundred
solar masses. This could be the reason for the dearth of BHs between galactic sized ones and those of about
one hundred or so solar masses.
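For reference, the conventional relation that fixes where the event horizon sits for a given mass is the Schwarzschild radius, r_s = 2GM/c^2. The short sketch below evaluates it for the mass scales mentioned in this chapter; it is the standard SM formula, quoted for comparison, not a result of UT.

    # Schwarzschild radius r_s = 2*G*M/c^2 for the mass scales discussed above.
    # This is the conventional relativity formula, shown only for comparison.
    G = 6.67428e-11        # m^3 kg^-1 s^-2 (value from the appendix table)
    C = 2.99792458e8       # m/s
    M_SUN = 1.989e30       # kg

    def schwarzschild_radius_km(m_solar):
        return 2 * G * (m_solar * M_SUN) / C**2 / 1000.0

    # neutron-star lower limit, a stellar BH, the ~100 solar mass boundary,
    # and a galactic-center-scale mass
    for m in (1.2, 10, 100, 4.0e6):
        print(f"{m:>10} solar masses -> r_s = {schwarzschild_radius_km(m):,.1f} km")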
With the limited knowledge of how matter is created, it has been the norm for misconceptions about
BHs to arise, such as the idea that there was a singularity at the core. Since matter is created from particles
that have a torus shape and are composed almost entirely of empty space, it is not difficult to imagine that
many of them can be packed into a very small volume. What we have learned from the previous paragraph
is that all of the stellar-class BHs are probably just bigger, more tightly packed neutron stars. And, at the
other end of the spectrum, the super massive BHs found at galactic centers are each probably made up of a
single torus of W2 waves. The belief that a singularity exists in the core of a BH is unnecessary. Matter has
a natural shape that allows compaction that is independent of the size and shape of normal matter particles.
The behavior of large BHs is almost like that of particles, but not necessarily comparable to scaled-up
spin-1/2 particles. Instead of keeping the surface area equal to the AT (formerly Planck’s) constant, there is
probably a balance of diameters for each of the torus’s two controlling diameters that permits the most mass
to be compressed into the least amount of space. Even the tremendous speed of the W2 wave is limited in
what it can keep stable without some adjustment of the torus shape to keep the overall diameter small. Is
there a secondary spin associated with this large torus just like the magnetic moment anomaly of particles?
It is not possible to determine this until a physical size and shape can be determined for a specific BH. A
measurement of mass versus size will tell us how much secondary spin there can be. Another way to look at
it is that a size/mass ratio will provide an AT constant for a particular mass BH. With several such
measurements across a broad range, it should be possible to determine if AT is truly a constant or if it is a
progressive value, or if BHs even resemble particles in this respect.
The existence of a large magnetic field around a BH indicates behavior like that of a collection of
particles, and could explain much of the production of the material jets issuing from the poles of a neutron
star or BH. The simple fact that supermassive BHs show features such as large jets of material and the presence
of magnetic fields lends credibility to the idea that matter still exists within a BH. If time dilation occurred
at the event horizon as the current SM equations suggest, then as soon as a singularity formed time would
stop completely. That obviously doesn’t happen or we would see a shell of material where the event horizon
is located. Something else that doesn't make sense is the rotation of the supposed singularity. Rotation
should only apply to physical objects, but it still seems to be a part of any BH's existence. These and other
unusual aspects of BHs suggest that matter in some form or another exists within them, regardless of what
Einstein and the SM suggest.
If it turns out that BH’s are actually single tori just like particles then their behavior may make more
sense. Matter is sucked into the BH around its mid-section, or in the plane of its rotation similar to how
particles have magnetic moments that form lines of force about the cross sections. It is the rotation direction
of the cross-section that generates either an attraction or repulsion depending on the matter/anti-matter
status of the particles being attracted or repelled. The electrostatic force along the axis of the torus will
attract like-spinning matter and repel oppositely spinning matter. Depending on the axial rotation of the
matter, it will be accelerated toward one or the other end of the BH's axis; therefore, each type of matter
will be ejected from one end of the axis. Of course, some matter will be absorbed by the BH.
The forces near a BH are so great that most of the fastest moving matter will be ripped to shreds and
ejected as photons of various frequencies. Cosmic rays may well be protons that have been converted into
photons, which then turn back into protons when they interact with Earth's atmosphere. First they are
accelerated to near light speed, which makes them turn into photons. When they turn back into protons,
they have all of that momentum still available. Cosmic rays can pack quite a wallop.
Quasars, which act like the opposite of BHs in that they create intense expulsions of light and matter, seem
to have occurred in the early universe when matter was denser. It is suspected that they are indicators of the
behavior of the early BH’s that now reside rather quiescently at the center of galaxies. The expulsions may
well be an indication that our view is down through one of the poles of the BH where the matter and energy
are being ejected. If these assumptions are true, BHs are directly showing both magnetic and electrostatic
behaviors, as well as just the expected gravitational attraction. When considering such behaviors, it seems
that there are indeed similarities between normal matter particles and BH’s.
A more general misunderstanding about BHs, common among the uninformed, is the
idea that something strange occurs at the event horizon. The event horizon is the distance from the center of
a concentrated mass where even light cannot escape the gravitational pull, and must orbit around an object,
hence making the BH perfectly black. Nothing else unusual happens there, and if the BH is large enough, or
if the mass is spread out enough then it wouldn’t even be noticed by a passing spaceship; it would just look
like an absence of light in the foreground. If an astronaut were to cross the event horizon, he or she would
not feel anything more unusual after crossing it than before crossing it. Unfortunately, most BHs have very
concentrated mass; therefore the tidal forces would be so great that a spaceship and astronauts would be
decomposed into individual atoms long before reaching the event horizon. Only the most basic particles
would be left by the time the event horizon was encountered. This feature alone would make most BHs
unwelcome places to visit.
If W2 waves are responsible for gravity, then their great speed and high frequency make it possible
for matter particles approaching each other to reach high velocities. Our new understanding about
velocity and matter makes it unclear if the light speed limit truly exists near objects like BHs. It is the
relative motion between material objects through space that determines their velocities. Imagine there was a
non-rotating BH, and all matter in its vicinity was moving toward it. If W waves are responsible for gravity
then wouldn’t it be possible for matter to reach a velocity relative to the BH itself, or any external matter
that was stationary relative to the BH, at least equal to light speed? Does it matter inside an event horizon
how fast it is going as long as it is less than light speed relative to other nearby matter? And what about
matter on the opposite side of the BH, wouldn’t the combination of the two velocities be greater than light
speed? It depends on whether you view space as fixed, meaning that velocities are dependent on it, or if you
believe that space flows along with matter and energy. It makes one wonder if we truly know much at all
about what happens under these extreme conditions.
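For comparison, the rule that standard relativity applies when two sub-light velocities are combined, the rule being questioned here, never produces a result above c. The one-line check below states the SM position only; it is not a UT calculation.

    # Standard special-relativity composition of two parallel velocities.
    C = 2.99792458e8   # m/s

    def compose(u, v):
        return (u + v) / (1 + u * v / C**2)

    u = v = 0.9 * C
    print(compose(u, v) / C)   # about 0.9945, still below the speed of light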
Relativity shows that truly odd behaviors exist, such as time dilation. However, do the nonsensical
things that are supposed to happen at the extremes actually occur? Humans have never observed such
occurrences. UT provides a way out by showing that the c velocity is not the true limit, thereby supplying a
kind of head room that relieves the awkwardness of the expectations at the extremes of the calculations. It is
not necessary to think that it takes an infinite amount of energy to accelerate matter to light speed; as long as
the contraction of space is proportional, it can be achieved. Time does not have to stop when light speed is
attained; either the mass disintegrates or some other compensating factor is involved. As a system, the total
actual time can be defined even when the relative times become meaningless.
Gravity, or its inertial equivalent, can be considered as just a measure of the resistance to movement
of bound waves through a field of free waves and branes. The existence of free waves and branes
independent of material objects is already implied by the idea that inertia is equivalent to gravity. Only by
accepting the idea that matter has to move through something, in what is now considered empty space,
can one make sense of how acceleration substitutes for a collection of matter. If there were a section of
space that was devoid of influence by matter and energy, would acceleration still be possible? Yes, because
space is made of strings and branes and has just as much of an effect on moving objects as does any
gravitational field of any planet or star. So why does matter not quickly decelerate, but instead only slows
down or changes direction in response to other matter in its vicinity? If anyone could answer that question
authoritatively then there would be no more unknowns about the structure of space. It is probably a time
related issue, where changes in 3D interactions with respect to time outside of the local gravity field are
what result in acceleration or deceleration.
It is possible to test gravity with regards to matter’s composition of wave types, assuming of course
that gravity turns out to be a wave interaction. An important characteristic that differentiates between
ultrawave types is the motion, which is linear for W3 waves and circular for W4 waves. W3 waves may
have fewer interaction possibilities with gravitational waves in their propagation direction, making the
gravimetric component smaller in one direction than another. This should be observable in that neutrinos and
light would interact with gravity differently in directions perpendicular as opposed to parallel to their
direction of propagation. Also, since magnetism is generated by W2s waves, any magnetic orientation of a
material might show similar directional tendencies with regards to gravitational attraction. It is hard to
imagine that magnetized matter has never been tested against gravitational pull, so the very fact that no
information exists leads one to the belief that matter’s orientation has no effect on its gravitational
component, but unless it can be shown that it has been tested then it is still a possibility. If none of these
possibilities turn out to be true, gravity is then limited to being purely a contraction of the brane material
that comprises the Universe, which also confines its behavior to being only an attractive force.
Gravity behaves somewhat differently on the scale of galaxies, since a lot of the mass is
concentrated in a BH at the galaxy’s center. Discoveries made in late 2005 show that more stars in the
outskirts of galaxies follow the rotational pattern of their peers in the central region, effectively tripling the
size of each galaxy's apparent mass content. Dark matter is not needed to fill this calculational void.
Once the Cc or Cc/2pi propagation rate of gravity waves is taken into account, it can be shown
that this is a logical inevitability. Frame dragging on the scale of BH matter densities is probably much
higher than expected. It seems clear that the gravitational force exerted in the plane of a rotating galaxy has
a much greater effect on the stars in the galaxy than it does in the direction perpendicular to the plane. It is
not that the gravity is greater; it is just that the spin of the BH at its center, as well as the additional mass of
the stars that make up its disc, force a velocity around the center that is higher than expected. The magnet-like behavior of the BH forces the alignment of W waves into patterns that allow gravity to produce this
behavior. It is also likely that the size of the BH at the center of each galaxy determines the size of that
galaxy.
Most of the visible matter, which we see as galaxies, puts such a low density value on the universe
that it seems most of the mass must be missing. UT supports the position that rotating BHs pull matter
along in a radial direction and give the impression that some of the mass is missing. Combining this idea
with the idea that photons and neutrinos contain mass, which gives them a gravimetric component, negates
the question of whether or not a galaxy has missing mass. The number of neutrinos produced every second
within a galaxy is sufficient to provide a large and spread out mass component that can account for the odd
motions of the stars within the galaxy.
The very fact that W waves create the universe and dictate its composition forces the ratio of the
universe's mass-energy density to the critical density, which is called omega, to be close to one. The actual
volume versus matter versus expansion rate becomes a non-issue when seen from the perspective that all W
waves have mass and that gravity is not constant over all distances. The best assumption about the evolution
of the universe that can be made at this time is that space was more highly curved during the Big Bang era
and omega was close to a value of one. As more and more matter becomes separated beyond the distances
over which gravity has an effect, the value drops below one and the universe becomes open. Time will prove
whether this proposal is true, but if
gravity is shown to be inconstant then it has to be true.
Chapter 17
Other Mysteries
“You should never bet against anything in science at odds of more than about 10^12 to
1.” Ernest Rutherford
If you are still having problems visualizing how ultrawaves work, try this simple viewing aid. Look at the
video screen of any good quality computer monitor as it produces a two-dimensional image. It is hard to tell
that the image is not a real object, even if only two-dimensional; especially if you had never seen a good flat
screen monitor before. The W waves produce a similar effect, but they do it in three dimensions using two
two-dimensional ultrawaves oriented at right angles to one another. Because the speed of the W2 wave is so
great, it makes the solidity of matter appear much greater than is the reality, just as the speed of electrons
makes a video image seem real. Just because you can’t see or feel these 2D waves doesn’t mean they aren’t
there. Vibrations produced by sound waves, for instance, can be felt all through the body, even if they can’t
be perceived visually. It isn’t too difficult to imagine that W2o waves travel through space undetected when
alone, but create easily detected objects when combined into three dimensional constructs.
Just as two objects cannot occupy the same computer monitor location, no two material objects can
occupy the same space. For matter particles, the reason is due to pairs of 2D waves forming a type of
miniature universe. They essentially isolate themselves by sealing off a section of space-time. The total
effect is that they produce a shield, preventing like particles from approaching each other too closely and
hence occupying the same space-time location. The idea that photons are an exception to this is most likely
wrong. The configuration of photons makes them different enough from their particle parents to allow them
to bunch more tightly together and only appear as if they are in the same location at the same instant in time.
Light is not as different from matter as you have always been told.
The Casimir effect, a somewhat mysterious force predicted by Hendrik Casimir in 1948, was
successfully demonstrated to within 15% of its predicted value. It is based on the idea that uncharged metal
plates placed very close together in a vacuum will show an attraction to each other. The explanation is
complicated and involves the idea of virtual particles that supposedly exist in all areas of space. To simplify
it as much as possible, since every type of particle has a frequency, each one requires a particular amount of
space to achieve existence. Shortening the space between the plates to a value that is less than that required
for these particles to exist means that there will be very few between the plates. Virtual particles are
supposed to exert force when they appear, through whatever motion they might possess. In essence, it
would work just like having low pressure between the plates and high pressure outside the plates, which
would appear the same as a force pulling on the plates.
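For comparison, the prediction Casimir derived in 1948 for ideal, perfectly conducting parallel plates is an attractive pressure P = pi^2*hbar*c/(240*d^4), where d is the plate separation. The sketch below simply evaluates that standard formula; it is the number the 15%-level measurements are judged against, not a UT quantity.

    # Standard Casimir pressure between ideal parallel plates:
    # P = pi^2 * hbar * c / (240 * d^4)
    import math

    HBAR = 1.054571628e-34   # J*s (Planck constant from the appendix divided by 2*pi)
    C = 2.99792458e8         # m/s

    def casimir_pressure(d):
        """Attractive pressure in pascals for plate separation d in meters."""
        return math.pi**2 * HBAR * C / (240 * d**4)

    for d_nm in (100, 500, 1000):
        print(f"d = {d_nm:>4} nm : P = {casimir_pressure(d_nm * 1e-9):.3e} Pa")

The steep 1/d^4 dependence is why the force only becomes measurable when the plates are brought within a micron or so of each other.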
Casimir’s explanation would be correct if virtual particles existed to the all-pervasive extent that
current physics demands, but they don’t. In my experience something that is 15% from exact is not very
close, so another explanation for this effect might fit better with an ultrawave idea. Ultrawave principles
imply that stray W2s waves are present around all atoms and a force could be generated that would pull on
the plates when the two are brought into close proximity. W2s waves from one plate would extend into the
material of the other plate, generating a small electrostatic force. It is simple in theory, if not necessarily
easy in practice, to check which of these ideas best fits the measurements. If Casimir’s idea is correct, the
material and thickness of the plates should have no effect on the intensity of the force. On the other hand,
UT’s explanation would be sensitive to both the density of the material used, as well as its thickness. Any
results from tests that may or may not have been done yet can be used to decide between the competing
theories of the SM and UT.
Working with excited atoms in 1932, Enrico Fermi calculated that one atom decaying from its
excited state would release a photon that would then cause an identical atom to become excited. He
calculated that the transfer would proceed at the speed of light between the two atoms, but Gerhard
Hegerfeldt later found that the transfer was not limited to light speed and instead appeared instantaneous.
This may seem mysterious to Standard Model
physicists, but using the W2 wave speed it comes as no surprise.
As stated in earlier chapters, Heisenberg’s uncertainty, light’s behavior and other things seem to be
mysterious until random jumps at light speed and waves traveling at 8.936E+16 meters per second are
introduced to explain that these behaviors are not actually mysterious. All of these behaviors are expected
effects of the motion of the W waves and are completely natural. Some of the strange ideas that have
developed over the decades since particles were discovered can now be more easily understood as a side
effect of having waves that travel at an inordinately high velocity. Quantum entanglement, for example, is
much more easily believable if an unseen string can attach to separated particles and affect both at an
interaction speed that is not instantaneous, but may seem so at the limits of human measurement capability.
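As a rough illustration of why a Cc-speed interaction would look instantaneous in practice, the few lines below compare the crossing time at light speed with the crossing time at the Cc value listed in the appendix, over separations typical of entanglement experiments. The baselines chosen are illustrative.

    # Crossing times at light speed versus at Cc over laboratory-scale baselines.
    C  = 2.99792458e8        # m/s
    CC = 8.935915519742e16   # m/s, the primary unit of velocity from Table 1

    for d_km in (0.001, 10, 100):   # a lab bench, a fiber link, a long free-space link
        d = d_km * 1000.0
        print(f"{d_km:>7} km : light {d / C:.2e} s, Cc {d / CC:.2e} s")

Even at 100 km the Cc crossing time is about a picosecond, far below the timing resolution of any current coincidence measurement.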
Pioneer 10 and 11 are spacecraft that were launched in 1972 and 1973 respectively, and are now
leaving the solar system. There were trajectory issues that appeared during their outward treks that could not
be explained. Their measured accelerations differed from the predictions by about a nanometer per second
per second. While this is a tiny value, it is large enough that their positions are now roughly a half million
kilometers off the expected course. This issue is likely to be an offshoot of the inconstant gravity issue dealt
with in Chapter 16. The deceleration due to the sun's gravity differs from what SM physics expects, but is
understandable with UT compensation. It also may explain the Kuiper cliff, which is the point beyond which
there no longer appear to be any space rocks orbiting the sun. It was assumed that there must be a large
planet out there that swept all of this material away, but if none is ever found, that only leaves inconstant
gravity as a viable explanation.
[An explanation was recently found for the Pioneer 10 and 11 spacecraft anomalies that had to do
with accelerations due to photon expulsion. This is actually a good example of how expelling electron mass
in photon form produces an equal and opposite momentum reaction, just as any massive particle that was
not converted to photons would do.]
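A quick back-of-the-envelope check shows that an unmodelled acceleration of about a nanometer per second squared really does add up to a displacement of a few hundred thousand kilometers over roughly three decades of flight; the 8.74E-10 m/s^2 figure is the published magnitude of the anomaly, and thirty years is used only as a round number.

    # Displacement accumulated by a small constant acceleration: d = a*t^2/2.
    a = 8.74e-10                    # m/s^2, published Pioneer-anomaly magnitude
    t = 30 * 365.25 * 24 * 3600     # thirty years, in seconds
    d = 0.5 * a * t**2
    print(f"{d / 1000:.0f} km")     # roughly 4e5 km, i.e. about half a million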
Cold fusion is the purported reproduction at room temperatures of the energy producing process that
powers stars. Martin Fleischmann and Stanley Pons called a press conference in 1989 to announce that they
were successful in producing cold fusion. Unfortunately, no one was ever able to reproduce their results.
Even if someone had reproduced the results, it would likely have proven that it was merely a highly
efficient chemical process. It takes lots of energy to induce the fusion of any nuclei and it is not likely to be
found in a table top device. Cold fusion cannot be ruled out entirely, simply for the reason that particles and
atoms of different masses can be in close contact, which can create a myriad of force transfers, especially if
there are radioactive elements involved.
There has been a renewed interest in cold fusion, since it was announced that a process using
ultrasonic sound waves might create the proper environment to allow fusion. While it might be possible to
actually get enough energy into a small enough space to create fusion, it isn’t going to allow production of
fusion energy on a scale that will be useful to humanity. This doesn’t mean we should abandon all research
in this area. Such research could lead to chemical processes that give us a means to create clean and cheap
energy along the lines of the proposed hydrogen economy that is supposed to wean us from oil and gas.
Hot fusion, on the other hand, is more promising as a major energy-producing technology. One approach
uses lasers to focus photon energy onto a small sphere of matter that is conducive to fusion. Laser-induced
hot fusion has the necessary energy scale to produce nuclear fusion, but has been tough to implement. It is
doubtful that this will become a practical source of energy in the immediate future, but if enough progress
is made it could be a source of clean power before the end of the 21st century.
The Golden Ratio is based on successive pairs of numbers in the Fibonacci sequence, which is a list
of numbers where the next number is the sum of the preceding two numbers. The numbers are 1, 1, 2, 3, 5,
8, 13, 21, etc. to infinity. The ratio of successive pairs of these numbers is called the golden section or GS,
and converges to approximately 1.618033989. A related formula is 1/GS = GS - 1, or equivalently
GS^2 = GS + 1. What seems amazing is that this number finds itself embedded in many
aspects of nature, such as flowers, leaves, plants, animal shells, etc. Why does this number pattern appear so
often in nature? No one knows.
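A few lines of Python make both the convergence and the reciprocal relation easy to verify:

    # Ratios of successive Fibonacci numbers converge on the golden section GS,
    # which satisfies 1/GS = GS - 1 (equivalently GS^2 = GS + 1).
    a, b = 1, 1
    for _ in range(40):
        a, b = b, a + b
    gs = b / a
    print(gs)               # 1.6180339887...
    print(1 / gs, gs - 1)   # both print 0.6180339887...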
UT does not actually have a simple answer to this question either, but this subject was included for
two reasons. First, it is an example of one of nature’s fundamental features that mathematics highlights in a
way that is different from our tactile perceptions. Second, it is one of many numbers, such as pi, that show
not only the underlying quantum nature of our existence, but also the way life is ordered into various
patterns that fit mathematically odd units.
Plant and animal DNA has a double-helix shape with repeating sequences of pairs of four different
nucleotides: A, C, G, and T. It looks sort of like a twisted ladder, with the rungs made of paired
nucleotides. The sides of the ladder are backbones of alternating sugar and phosphate molecules. A
protein-coding set of nucleotides can run to upwards of a million base pairs for the largest genes. The
interesting thing is that, with the right set of raw atoms and molecules, a DNA strand can electrostatically
attract this raw material together until it creates a replica of itself. Life is purely the product of sets of
molecules that we call genes being able to replicate themselves.
All of these mathematical characteristics of nature carry through from the simplest particles of
matter made from string-waves up to the very complex structure of animals that have the ability to make
chambered shells that follow nearly perfect spiral shapes. Normally, nature seems so random and chaotic
that it usually makes us pause when we see something so ordered as a nautilus shell, or a honeycomb from a
beehive. You don’t need a particle accelerator or an electron microscope to see how quantum wave
propagation affects the world; all of these ordered behaviors do it on a grand scale.
Now that ultrawave theory can be used to describe how our universe operates, it is not too much of a
stretch to see that the idea of an anthropic principle of exacting tolerances is flawed. The strong anthropic
principle is the idea that somehow just the right conditions are present in the universe to allow life,
especially human life, to exist. The intimation is that if even one force were just a little different then we
would not exist. This idea only looks at the existing values without knowing why they exist as they do. The
logic falsely assumes that if one value changes, then the others do not compensate to make up for it, when in
fact they do.
Not to imply that other universes definitely exist, but if they do, any that have been created in a
similar manner to ours would show many of the same characteristics. Any speed of a wave in another
universe that we know in ours as c (the speed of light) will have its own Cc component that could be either
closer to, or farther from, being a power factor. It will not matter if that universe’s light speed is 186,000 or
286,000 miles per second, as long as it has an equivalent higher level velocity that is near to a power factor
of their light speed. The conditions that exist for that universe should be similar to ours where there will be
four different W1, W2, W3 and W4 waves, and these wave interactions will create a similar looking
environment to ours. Other universes will likely not be exactly like ours, only similar.
No longer is it necessary to think in terms of a slight change in a force such as electromagnetism
causing matter to evaporate and end all life. All forces are related to each other by simple rules that are
directly related to the proportional nature of the ingredients. In other words, the universe is the way it is
because matter is made the way it is, and there is nothing so special about it that changing one feature will
alter the outcome drastically. Don’t think that this diminishes the importance of the secondary interactions
that make life possible; on the contrary, it only serves to show how a complicated system can arise from
such simple basic principles, especially if there is a quantum aspect involved.
Chapter 18
Life in Our Universe
“You cannot hope to build a better world without improving the individuals. To that
end each of us must work for our own improvement and at the same time share a
general responsibility for all humanity, our particular duty being to aid those to whom
we think we can be most useful.”
Marie Curie
The current state of our local portion of the universe and its immediate future is an issue that many people
think about these days. One aspect of its current state is its life content. The diversity of plant and animal
life we see around us every day is already overwhelming. How much more diverse can life be?
As far as we know, Earth is the only place where life exists. Fermi once commented, “If there is
other intelligent life in our galaxy, where are they?” That is a very good question. In ideal circumstances it
doesn’t take more than a million years or so for a species to populate a galaxy of medium size once they
develop space travel and can reach speeds of at least one-tenth that of light speed. Knowing that our galaxy
has been around for about 12 billion years suggests that such an occurrence has not yet happened. The
question is why?
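The million-year figure is simple arithmetic, assuming a galaxy roughly 100,000 light years across and sustained travel at a tenth of light speed (both round numbers used only for illustration):

    # Time to cross a ~100,000 light-year galaxy at one-tenth light speed,
    # ignoring stops along the way.
    LY = 9.461e15            # meters per light year
    C = 2.99792458e8         # m/s
    YEAR = 3.156e7           # seconds per year

    t = (100_000 * LY) / (0.1 * C)
    print(t / YEAR, "years")   # about one million years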
There are only a few reasons why intelligent alien life has not visited us. The first reason is that
intelligent life is rarer than we imagine. Even if we find life on every suitable planet that we visit in the
future, it may not be intelligent life, or at least not intelligent technologically. Either life is abundant or it is
not, but assuming that it is doesn't ensure that we will find any intelligent life. Intelligence may, on average,
take longer to develop than the time our galaxy has existed, except under the special
conditions that produced life here on Earth.
Life is complicated and fragile. It may be that the conditions that produced intelligent life here on
Earth are not easily duplicated, and that the number of planets with intelligent life is indeed extremely rare.
Then there are the things that can go wrong, such as large comets or meteors wiping out entire species that
may have intelligence but are not yet able to avoid impacts, global warming or
freezing, or any of a host of other natural disasters. There have been many instances of life nearly being
eradicated from Earth, so those dangers must be a common occurrence. If something as simple as manmade
global warming can change the Earth so much that it becomes hostile to life, then the tolerance for earthlike
planets to develop intelligent life is very small indeed.
It takes food, water, and a relatively stable environment for life to exist. In all cases that means a
suitable planet that is the correct distance from a star very much like the sun. Larger stars would have a
wider tolerance band for the location of the planet, but they are less stable and usually not suitable. Smaller
stars would have smaller tolerance bands where the planet could be located, and although stable, they might
not allow planets of sufficient composition to form at the correct distance. Solar systems that produce life
would need to form under similar conditions, with nearly identical amounts of raw materials. What we may
find is that ours had a perturbation that is not the norm and so produced a system unlike most others. Even if
there are other planets that have similar origins, including a moon, which could very well be a crucial
component, how many can there be that have survived intact for long enough to produce intelligence?
The second reason for no aliens having visited us so far has to do with practicality. It is extremely
dangerous to travel at high velocity through space. Shielding oneself from radiation and particles of matter
is difficult and costly. It would take an enormous amount of energy per kilogram of transported mass
to travel between any two stars. More than likely we would not even see real alien beings arrive here, just
their mechanical surrogates who have been programmed to find raw materials and self-replicate while
exploring for signs of life. The amount of machine intelligence required to accomplish this task may lead to
a catch-22. If the machines are not intelligent enough, then they will bog down with ever-growing problems
until their progress is halted. If they are too intelligent, they may develop ideas of their own and not see the
need to fulfill the task of exploring the galaxy for their creators.
Reason number three is that technologically advanced civilizations tend to destroy themselves
through global warming, overpopulation, pollution, and war. Unfortunately, we seem to be headed for all of
these eventualities. One of our biggest problems is the need to be individuals while trying to relate to the
group as a whole, which makes it less likely to have everyone pulling for the same outcome on almost any
subject. Assuming that we find the resolve to overcome the problems associated with living peacefully with
nature and each other, we must still overcome other issues before attempting to colonize the entire galaxy.
Civilization from a human perspective is the trend of molding our surroundings into an orderly and
often useful conglomeration of manmade materials interwoven with natural plant growth. If the ability to
travel great distances remains confined to the level of our current scientific knowledge, why would we
expend such great amounts of energy? By maximizing remote mining operations and initiating cascading
collisions between astronomical bodies, the latter of which would bring outer system moons and asteroids
into the inner solar system where the ice can be turned into useful water, we can transform our entire solar
system so that it can support many quadrillions of people. Why leave this solar system when we have such a
good thing right in our own backyard?
The final reason is a simple matter of perspective. What we think of as important, another species
may not deem as such. Their need to explore may have been supplanted by other needs that we don’t
understand. It may be a matter of philosophical ideas, and could highlight just how different they are from
us. Their behavior may depend on whether they are open to communication or completely myopic; caring
only about what affects them. We don’t know if their goal is to join forces with other intelligent life,
conquer the galaxy and enslave others, or eradicate all life alien to themselves. There is no way to know
what an alien race will do until humankind comes face to face with them. We can only hope they will be
reasonably compassionate if they are more advanced than we.
What do aliens look like? There have been some discussions about the possibility of alien life being
based on silicon rather than carbon, but does it really make sense to skip a better element for a slightly
worse one? Probably not. What about their physical appearance; how varied is that likely to be? Of all the
different types of life we have on our planet, most have developed along a bilateral arrangement, meaning
they have pairs of things like arms, legs, eyes, etc. Humans have good appendages for manipulating objects,
so that is one feature of their construction that should be similar. They may have their nostrils above their
eyes, but their mouths would make more sense being below both, as is true of ours and of all Earth
mammals. Although it is possible, they aren't likely to be closely related to sea animals or flying animals.
What they should more closely resemble are humans or primates, rather than tentacle laden blobs or insects.
DNA is a self-replicating molecule. There cannot be very many possible combinations of elements
that result in self-replicating molecules when there are such a limited set of elements associated with star
evolution. Most of the elements that abound on Earth will abound on other planets in other solar systems
and should produce similar molecular compounds, even if those compounds arrive via comets or asteroids.
Any life that evolves on other worlds will have to be based on these same elements and similar molecules.
Why should we then expect them to create such different looking species of plants and animals? It would be
less surprising to see aliens that looked like the altered humans on the early Star Trek series than it would
for one to look like an octopus.
The Mars missions in recent years have increased awareness of the most likely habitat in the solar
system, other than Earth, where primitive life may have evolved. We still have a need to send more mobile
exploring devices to Mars, as well as other destinations, in order to examine the largest number of locations
and objects. This is not only to prove whether or not life exists elsewhere, but also where best to spend
resources. We can ill afford to go blindly about, looking under every rock to see what’s there. Our resources
need to be allocated in the most appropriate manner based on the knowledge we have, so the first step is to
re-evaluate what we think we know. Only then can we as a people determine what the best course of action
is in finding out if life exists elsewhere in the solar system, let alone the galaxy, and what we want to do
with that knowledge.
The following may be a paranoid point of view, but it is always better to be safe than sorry. The best
offense is a good defense; therefore, we should first fortify our home system from unwanted attention from
those who might want to use it for their own purposes. Once we have done that, we can move on to the next
closest habitable system and fortify that one as well, continuing throughout our solar neighborhood, so that
we are always in the strongest possible position before moving on to the next one. Because the timeline to
complete such a task for just one solar system could be thousands of years, it could be a long time before
humans need to be concerned about physical encounters with aliens, hostile or otherwise. This is especially
true if they are proceeding in this same prudent manner.
Chapter 19
The Future
“Prediction is always dangerous, especially when it is about the future.” Niels Bohr
A wealth of information may be obtainable if the free-roaming W waves can be isolated and examined.
Included in this information could be an almost exact age for the universe. This depends on our ability to
detect and measure the W1o and W2o waves for frequency, amplitude and curvature as they move
throughout the universe. The most significant fact about this new view of universal creation is the speed of
the W2 type waves. If W2o waves could somehow be modulated, messages could be sent at the Cc speed,
which equates to a linear velocity of at least Cc/2pi. This is a most exciting prospect, requiring many new
advances in science and technology. Some of you who understand the importance of this fact will
undoubtedly want to be a part of it.
The time delay of fifteen to twenty minutes at the speed of light from the Earth to Mars prevents
using remote controlled vehicles with any kind of usable feedback. If the development of communication at
the Cc velocity becomes possible, then it is no longer an obstacle. Sending remotely controlled mining
equipment to the asteroid belt would be possible, so that no humans would have to endure the long trip there
and back, not to mention the exposure to radiation and micrometeorites, both of which are unwelcome
aspects of space travel.
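To put numbers on the difference, the sketch below compares the one-way delay at light speed with the delay at Cc/2pi, the linear rate quoted above, using the Cc value from the appendix and an Earth-Mars separation of 3.0E+11 meters chosen only for illustration.

    # One-way signal delay Earth -> Mars at light speed versus at Cc/(2*pi).
    import math

    C  = 2.99792458e8        # m/s
    CC = 8.935915519742e16   # m/s, primary unit of velocity from Table 1
    d  = 3.0e11              # meters, an illustrative Earth-Mars distance

    print(f"at light speed: {d / C / 60:.1f} minutes")                       # ~16.7 minutes
    print(f"at Cc/2pi:      {d / (CC / (2 * math.pi)) * 1e6:.0f} microseconds")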
Because of the mixing of waves since the creation of our universe, the broadcast potential will
essentially be a sphere surrounding the modulation apparatus. It may be possible to focus the waves in
whatever direction is desired to increase the detectable signal, but it would require alignment of the matter
that is responsible for the wave creation. That is just one of the many details that will have to be overcome
to make ultrawave communication a reality. Devices that already exist that could help with these details are
particle accelerators. They can break down matter into its ultrawave state, so part of the work has already
been done. What is needed is a plan to understand how the W waves can be manipulated and apply that to
the design of the accelerators. Once we know the size limitations, we can determine if it is feasible to make
robotic machines with this type of communication device. If it is not possible to reduce the size down to
such a small footprint, we could at least send relay stations into the outer reaches of the solar system. From
there we could piggyback normal radio communications devices onto the robotic systems, which would still
give us near real-time remote control of operations throughout the solar system.
The SETI program (Search for Extra-Terrestrial Intelligence) is another area where ultrawaves
would be a huge help. It seems that if there are intelligent beings with advanced technologies living on other
worlds, they will not be looking for, or using, electromagnetic radiation (light and radio waves) to search
for other intelligent life. What they will be using instead is a device that allows them to manipulate W2o
waves. This will allow communication with other species in a matter of hours, days, or weeks, rather than
the decades, centuries or millennia that limit communication when using light-speed devices.
Having such a means of communication is another important reason not to spend unneeded time and
expense traveling outside the solar system. It would be like sending an important message from New York
to Tokyo by ancient sailing ship instead of doing it by satellite link.
A new century has just begun, and before it is over, we should find out if the manipulation of W2
waves is possible. Now that we know that communication at a speed much faster than that presented by
light may be possible, we should be expending our energies in that direction. If other intelligent species in
our galaxy have discovered the Cc wave speed, then there may be reason to believe that we could make our
first alien contact. Very few things could be considered more exciting than that prospect. It is possible that
they have learned how to do other particle manipulations, such as teleporting matter from one place to
another. Probably it is only simple elements that can be transported, but even that would be a huge leap in
the ability to colonize other planets in our solar system. Manipulation of W waves would also be a key
ingredient in facilitating galactic exploration.
Most people don’t take the time to realize that anyone physically leaving the solar system will likely
never be able to return, simply because it takes such a long time to get to any other star system and then to
come back. Even at the speed of light, communications with the nearest star systems would take so long that
it would likely not be practical to have a two-way system of communication. Anyone exiting our solar
system may as well be considered no longer part of humanity. They would essentially become the first alien
culture, as their needs and behaviors would quickly depart from how Earth cultures and behaviors will
evolve. Only a communication system capable of quick exchanges of information would prevent this type of
problem from occurring.
Quantum theory is the basis for many of the strange ideas seen in sci-fi movies and on television
shows where the nearly impossible seems routine. Just because our worldview will return to a more
classical cause and effect style of logic demonstrated by ultrawaves doesn’t mean that we have to abandon
all of those strange ideas. Except for time travel into the past, which isn't possible under cause-and-effect
systems, just about any physical system that follows these modified laws of physics can be considered possible.
With the simplification of matter presented to us by the understanding of the existence of ultrawaves, the
twenty-first century should prove to be even more technologically exciting than the twentieth.
Other advances will surely come from this new simplified view of reality. Particles that can be
described so easily by UT will lead to better explanations of how and why the elements behave as they do.
Once the construction of atoms has been well defined, molecules will be the next targets of simplification
and understanding, and soon we will be able to control chemical processes with better precision. The need
for trial and error experimentation will virtually be eliminated, except in those instances where little or no
data exists about certain isotopes. Medical advances on the gene level should be more easily achieved when
atoms and molecules are definitively described. Therefore, biological processes involving diseases will
likely be beneficiaries of the new understanding of material objects.
The idea that particles are simple creations with simple, understandable features was the initial
impetus for creating ultrawave theory. UT produces easily testable predictions and explains many heretofore
unexplainable phenomena. The number of new discoveries and explanations for existing features of our
universe will come with such rapidity that keeping up with the progress will tax even the most adamant
scientific followers. Being able to describe the universe so simply will take away much of the prevailing
mysticism of science and place it more firmly within the grasp of all people, not just those few who know
the current esoteric mathematical languages. There is no reason that more people should not be able to
understand the way that nature creates matter and energy without having to be schooled for an inordinate
amount of time. No longer should it be necessary to learn a myriad of unusual concepts that aren’t related in
any way to the physical world that we inhabit every day.
As early as the end of the last century, some scientists were concerned that the end of all there was
to learn was nearing. An idea as simple as replacing the square of a velocity with a higher-level velocity has
given physics a new perspective on reality. If such a simple thing can be discovered after a hundred years of
rapidly advancing technology, what else might be out there still waiting to be discovered? In all fairness
there is a little more to it than just the velocity issue. The idea of the two-dimensional objects required to
make ultrawaves possible is going to be hard for some to grasp, but it shouldn’t be as difficult as
understanding relativity theory.
Whatever unknowns lie beyond our current understanding will involve a paradigm shift comparable to
that of ultrawaves over quantum theory. It will require not only the ability to come up with the ideas, but
also the ability to comprehend their implications. Considering that over two hundred years passed between
the development of Newtonian mechanics and that of relativity, but less than one hundred years elapsed
between relativity and ultrawaves, the next jump should occur about the middle of this century. Only our
children’s children may understand all of those ideas completely, as it requires an almost intuitive sense of
understanding that comes from learning complex subjects at an early age. We can only imagine what
strange new wonderful ideas will come from their minds.
Progress in intellectual endeavors and mental acuity may be intimately linked to environmental and
emotional welfare. People having to worry about their survival may stifle some of the creativity necessary
for continuation of our technological progress. While some of the global warming we are experiencing now
may be natural, much of it is due to the waste products given off by the human species. We must find ways
to use renewable energy to keep the planet as much in balance as possible and keep the greenhouse gases
responsible for it at as low a level as possible. It is imperative that we learn these lessons now, so that future
generations can enjoy a more relaxed lifestyle.
Limiting population growth is probably the biggest concern facing our species. Mark Twain was
quoted as saying, "You should buy land; they aren't making any more of it." This was very prophetic, as the
limited supply of land is one of the most important constraints on population growth. There is a limited
supply of resources that is nearing the point of no return in its ability for self-renewal, especially in the
seafood category. More effort needs to be expended in finding ways to use what we have left to maximum
benefit. Unless the human race can commit to limiting its procreation to achieve a near-constant population,
we are probably doomed to suffering a shortfall of resources that will lead to further strife and war. It could
eventually lead to the
total collapse of society as we know it now.
The future is not any more ours than it is our children's. We can no longer be selfish and unconcerned
about what happens to future generations. Let's try to leave the Earth a better place than the one we
now see in our future: an Earth worth living on. A healthy Earth is likely to be the only one
that will allow humanity to survive. Our planet must be a place where the free flow of ideas and the capacity
to innovate are encouraged and rewarded. This is especially true of the things that will make our resources
last, or of finding ways to make them renewable. We must be able to work together as a species for the good
of all of us, not just for now, but also far into the future. Right now this planet is our only home, so we must
take care of it. It must be a place we are proud of, and we should never have to feel embarrassed about
passing it on to our descendants.
APPENDIX
Table 1
List of applicable Physical Constants from NIST
(National Institute of Standards and Technology)
[Listed in ascending exponent order with their currently accepted units]
(The numbers in parentheses represent the standard uncertainty in the last two or three digits of each value.)
Constant                         Symbol       Value                     Units

Planck constant                  h            6.62606896(33)E-34        Joule-seconds
Electron mass                    me           9.10938215(45)E-31        kilograms
Muon mass                        mμ           1.88353130(11)E-28        kilograms
Atomic mass constant             mu           1.660538782(83)E-27       kilograms
Proton mass                      mp           1.672621637(83)E-27       kilograms
Neutron mass                     mn           1.674927211(84)E-27       kilograms
Tau mass                         mτ           3.16777(52)E-27           kilograms
Deuteron mass                    md           3.34358320(17)E-27        kilograms
Deuteron magnetic moment         μd           4.33073465(11)E-27        Joule/Tesla
Helion mass                      mh           5.00641192(25)E-27        kilograms
Triton mass                      mt           5.00735588(25)E-27        kilograms
Nuclear magneton (proton)        μN           5.05078324(13)E-27        Joule/Tesla
Alpha particle mass              mα           6.64465620(33)E-27        kilograms
Proton magnetic moment           μp           1.410606662(37)E-26       Joule/Tesla
Triton magnetic moment           μt           1.504609361(42)E-26       Joule/Tesla
Electron magnetic moment         μe           9.28476377(23)E-24        Joule/Tesla
Bohr magneton (electron)         μB           9.27400915(23)E-24        Joule/Tesla
Natural unit of time             Tn           1.2880886570(18)E-21      seconds
Elementary charge                e or q       1.602176487(40)E-19       Coulombs
Electronvolt                     eV           1.602176487(40)E-19       Joules
Neutron Compton wavelength       λC,n         1.3195908951(20)E-15      meters
Proton Compton wavelength        λC,p         1.3214098446(19)E-15      meters
Natural unit of energy           me·c2        8.187104138(41)E-14       Joules
Natural unit of length           λC/2π        3.8615926459(53)E-13      meters
Electron Compton wavelength      λC           2.4263102175(33)E-12      meters
Electric constant                ε0           8.854187817…E-12          Farad/meter
Bohr radius                      a0           5.2917720859(36)E-11      meters
Gravimetric constant             G            6.67428(67)E-11           m3/(kg·s2)
Magnetic constant                μ0           1.2566370614E-6           Newtons/Ampere2
Fine structure constant          α            7.2973525376(50)E-3       dimensionless
Acceleration of gravity          gn           9.80665                   meters/second2
Rydberg constant                 R∞           1.0973731568527E+7        per meter
Natural unit of velocity         c            2.99792458E+8             meters/second
Coulomb's constant               k            8.9875517874E+9           Joule-meters/Coulomb2

Specialized Ultrawave Theory Constants

W2 wave incremental mass         mi           3.7114010922E-26          kg/m
Spin-½ particle constant         R/r · x/m    1.0298721205E+14          A·m2/kg   (x is the magneton)
Primary unit of velocity         Cc           8.935915519742E+16        meters/second
(Better mass measurement of the electron, proton and neutron will affect the values of many of the
preceding constants relating to particles; however, some constants are fixed, such as the speed of light, and
will not change unless the definition is changed.)
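For readers who wish to use these numbers in their own calculations, the short sketch below (written in
Python purely for illustration; it is not part of the NIST material, and the helper name parse_nist is a
hypothetical routine introduced only for this example) shows one way to unpack the parenthesized-uncertainty
notation, and cross-checks that the tabulated natural unit of energy is simply me·c².

    # Illustrative only: unpack the "value(uncertainty)Eexp" notation used in Table 1.
    def parse_nist(entry):
        """Return (value, standard uncertainty) from a string such as
        '6.62606896(33)E-34'.  The digits in parentheses are the
        uncertainty in the last digits of the quoted value."""
        mantissa, _, exponent = entry.partition("E")
        digits, _, rest = mantissa.partition("(")
        unc_digits = rest.rstrip(")")
        decimals = len(digits.split(".")[1]) if "." in digits else 0
        value = float(digits) * 10.0 ** int(exponent)
        uncertainty = float(unc_digits) * 10.0 ** (int(exponent) - decimals)
        return value, uncertainty

    h, h_unc = parse_nist("6.62606896(33)E-34")        # Planck constant
    print(f"h = {h:.8e} +/- {h_unc:.1e} Joule-seconds")

    # Cross-check: the 'natural unit of energy' entry should equal me*c^2.
    me, _ = parse_nist("9.10938215(45)E-31")           # electron mass, kg
    c = 2.99792458E+8                                  # speed of light, m/s (exact)
    print(f"me*c^2 = {me * c**2:.8e} J  (Table 1 lists 8.18710438(41)E-14 J)")

The same routine can be applied to any entry above that carries a parenthesized uncertainty; entries without
one, such as the defined speed of light, are exact by definition.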
Table 2
Atomic Radii
[Listed spin is for the major naturally occurring isotope or radioisotope; the radioisotope with the longest
half-life is the one used. UT calculated radii are for single spin-1/2 particles with ideal spin.]
Element         Diatomic radius    Covalent radius    UT radius        Spin
Hydrogen        .7413E-10 m        .37E-10 m          .2393E-10 m      1/2
Helium          3.00E-10 m         .32E-10 m          .9502E-10 m      0
Lithium         3.039E-10 m        1.34E-10 m         1.648E-10 m      3/2
Beryllium       2.226E-10 m        .90E-10 m          2.139E-10 m      3/2
Boron           1.589E-10 m        .82E-10 m          2.567E-10 m      3/2
Carbon          1.426E-10 m        .77E-10 m          2.851E-10 m      0
Nitrogen        1.0976E-10 m       .75E-10 m          3.339E-10 m      1
Oxygen          1.20741E-10 m      .73E-10 m          3.798E-10 m      0
Fluorine        1.418E-10 m        .71E-10 m          4.510E-10 m      1/2
Neon            3.13E-10 m         .69E-10 m          4.790E-10 m      0
Sodium          3.716E-10 m        1.54E-10 m         5.457E-10 m      3/2
Magnesium       3.197E-10 m        1.30E-10 m         5.770E-10 m      0
Aluminum        2.863E-10 m        1.18E-10 m         6.405E-10 m      5/2
Silicon         2.352E-10 m        1.11E-10 m         6.667E-10 m      0
Phosphorus      2.21E-10 m         1.06E-10 m         7.353E-10 m      1/2
Sulphur         2.05E-10 m         1.02E-10 m         7.612E-10 m      0
Chlorine        1.891E-10 m        .99E-10 m          8.416E-10 m      3/2
Argon           3.72E-10 m         .97E-10 m          9.483E-10 m      0
Potassium       4.544E-10 m        1.96E-10 m         9.281E-10 m      3/2
Calcium         3.947E-10 m        1.74E-10 m         9.514E-10 m      0
Scandium        3.212E-10 m        1.44E-10 m         1.067E-9 m       7/2
Titanium        2.896E-10 m        1.36E-10 m         1.136E-9 m       0
Vanadium        2.622E-10 m        1.25E-10 m         1.209E-9 m       7/2
Chromium        2.498E-10 m        1.27E-10 m         1.234E-9 m       0
Manganese       2.731E-10 m        1.39E-10 m         1.304E-9 m       5/2
Iron            2.482E-10 m        1.25E-10 m         1.3257E-9 m      0
Cobalt          2.506E-10 m        1.26E-10 m         1.399E-9 m       7/2
Nickel          2.492E-10 m        1.21E-10 m         1.393E-9 m       0
Copper          2.556E-10 m        1.38E-10 m         1.5085E-9 m      3/2
Zinc            2.665E-10 m        1.31E-10 m         1.552E-9 m       0
Gallium         2.442E-10 m        1.26E-10 m         1.655E-9 m       3/2
Germanium       2.45E-10 m         1.22E-10 m         1.724E-9 m       0
Arsenic         2.49E-10 m         1.19E-10 m         1.7785E-9 m      3/2
Selenium        2.321E-10 m        1.16E-10 m         1.8745E-9 m      0
Bromine         2.284E-10 m        1.14E-10 m         1.897E-9 m       3/2
Krypton         4.04E-10 m         1.10E-10 m         1.989E-9 m       0
Rubidium        4.95E-10 m         2.11E-10 m         2.029E-9 m       5/2
Strontium       4.303E-10 m        1.92E-10 m         2.080E-9 m       0
Yttrium         3.551E-10 m        1.62E-10 m         2.1105E-9 m      1/2
Zirconium       3.179E-10 m        1.48E-10 m         2.1655E-9 m      0
Niobium         2.858E-10 m        1.37E-10 m         2.2055E-9 m      9/2
Molybdenum      2.725E-10 m        1.45E-10 m         2.2775E-9 m      0
Technetium      2.703E-10 m        1.56E-10 m         2.326E-9 m       6
Ruthenium       2.65E-10 m         1.26E-10 m         2.399E-9 m       0
Rhodium         2.69E-10 m         1.35E-10 m         2.443E-9 m       1/2
Palladium       2.751E-10 m        1.31E-10 m         2.526E-9 m       0
Silver          2.889E-10 m        1.53E-10 m         2.5606E-9 m      1/2
Cadmium         2.979E-10 m        1.48E-10 m         2.6685E-9 m      0
Indium          3.251E-10 m        1.44E-10 m         2.7256E-9 m      9/2
Tin             2.81E-10 m         1.41E-10 m         2.818E-9 m       0
Antimony        2.90E-10 m         1.38E-10 m         2.8904E-9 m      5/2
Tellurium       2.864E-10 m        1.35E-10 m         3.029E-9 m       0
Iodine          2.666E-10 m        1.33E-10 m         3.0125E-9 m      5/2
Xenon           4.39E-10 m         1.30E-10 m         3.1167E-9 m      0
Cesium          5.309E-10 m        2.25E-10 m         3.155E-9 m       7/2
Barium          4.347E-10 m        1.98E-10 m         3.260E-9 m       0
Lanthanum       3.739E-10 m        1.69E-10 m         3.2974E-9 m      7/2
Cerium          3.65E-10 m         1.85E-10 m         3.3231E-9 m      0
Praseodymium    3.64E-10 m         1.85E-10 m         3.3469E-9 m      5/2
Neodymium       3.628E-10 m        1.85E-10 m         3.4182E-9 m      0
Promethium      no data            1.85E-10 m         3.44205E-9 m     5/2
Samarium        3.579E-10 m        1.85E-10 m         3.6085E-9 m      0
Europium        3.989E-10 m        1.85E-10 m         3.6323E-9 m      5/2
Gadolinium      3.57E-10 m         1.80E-10 m         3.7511E-9 m      0
Terbium         3.525E-10 m        1.75E-10 m         3.7749E-9 m      3/2
Dysprosium      3.503E-10 m        1.75E-10 m         3.8937E-9 m      0
Holmium         3.486E-10 m        1.75E-10 m         3.9175E-9 m      7/2
Erbium          3.468E-10 m        1.75E-10 m         3.9413E-9 m      0
Thulium         3.447E-10 m        1.75E-10 m         4.0126E-9 m      1/2
Ytterbium       3.88E-10 m         1.75E-10 m         4.1077E-9 m      0
Lutetium        3.435E-10 m        1.60E-10 m         4.1534E-9 m      7/2
Hafnium         3.127E-10 m        1.50E-10 m         4.237E-9 m       0
Tantalum        2.86E-10 m         1.38E-10 m         4.2954E-9 m      7/2
Tungsten        2.741E-10 m        1.46E-10 m         4.364E-9 m       0
Rhenium         2.741E-10 m        1.59E-10 m         4.420E-9 m       5/2
Osmium          2.675E-10 m        1.28E-10 m         4.516E-9 m       0
Iridium         2.714E-10 m        1.37E-10 m         4.563E-9 m       3/2
Platinum        2.775E-10 m        1.28E-10 m         4.631E-9 m       1/2
Gold            2.884E-10 m        1.44E-10 m         4.676E-9 m       3/2
Mercury         3.005E-10 m        1.49E-10 m         4.762E-9 m       0
Thallium        3.408E-10 m        1.48E-10 m         4.852E-9 m       1/2
Lead            3.50E-10 m         1.47E-10 m         4.919E-9 m       0
Bismuth         3.09E-10 m         1.46E-10 m         4.961E-9 m       9/2