Algorithm of Truth
Abstract
When a new theory harmonizes all known facts and successfully resolves the paradoxes and crises of the previous dominant model, it has effectively superseded it. Such paradigm shifts often render the vocabulary of the old theory obsolete. This is the case with Extended Galilean-Newtonian Mechanics (EGNM). EGNM introduces a working mechanical model of the universe that aligns with all observational data, is internally consistent, and mathematically rigorous.
To assist the reader in verifying these claims, we provide seven "No-Escape" AI protocols in the footnotes. These prompts allow for real-time validation of the empirical inconsistencies within the current standard model of cosmology - a model in "Crisis!" For if the pillars of the current paradigm (such as the invariant speed of light \(c\), Dark Energy, and actual infinities) are falsified, the framework itself has collapsed. Below, select a footnote, copy the full script into your favourite AI, and confirm the results at the bottom of our prompts against the ones you produce yourself! The prompts cover:
- Why there are no Actual Infinities.
- Key details confirmed from the data of Initial Conditions.
- The significance of the energy range of the EM spectrum.
- How the Sagnac Effect experiment falsifies General Relativity.
- How light is proven to follow simple velocity addition, like all other entities in the universe.
- How empirical evidence for a decelerating expansion corrects a previous Nobel Prize-winning mistake, falsifying Dark Energy and the Cosmological Constant (Λ).
- What Spacetime and the Cosmic Web sharing the same 3D geometry means about 4D Spacetime.
The surprising prompt results answer the question of why this white paper exists. For if there are no actual infinities, the universe must be finite. Doesn't the Sagnac Effect, by falsifying the universal speed of light (\(c\)) for all observers, debunk Einstein's theory of General Relativity? If the decelerating expansion of our universe falsifies both the cosmological constant and the cosmological principle, does that not mean General Relativity has no legs left to stand on? This fact-based exposition explains all this - and more.
BACKGROUND
INTRODUCTION: The Mechanical Extension
Newtonian mechanics is unrivalled in everyday motion but has historically diverged from experimental results at high velocities and in regions of extreme gravity. This raises a fundamental question: Can the Galilean and Newtonian frameworks of absolute Time and Space be extended to explain these high-energy phenomena? This paper demonstrates that they can. By identifying the geometric role of the Fine Structure Constant and the macroscopic quantum behavior of the Cosmic Web, we begin to build a new, cohesive mechanical extension of the classical model.
Understanding the Role of the Fine Structure Constant (𝛼)
The Fine Structure Constant, also called Alpha (\(𝛼 ≈ 1/137\)), is a dimensionless, geometric proportionality constant. It is the fundamental link that balances disparate domains of physics. Because 𝛼 is a ratio independent of unit systems, it points to an underlying mechanical structure. By rearranging the constants within the equation for 𝛼, we derive the exact velocity of light (\(c\)): 299,792,458 m/s. This is not a "magic" limit, but the maximum speed of electromagnetic propagation defined by the 𝛼 blueprint. Its formula is found below:
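The standard expression for Alpha is \(𝛼 = e^{2}/(4\pi\epsilon_{0}\hbar c)\), equivalently \(𝛼 = e^{2}/(2\epsilon_{0}hc)\). As a quick numerical check, rearranging it for \(c\) reproduces the figure quoted above. The sketch below is illustrative only; variable names are our own and CODATA 2018 values are assumed:

```python
# CODATA 2018 values; variable names are ours for this sketch.
e = 1.602176634e-19      # elementary charge (C)
eps0 = 8.8541878128e-12  # vacuum permittivity (F/m)
h = 6.62607015e-34       # Planck constant (J*s)
alpha = 7.2973525693e-3  # fine structure constant (~1/137.036)

# alpha = e^2 / (2 * eps0 * h * c)  ->  c = e^2 / (2 * eps0 * h * alpha)
c = e**2 / (2 * eps0 * h * alpha)
print(f"{c:.0f} m/s")  # ~299,792,458 m/s
```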
The Properties of Left Handed Matter ...
Of the three fundamental forces of nature, electromagnetism and the strong nuclear force are symmetrical, while the weak nuclear force is asymmetrical, producing exclusively left-handed particles. Conversely, right-handed normal matter particles are "sterile" to this force. In the normal matter domain, decoherence (the transition from quantum to classical states) is driven by the weak force and electromagnetism. However, a substance possessing a right-handed asymmetry that is non-electromagnetic would be immune to decoherence. That means it could remain in quantum states (invisible) at sizes larger than atoms. This macro-quantum state means we could have quantum objects the size of cars, houses, skyscrapers, mountains, planets, galaxies, etc.
And Right Handed Matter
Interestingly, Alpha (𝛼) is the Mechanical Threshold that allows atoms to exist: without 𝛼, there is no electromagnetic interaction. We know that the dark matter Cosmic Web (the largest structure in the universe) is non-electromagnetic, non-decaying, and conserved. This raises the intriguing possibility that perhaps the difference between normal and dark matter is merely the presence or absence of 𝛼. Is normal matter simply dark matter scaled by 𝛼? If 𝛼 is the proportionality blueprint for atoms, it must be scaling them from a pre-existing source. Could that source be dark matter and how can we be sure? Fortunately, there is an empirical, mathematically robust method to establish just that: Noether’s theorem!
Emmy Noether & The New Universal Symmetry
Noether’s Theorem proves that every fundamental conservation law corresponds to a universal symmetry. Since the Cosmic Web represents a second category of conserved matter, we must recalibrate our understanding of thermodynamics. The First Law, which states the energy of the universe is constant, has historically only ever accounted for the normal matter domain.
By recognizing these two domains together, we now define a new Universal Conservation Value. Mathematically, this necessitates the discovery of a new, as yet, hidden universal symmetry as defined in points one and two below:
- Normal Matter Domain: Defined by left-handed decay and electromagnetic decoherence.
- Dark Matter Domain: Defined by right-handed asymmetry and macro-quantum coherence.
Noether’s Theorem provides the independent confirmation we were looking for. The missing symmetry within the normal matter domain is perfectly balanced by the right-handed, non-decaying nature of the dark matter domain. Alpha is the one-way scaling factor that mediates between them. It is important to understand what that means. Without 𝛼, the universe would not be devoid of matter; it would just all be “Dark.”
That means dark matter (so named because its qualities were unknown) can now officially be renamed "Proto-matter" (meaning "original" matter), since not only are its properties now understood, but they clarify that it isn't "exotic" normal matter, or a non-self-interfering diffuse gas, but the foundational state of all matter. Put another way: dark matter is the source of normal matter. Thus the proto-matter Cosmic Web exists in a macro-scale quantum-coherent state, acting as the one-piece scaffolding upon which all luminous normal matter sits.
Galilean & Newtonian Mechanics
Galileo Galilei’s physics was rooted in the intuitive notions of absolute time and space, governed by simple velocity addition (\(c \pm v\)). If you are on a train moving at 50 km/h (\(v\)) and you throw a ball forward at 10 km/h (\(c\)), someone standing on the ground sees the ball moving at 60 km/h (\(c + v\)). That is Simple Velocity Addition. In this framework, the speed of light is not a universal constant: an observer moving toward a light source would measure a velocity of \(c + v\), and an observer moving in the same direction as the light would measure a total value of \(c - v\). Both observers (in the same location) would agree on the time of day, since time is absolute, but the speed of light would be a variable that depends on each observer's absolute motion.
The speed of light being variable doesn't mean that light itself is going faster in some circumstances and slower in others. It means its effective velocity depends on the motion of the observer: \(c + v\) or \(c - v\). To see how, replace the ball with light from a torch. The effective velocity of that light would be the speed of the train (\(v\)) plus the speed of light (\(c\)): with \(c\) expressed as 300,000 km/s and a train moving at 50 km/s, the result would be 300,050 km/s. Newton agreed and incorporated these principles into his theories of motion and gravity, correctly presuming they applied to all entities, including light.
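This simple velocity addition can be sketched in a few lines of Python (the helper name is our own, purely illustrative):

```python
def effective_speed(c, v, same_direction=True):
    """Galilean simple velocity addition: c + v when the motions
    combine, c - v when they oppose (all speeds in the same units)."""
    return c + v if same_direction else c - v

# Ball thrown forward at 10 km/h from a train moving at 50 km/h:
print(effective_speed(10, 50))        # 60
# Torch beam (c ~ 300,000 km/s) from a platform moving at 50 km/s:
print(effective_speed(300_000, 50))   # 300050
```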
Michael Faraday vs. James Clerk Maxwell
The departure from this mechanical clarity began with James Clerk Maxwell. Michael Faraday, the pioneer of electromagnetic theory, defined light as a disturbance along physical, invisible “lines of force.” He strongly rejected the need for a medium, the much revered “Aether” of his day. However, Maxwell, who formalized Faraday’s insights mathematically, was a strong proponent of the luminiferous aether. He thus attributed the electromagnetic dynamics that Faraday experimentally defined as properties of electric and magnetic "lines of force" to the aether itself. He called them permittivity and permeability, respectively.
This shifted the speed of light from a variable vector to a universal constant - making it the same for all observers regardless of their relative motion to it. Albert Einstein later extended this by conceptually merging Space and Time into a single, relative four-dimensional entity: 4D Spacetime. He published his theories of special and general relativity in 1905 and 1915, respectively. However, since Galileo and Newton relied exclusively on empirical evidence, we must ask: how would their mechanical framework have accommodated modern data regarding high-velocity particles?
Nature’s Two Energy Limits Give us a Clue: The Principle of Parsimony
Matter does not require the modulating effect of an external factor (Spacetime) to self-regulate internally, since it has two such energy limits, and how it handles one informs how it handles the other. At the lower thermal limit (0 K), the energy required to reach absolute zero becomes "infinitely great." This ensures no physical entity can reach absolute zero (0 K), as that would imply matter without energy - a physical contradiction. Following Newton’s Principle of Parsimony - which dictates that we assign the same causes to the same natural effects - we must conclude that matter regulates its upper velocity limit (\(c\)) through the same internal energy dynamics.
Extending the Galilean-Newtonian Mechanical Framework: Exponential Factors
Einstein’s \(E = mc^{2}\) demonstrates that mass and energy both possess inertia. As an object reaches velocities \(≥ 0.5c\), the energy used for acceleration doesn’t just affect momentum; it also physically becomes part of the object’s total energy content, effectively adding a second form of inertia. We are then presented with two distinct types of inertia: the original Rest Mass Inertia (\(m_{rest}\)) and the new Kinetic Inertia (\(m_{kinetic}\)).
Geometric Resolution and the Omega Factor (Ω)
Because these two values represent momenta arising from different causes, mathematically they possess a relationship of Orthogonality (visualize the two sides forming the right angle of a triangle: they are not the same). Orthogonal values cannot be added linearly (\(3 + 4 = 7\)); they must be resolved geometrically, using the Pythagorean theorem (\(3^{2} + 4^{2} = 5^{2}\)) to compute the resulting value - the hypotenuse. This "repeated multiplication," or exponential behavior, accurately resolves the non-linear resistance to acceleration observed as objects approach \(c\). Geometric resolution uses the relationship between the components of a shape to solve real-world problems that map to the properties of that shape (orthogonality), whether in direction or in type. In the case of a boat crossing a river with a strong current, the orthogonality is in direction. In the case of inertial values, the orthogonality is defined by type, since both inertial factors are traveling in the same direction but are of different character. Figure 1 defines the Pythagorean Theorem. Figure 2 applies it to what we will henceforth call Exponential Dynamics.
This shape-based (geometric) resolution explains the non-linear increase in resistance to acceleration as an object approaches \(c\), all within a mechanical framework of absolute space and time. It explains real world behaviours that objects display only after reaching and surpassing the threshold of \(≥0.5c\). See Figures 3 and 4.
- VS -
Omega (Ω) is a Scaling Factor
In truth, the Pythagorean principle introduced to account for the second inertial value must enter as a scaling factor, for the following reasons. Its value - unlike rest mass - is not constant: it is negligible at velocities below \(0.5c\), has a duplicating effect at about \(86.6\%\) of the speed of light (\(v ≈ 0.866c\)), and keeps growing with sharp non-linearity as the speed approaches, but never quite reaches, \(c\) (\(≈ 0.99c\)). It is for this reason that the classic Newtonian formula is accurate for everyday circumstances but falls short once velocities reach the \(≥ 0.5c\) threshold.
We identify this scaling factor as Omega (Ω). While its mathematical form is identical to the Lorentz factor (γ), its explanation is radically different. In Extended Galilean-Newtonian Mechanics (EGNM), we do not invoke "Relativistic Effects." Instead, we identify these behaviors as Exponential Dynamics. This is because they are not caused by an orthogonal relationship between Space and Time in a theoretical 4D Spacetime; but by a second inertial factor (exponent) becoming relevant at the threshold velocity of \(0.5c\). By appending the Omega factor to Newton’s formula (\(p = mvΩ\)), the extended Galilean-Newtonian framework accurately predicts the behavior of systems under extreme gravity or high velocity - including the synchronization of GPS clocks - through a purely mechanical explanation within absolute space and time.
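A minimal numerical sketch of the Omega factor and the extended momentum formula \(p = mvΩ\) (function names are ours; the form matches the factor defined above):

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def omega(v):
    """Omega scaling factor; identical in form to the Lorentz factor."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def momentum(m, v):
    """Extended Newtonian momentum: p = m * v * Omega."""
    return m * v * omega(v)

# Negligible below 0.5c, ~2x at 0.866c, sharply non-linear near c:
for frac in (0.1, 0.5, 0.866, 0.99):
    print(f"v = {frac:5.3f}c -> Omega = {omega(frac * C):.4f}")
```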
Below, note how the Omega factor is identical in formulation to the Lorentz factor, but with wildly divergent reasoning. This creates a duel of first principles. Since both predict real-world behaviour equally well, we will need another sort of experiment to distinguish which of them is actually mapping reality. Curiously, the evidence is also found within the engineering of GPS clocks.
Its formula is exactly the same as the one for the Lorentz factor (𝛾) that Einstein's relativity uses for calculating the same effects, but with a radically different explanation. The formula for the Lorentz factor follows:
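The standard form of the Lorentz factor (which, as stated, Ω shares) is:

\[
\gamma = \frac{1}{\sqrt{1 - \dfrac{v^{2}}{c^{2}}}}
\]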
The GPS Audit: Internal Consistency vs. Spatio-Temporal Fabrication
The synchronization of GPS clocks is frequently cited as the definitive proof of Einstein’s Relativistic Effects. According to special relativity, clocks in motion (like those on GPS satellites) tick more slowly compared to stationary clocks on Earth due to “relativistic effects.” Special relativity is said to cause the satellite clocks to lag behind Earth clocks by about 7 microseconds per day. It also says that through general relativity, clocks in a weaker gravitational field (like those in orbit) tick faster than those in a stronger field (like on Earth's surface). This results in satellite clocks gaining about 45 microseconds per day compared to Earth clocks. This leads to a net gain of 38 microseconds per day that must be corrected for accurate GPS positioning. While the data is verified, the interpretation is not.
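The quoted figures can be reproduced from the standard textbook formulas for these two effects. The sketch below uses approximate GPS orbital parameters; all names and values are our own assumptions for illustration:

```python
import math

C = 299_792_458.0    # speed of light (m/s)
GM = 3.986004418e14  # Earth's gravitational parameter (m^3/s^2)
R_EARTH = 6.371e6    # mean Earth radius (m)
R_ORBIT = 2.656e7    # approximate GPS orbital radius (m)
DAY = 86_400.0       # seconds per day

v = math.sqrt(GM / R_ORBIT)  # orbital speed, roughly 3.9 km/s

# Velocity effect: satellite clock lags (microseconds per day).
sr_us = (v**2 / (2 * C**2)) * DAY * 1e6
# Gravity effect: satellite clock gains (microseconds per day).
gr_us = (GM * (1 / R_EARTH - 1 / R_ORBIT) / C**2) * DAY * 1e6

print(f"velocity effect: -{sr_us:.1f} us/day")   # ~7 us/day
print(f"gravity  effect: +{gr_us:.1f} us/day")   # ~45 us/day
print(f"net:             +{gr_us - sr_us:.1f} us/day")  # ~38 us/day
```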
Nature is a self-regulating mechanical system. At normal velocities, the relationship between rest mass and momentum is linear. However, as energy levels approach the ≥0.5c threshold, the added kinetic energy becomes a physical factor that alters the system's internal environment. The governing equation for electromagnetic interaction, Alpha (𝛼), must compensate for this energy increase to maintain system balance. Since 𝛼 contains only one variable - the speed of light - it enforces internal consistency by decreasing the local speed of light (\(c_{local}\)). This slows the rate of all internal interactions. Relativists mistake this mechanical slowing of physical processes for the slowing of "Time" itself.
The Duel of First Principles: Which Orthogonality?
Both EGNM and Relativity utilize the Pythagorean theorem to predict these non-linear behaviors, which is why they appear empirically equivalent at the level of raw data. However, they are fighting over the "Real Estate" of a single right angle.
- Relativity claims the Pythagorean theorem maps the orthogonal relationship between Space and Time (4D Spacetime).
- EGNM maintains it maps the orthogonal relationship between Rest Mass Inertia and Kinetic Inertia.
Because a Euclidean triangle possesses only one right angle, these interpretations are mutually exclusive. The experiment that settles this duel is the very system used to "prove" Relativity: GPS.
The Sagnac Effect: The Falsification of Invariant \(c\)
The core of Relativity is the Second Postulate: the claim that \(c\) is a universal invariant for all observers. This claim is fundamentally violated by the Sagnac Effect: two light beams are sent in opposite directions around a closed loop on a rotating platform. Upon returning to the source, they produce a measurable interference pattern. Laboratory observations - consistently verified through 2025 in high-precision laser-gyroscope technology - confirm that the beams do not arrive simultaneously. The beam moving with the rotation takes longer; the beam moving against it arrives sooner. This is the very definition of Galilean velocity addition (\(c \pm v\)).
To maintain synchronization between satellite clocks and ground receivers, GPS engineers must account for the Earth’s rotation by applying the Sagnac Correction (\(c \pm v\)). Because the Earth is also a rotating platform, every signal sent from a satellite to a ground receiver must be adjusted for the "Sagnac Effect." If the "Second Postulate" were a law of nature, this correction would be zero. You can verify these results yourself by copying the provided expert prompts into your favourite AI. The answers it will give are listed below:
- The Sagnac formula relies on \(c \pm v\).
- In this specific equation, \(c\) is treated as a Variable.
- A receiver in this context calculates two different relative speeds for light.
- This measured variability is incompatible with the Second Postulate of Relativity, which states light speed is always measured at \(c\) regardless of observer motion.
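For scale, the loop-closure time difference has a standard closed form, \(\Delta t = 4A\omega/c^{2}\), where \(A\) is the enclosed area and \(\omega\) the rotation rate. A sketch (names ours) for a loop at Earth's equatorial radius rotating with the Earth:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def sagnac_delay(radius_m, omega_rad_s):
    """Arrival-time difference between counter-propagating beams
    around a circular loop: dt = 4 * A * omega / c^2, A = pi*r^2."""
    area = math.pi * radius_m ** 2
    return 4.0 * area * omega_rad_s / C ** 2

earth_omega = 2.0 * math.pi / 86_164.0  # sidereal rotation rate (rad/s)
dt = sagnac_delay(6.378e6, earth_omega)
print(f"{dt * 1e9:.0f} ns")  # roughly 400 ns for an equatorial loop
```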
The Logical Verdict
Physics currently utilizes an Invariant \(c\) to justify "Time Dilation" while simultaneously using a Variable \(c\) (\(c \pm v\)) to ensure the GPS signal actually reaches its destination. Logically, a law (ϕ) and its negation (−ϕ) cannot both be true.
Since the variable nature of \(c\) is a mechanical requirement for global infrastructure, the Invariant \(c\) is empirically falsified. This settles the duel of first principles: the Pythagorean theorem cannot be mapping Space and Time, as that mapping requires an invariant c that does not exist. Instead, it is mapping the two forms of inertia defined by EGNM. We have cleared the theoretical obstruction of 4D Spacetime, allowing us to see the universe as a purely mechanical system based on WORK - but first, a dramatic and welcome plot twist.
Mr. Fletcher & a Suppressed 8th Grade Memory
At the very conclusion of writing this paper, I was seeking the most precise wording for Galilean velocity addition. I turned to a simple search for the term. To my profound shock, the results listed not two, but three fundamental functions of simple velocity addition:
“In simple scenarios, velocities can be added or subtracted based on their directions:
| DIRECTION | OPERATION | EXAMPLE |
|---|---|---|
| Same direction | Add | A car moving east at 30 m/s and another at 20 m/s has a total velocity of 50 m/s east. |
| Opposite direction | Subtract | A car moving east at 30 m/s and another moving west at 20 m/s has a total velocity of 10 m/s east. |
| Perpendicular directions | Use Pythagorean theorem | A boat crossing a river at 3 m/s and the river current flowing at 4 m/s results in a total velocity of 5 m/s at an angle. |
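The three operations in the table can be captured in a single sketch (Python; the function name is our own):

```python
import math

def combine_velocities(v1, v2, relation):
    """Galilean simple velocity addition, all three legs:
    'same' -> add, 'opposite' -> subtract,
    'perpendicular' -> resolve via the Pythagorean theorem."""
    if relation == "same":
        return v1 + v2
    if relation == "opposite":
        return v1 - v2
    if relation == "perpendicular":
        return math.hypot(v1, v2)  # sqrt(v1**2 + v2**2)
    raise ValueError(f"unknown relation: {relation!r}")

print(combine_velocities(30, 20, "same"))         # 50
print(combine_velocities(30, 20, "opposite"))     # 10
print(combine_velocities(3, 4, "perpendicular"))  # 5.0
```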
UNBELIEVABLE. I had spent 3 or 4 months operating under the presumption that Galilean mechanics was limited to the first two scenarios - the \(c \pm v\) dynamics of the Sagnac Effect. The shock was made more vivid because the example used was the exact one my 8th-grade teacher, Mr. Fletcher, had used to teach us the Pythagorean theorem in Toronto decades earlier.
In Ontario, Grade 8 students are typically 13 turning 14. I vividly remember that lesson; it focused on a boat’s effective speed as it crossed a river’s current. It was intuitively clear to all my classmates and me. By 13, we all understood the lesson of the Pythagorean hypotenuse instinctively, because we had all played sports and knew how to hit a moving target. We didn't need "curved spacetime" to understand that a boat moves faster diagonally; we just needed Mr. Fletcher’s everyday relatability of using maths to solve real-world problems. See Figure 7 below for how the hypotenuse is resolved using the geometry of a right-angle triangle.
Had I remembered that Grade 8 lesson earlier, the entire tone of this white paper would have been different. First, I would have bypassed all the unnecessary complexities of unraveling Relativity. I would have gone straight to the illustration in Figure 7, a simple intuitive explanation of Pythagorean maths that easily translates into exponential dynamics.
Second, on a personal note, I would have understood from the outset that I wasn’t pioneering a "new" theory; I was performing an emergency editorial correction on a grave historical oversight. There is something profoundly humbling and satisfying about realizing that the "complex" answer we’ve been chasing is actually the fundamental answer Galileo gave 400 years ago! We now understand empirically that the "magic" of Relativity was just a tragic misapplication of the third leg of Galilean addition.
The Pythagorean Theorem is NATIVE to Simple Velocity Addition!
This realization that the Pythagorean theorem is not the exotic "Relativistic" correction as modern physics lore treats it, but a native function of the third leg of Galilean velocity addition changed the role I played in this story. I felt less like Sir Isaac Newton and more like part of the audience in an M. Night Shyamalan movie - where shocking plot twists reveal that the answer had been in the room the entire time. It meant that credit for the paradigm that supersedes Relativity does not belong to me, but to Galileo and Newton.
While this framework was independently rediscovered through the arduous, months-long logic audit documented in these pages, the Law of Priority is absolute. In science, while all independent inventors are credited with the results of their work, the name on the discovery belongs to those who FIRST declared it to the world! That priority must also extend to the origin of using the Pythagorean principle in physics. This paper, therefore, is not an "extension" of classical mechanics, but a return to its full realization. The "E" is dropped. This is Galilean-Newtonian Mechanics (GNM). Had the third leg of Galilean addition (the Pythagorean resolution) been correctly applied to the orthogonal relationship of inertias, the detour into 4D Spacetime would have been avoided. The details of the how and why of that nearly impenetrable detour follow.
The Return to Sanity: A Restoration of Mechanical Reality
As Grade 8 maths demonstrates, a boat crossing a river moves on a diagonal path - the hypotenuse. This intuitive resolution of vectors is the foundation the \(20^{th}\)-century detour ignored. To understand why physics lost its way, we must return to the initial disagreement between Michael Faraday and James Clerk Maxwell. Faraday, the master of experiment, knew that light was a self-propagating disturbance along physical "lines of force." The mutual self-propagation of the electric and magnetic fields was an internal mechanism that didn’t require a medium (aether) to propagate. This has since been proven by science: electromagnetic radiation is known to be unique, not behaving like all other types of waves! However, this was not known in Maxwell’s time, and Maxwell was a devout believer in the "Aether" - a hypothetical static medium through which light was supposed to travel. It was "static" because it was supposed to fill the whole universe - where could it move to?
Maxwell committed the foundational Category Error: he ascribed the intrinsic properties of electromagnetic fields to this non-existent aether. A category error is a mistake in logic. It is a mistake in which something is incorrectly assigned to a category it does not belong to, or a property is attributed to something that does not exist. Some examples are saying: “The number 10 is red,” because numbers don’t have colours. Or asking the question: “Who plays the role of team spirit?” Because team spirit is a collective quality, not an individual role. Another example is saying: “Horses that have horns are called unicorns,” because there are no horses that have horns. Maxwell’s mistake is most like the last example. The horse represents the properties of magnetic and electric lines of force; the unicorn horn stands for the ether.
These dual properties were given the names "permeability" (by Lord Kelvin in 1872, for the factor associated with magnetism) and "permittivity" (by Oliver Heaviside in 1885, for the property associated with electric charge). By defining the speed of light through a medium, Maxwell built a "Logic Bomb" - an internal contradiction - into his equations: \(c = 1/{\sqrt{ϵ_{0}μ_{0}}}\), which I will now help you unpack. It’s easy for two reasons. One: such corrections are based on logic, not mathematics. Two: since the mistakes were compounded, with each new scientist doubling down on the mistakes of the previous one, when one starts to fall, the rest follow like dominoes.
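As a numerical aside, the expression \(c = 1/{\sqrt{ϵ_{0}μ_{0}}}\) does reproduce the measured speed of light from the two constants named above; a minimal check (CODATA 2018 values assumed, names ours):

```python
import math

eps0 = 8.8541878128e-12  # vacuum permittivity (F/m), CODATA 2018
mu0 = 1.25663706212e-6   # vacuum permeability (H/m), CODATA 2018

# Maxwell's relation: c = 1 / sqrt(eps0 * mu0)
c = 1.0 / math.sqrt(eps0 * mu0)
print(f"{c:.0f} m/s")  # ~299,792,458 m/s
```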
Why Non-Scientists Can Spot & Resolve the Mistakes
The logical error we’ve defined came with associated maths, of course. But because the root mistake is a category error - an error in logic, not mathematics - its resolution is not mathematical but the application of solid everyday logic. Not many people can solve maths problems, but everyone can use logic! We will use a simple mechanical rule from the equation for Work to ground our logic in reality.
How WORK Dictates Results
The nature of science is governed by the formula for Work: \(W = F⋅d\). The mechanics of universal interactions all follow this formula. Physical science is based on measurement. What does that mean? Within a mechanical system, work (\(W\)) requires a force (\(F\)) acting across a measurable distance (\(d\)) - or, to be more technical, a displacement. Science is the empirical cataloging of measurable interactions; if there is no physical displacement \(d\), no work can be done, and no measurement can exist. All waves that use a medium have something in common: all real media (water, air) have a velocity component, so the phenomenon - like a sound wave - can be measured whether its source is stationary or moving. Since Maxwell attributed real properties to a fake medium, what do you think would be missing?
If you said the “velocity component” all real media have, you are correct! Because it was a category error, Maxwell’s formula for light was different. While mathematically elegant, it lacked a physical “component” \(v\) for the velocity of the medium because the medium itself didn’t exist. When scientists tried to apply Maxwell’s math to a moving frame, the equations "fell apart" because they were trying to move a system (Maxwell’s permittivity and permeability) that wasn't dynamically attached to anything real.
The Dominoes Start to Fall
This leads us to the first domino: The Empirical Sagnac Anchor. In 1887, Michelson and Morley attempted to detect the "Aether Wind" using the speed of light (\(c\)). They used an interferometer - an instrument that splits a beam of light into two orthogonal directions on a horizontal plane. One arm was aligned parallel to the Earth's motion (forwards and backwards), while the other was perpendicular to it (left and right).
Expected Findings (The Aether Myth)
If the aether existed, the beam moving parallel to the Earth's motion would be buffeted by the "wind," creating a speed of \(c + v\) in one direction and \(c - v\) on the return. Meanwhile, the perpendicular beam would be forced to "aim upstream" against the aether current to hit the mirror, resulting in a slightly slower round-trip time calculated by the Pythagorean formula: \({\sqrt{c^{2} - v^{2}}}\). They expected to see a difference in these two travel times - a "fringe shift."
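Under these aether-model assumptions, the expected (never observed) timing difference between the two arms can be sketched numerically. The arm length is the often-quoted effective path length of the 1887 apparatus; all names and values are our own assumptions:

```python
import math

C = 299_792_458.0  # speed of light (m/s)
V = 29_780.0       # Earth's orbital speed (m/s)
L = 11.0           # effective arm length, 1887 apparatus (m), approximate

# Round-trip times the aether model predicted for the two arms:
t_parallel = 2 * L * C / (C**2 - V**2)          # along the "wind"
t_perpendicular = 2 * L / math.sqrt(C**2 - V**2)  # across the "wind"

dt = t_parallel - t_perpendicular
print(f"predicted timing difference: {dt:.2e} s")  # observed: none
```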
Actual Findings (The Work Formula in Action)
Despite their meticulous design, the experiment yielded a Null Result. There was no fringe shift. The Work Formula (\(W = Fd\)) proved that a non-existent aether cannot exert force (\(d = 0\)). This experiment (1887), coupled with the later Sagnac findings (1913), are two sides of the same coin: one showed null results for a non-existent source of motion; the other showed simple velocity addition (\(c + v\)) and subtraction (\(c - v\)) - when moving in the direction of motion and opposite to it, respectively - for a real entity. The Earth exists. And it really is a spinning platform, hence the real-world data! It's that simple. As for Michelson and Morley, the travel times remained identical not because the speed of light is "magic" or invariant, but because it is an independent projectile whose vector math naturally accounts for the motion of the source. If the source doesn't exist, there is nothing to add or subtract. Believers in the aether panicked and doubled down on Maxwell’s category error.
Compounding the Error
The second domino is the Transformation Collapse. Hendrik Lorentz, desperate to save Maxwell’s aether, proposed that the "Null" result was a trick of nature. He tried to justify the constant speed of light by tinkering with its mathematical definition: \(Speed = Distance/Time\). Faced with the stark choice between recognizing that the Aether was wrong and "fudging" the mathematics to justify a constant \(c\), Lorentz made a fatal choice: he insisted the speed of light (\(c\)) must be a "Magic Constant" that never changes.

How did this lead to "Bendable" Space and Time, you may ask? If \(c\) must be measured as exactly 299,792,458 m/s in all circumstances within the aether (as they interpreted the results), then the only way to make the math of \(Speed = Distance/Time\) work is to change the definitions of "Distance" and "Time." If the Speed (\(c\)) is locked (Invariant), then Distance and Time must become the variables. Lorentz used the Pythagorean theorem to make Space and Time "relative," or "bendable," to protect the supposed "Invariance" of \(c\). This was a "Transformation of Measurement" - an accounting trick to keep the speed of light "Invariant" (\(c\)). Lorentz built a house of mirrors, including Length Contraction and the "slowing down" of clocks (Time Dilation), to hide the fact that the Aether wasn't there. Einstein later threw away the light-carrying Aether but kept Lorentz’s "fudge factors," and thus the bendable fabric of Spacetime was born.
Notice that the speed of light had now been changed from a constant value, whose effective velocity could vary depending on the motion of the observer and source, to an Invariant value. That is something totally different. Compare the differences for yourself below:
- CONSTANT VALUE: A quantity that does not change with time or location, but which can appear different when perspective is taken into account. A constant value follows vector addition and the rules of simple Galilean velocity addition.
- INVARIANT VALUE: A quantity that remains the same for all observers, frames, or coordinate systems. Its sameness is independent of perspective, hence its description as a universal.
In short, a constant is unchanging, but its effective value is frame-dependent. An invariant value is frame-independent and universally the same - the same without exception. This is why the Sagnac Effect, which shows the speed of light follows Galilean simple velocity addition, falsifies the baseless assumption that \(c\) is Invariant. And all this - the invariance and the resulting mathematical tricks to make Space and Time relative - was to save face for a "null" result from a non-existent entity. Lorentz treated contraction and time changes as real physical distortions caused by motion through the aether, instead of using the most basic formula governing the whole universe, \(W = Fd\), to accept that if no displacement could be measured, there was no entity to cause a displacement. He chose to mathematically manipulate the very nature of reality to explain away why the aether couldn’t push or pull \(c\)! Of course, it was all for nought, as the Sagnac Effect definitively proves \(c\) is not Invariant but a constant - a vector that follows simple Galilean velocity addition. But the error got worse.
Tripling Down on a Category Error
The third domino is the "Real Estate" Victory. Albert Einstein took Lorentz’s mathematical levers of “Length Contraction” and “Time Dilation” and renamed the aether "4D-Spacetime.” This was the ultimate "Triple Down" on the initial category error. For over a century, the complexity of this math served as "Semantic Armor," making the theory appear untouchable. But the Law of Non-Contradiction is the final arbiter of reality. The Pythagorean theorem only has one right-angle relationship to offer. An idea (ϕ) and its negation (¬ϕ) cannot both be true simultaneously. Since the Sagnac Effect proves that light speed is variable (\(c \pm v\)), the right angle cannot map "Space and Time," as those factors depend on the exact opposite premise: Invariant \(c\).
Coming Full Circle
This brings us to the fourth domino: The Transformation of Energy. With the "measurement fudge" of Relativity falsified by empirical signal transmission in GPS and laser gyroscopes, the Pythagorean "Real Estate" is reclaimed, settling our duel of first principles. When two polar-opposite interpretations of reality vie for the same right-angle orthogonality, only one can win. Experimental results reveal the two legs of the Pythagorean right angle to be Rest Mass Inertia and Kinetic Inertia. We are no longer dealing with the warping of a 4D fabric, but with the mechanical resolution of orthogonal energy states. This is Exponential Dynamics - a purely 3D mechanical reality where the "Omega Factor" accounts for the non-linear resistance to motion as objects approach \(c\).
The "Category Error" is now fully exposed. Maxwell had the right number but the wrong reason; Lorentz had the right geometry but the wrong application; and Einstein had the right predictions but the wrong foundation. The key here is to realize that any and everything that follows Galilean Velocity Addition (like the Sagnac Effect) must also follow Galilean Transformations. This collapses the house of cards. The "levers" of Relativity are replaced by the simple, absolute vectors of Galilean-Newtonian Mechanics.
In the end, we return to the absolute nature of the Work Formula (\(W=Fd\)). The universe is not an incomprehensible geometric abstraction; it is a mechanical system of forces and displacements. Experimental results, from the laboratory interferometer to the orbiting GPS satellite, demonstrate that we live in a vector-based, mechanical universe. The simplicity of Grade 8 geometry remains the highest peak of scientific integrity. The detour is over, and the foundations of Newton and Galileo are restored to their rightful place as the "Source of Truth" for scientific knowledge.
The Architecture of Energy: Spinors vs. Vectors
To move from the mechanics of motion to the mechanics of the cosmos, we must distinguish between the two ways nature "packages" energy: particles with mass (matter) and electromagnetic radiation (light). This distinction is defined by their Spin - the internal geometric logic of the particle.
Defining Matter: The Tethered \(720^{\circ}\) Spinor
Matter particles, such as electrons, are Spinors defined by a \(720^{\circ}\) spin. Unlike a macroscopic object that returns to its original state after one \(360^{\circ}\) turn, a spinor requires two full rotations (\(720^{\circ}\)) to untangle its relationship with the environment. This is not a mere mathematical curiosity; it is a mechanical tethering. Imagine a particle connected to the boundaries of the universe by flexible ribbons. A \(360^{\circ}\) rotation twists these ribbons, creating a distinct "twisted" state. Only after \(720^{\circ}\) can the ribbons be untangled back to their initial state without moving the endpoints at the "edge of the universe."
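As a minimal numerical illustration - assuming the standard quantum-mechanical rotation phase \(e^{-i\theta/2}\) for a spin-1/2 particle is what the ribbon picture models - one can check that a \(360^{\circ}\) turn flips the state's sign while \(720^{\circ}\) restores it:

```python
# A spin-1/2 state picks up the phase exp(-i*theta/2) under a rotation
# by theta about z. One full turn (360 deg) flips the sign (the
# "twisted" state); only two full turns (720 deg) restore the original.
import cmath
import math

def rotation_phase(theta_deg: float) -> complex:
    """Phase acquired by a spin-1/2 state under a z-rotation."""
    return cmath.exp(-1j * math.radians(theta_deg) / 2)

print(rotation_phase(360))  # ~ -1: sign flipped after one turn
print(rotation_phase(720))  # ~ +1: restored after two turns
```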
The Finite Necessity
Standard physics dictates that for spinor mathematics to work, the particle must be "simply connected" to the entirety of space. However, "entirety" is a finite quantity - and a finite quantity can never encompass an infinite one! Infinity is unbounded. Finite structures are bounded. You cannot tether a particle to an infinite, unbounded void. Therefore, the very existence of an electron - a \(720^{\circ}\) spinor - is mechanical proof that the universe possesses a boundary. This tethering creates the Pauli Exclusion Principle, providing the structural "real estate" that allows atoms to form shells rather than collapsing, making the Periodic Table - and life itself - possible.
Defining Light: The Untethered Spin-1 Vector
Photons are Spin-1 Vectors. They require only a single \(360^{\circ}\) rotation to return to their original state. Mechanically, they are "untethered." Visualize a screw entering wood: one full rotation results in a specific forward displacement.
Because photons lack the "ribbons" of mass, their rotational energy is fully translated into forward motion in a 1:1 ratio. This untethered nature is why an unlimited number of photons can occupy the same quantum state, allowing light to overlap in concentrated beams like lasers - in contrast to the behaviour of matter particles.
Topology: The Geometry of Connectedness
To understand how these systems interact, we must look at topology - the study of geometric properties that remain unchanged under continuous deformations like stretching or bending. As defined by the topological principle, some geometric problems depend not on the exact shape of an object, but on how its parts are put together. These are global features, such as continuity and connectedness.
Think of this as geometry made of malleable clay rather than rigid rulers. In this view, a circle and a square made of clay are topologically equivalent, since both are "connected" in the same way: they both possess an inside and an outside. You can transform one into the other without cutting or gluing. This mathematical understanding will shortly come in handy.
Dimensions vs. Domains: The Absolute Foundation
While the original Galilean-Newtonian framework was established before the expansion of the universe was understood, their concepts of absolute time and space require only subtle mechanical revision. The rate of temporal processes is affected by exponential dynamics; similarly, space is variable in volume. If the universe has grown in size over time, space must be variable in volume while remaining consistent in its qualities as a separate, absolute dimension. This allows space to be "absolute" in its geometric existence while being "variable" in its physical volume due to expansion. Similarly, Time remains an absolute fundamental function, even though the rate of physical processes (like clocks) can be slowed by Exponential Dynamics.
The GNM Mathematical Toolkit
Having identified the mechanical difference between tethered matter and free-moving light, we can now look at the revised equations that govern their interaction. These are the tools that allow GNM to map the dynamics of the Cosmos, from the subatomic to the celestial:
1. The Fine Structure Constant - Alpha:
2. The Full & Revised Energy/Mass Equivalency:
3. The Omega Factor (𝛀) for Velocities \(≥0.5c\)
4. Newton's Revised Law of Universal Gravitation
BREAKTHROUGHS
The "Axis of Evil"
The "Axis of Evil" is a persistent alignment of large-scale features in the Cosmic Microwave Background Radiation (CMBR). It forms an invisible line that defines the center of the whole universe. Additionally, the CMBR’s dipole sits perpendicular to the Axis on the Ecliptic Plane. The CMBR itself is a spherical shell encompassing the entire universe. It sits behind all galaxies and proves that the universe is asymmetrical: a finite volume with both a center and a boundary. For the standard model, this is "evil" because it violates the Cosmological Principle (the assumption that the universe is the same in every direction). Since it is backed by verified data, we will dispense with the misnomer "Evil" and focus on the proper term: “The Axis.” The Axis is thus the fundamental proof of a finite, 3D universe.
Correcting a "Nobel" Mistake: The Death of Dark Energy
In 2011, the Nobel Prize was awarded for the discovery of an "accelerating" expansion, necessitating the invention of "Dark Energy." This was based on a methodological failure in 1998, where a small sample of supernovae (fewer than 80) suggested a discrepancy between Redshift (speed) and Luminosity (distance). However, this conclusion contradicted the logical pattern of a persistent decelerating expansion in the redshift data, since that data followed a "100 km/h down to 0" pattern, not a "0 up to 100 km/h" pattern. Even at first glance, this conclusion was unreasonable.
Since the light comes from a single event (a supernova), the two data points cannot be independent variables telling two different stories about that single data source. If they diverge (as they seemed to), it is an indication of a calibration error, not a reason to invent a universal anti-gravity force. If my speedometer says 100 km/h but my GPS says I've only moved 50 km, I don't assume a "New Force of Nature" is stretching my car; I assume one of my instruments is poorly calibrated.
Resolving the Calibration Mistake
A landmark 2025 study by Junhyuk Son et al., analyzing over 3,000 standard candles, corrected this calibration error. The results confirmed a persistent decelerating expansion. Without evidence for an accelerated expansion, the existence of Dark Energy - and Einstein’s Cosmological Constant (Λ) - is falsified as they violate verified experimental results. The entire premise of the standard ΛCDM model is to explain the Nobel prize winning late-time acceleration: hence, it too is falsified! A decelerating universe is incompatible with the trifecta of tenets underpinning General Relativity: isotropy, homogeneity, and the Cosmological Constant (Λ). Hence its existence is independent corroboration of the already documented falsification of all three.
The Expansion Has Stopped
Because decelerating expansion is a cumulative metric incorporating the recessional velocities of all concentric layers of the Cosmic Web, we can state definitively that this decelerating expansion has run down to a stop. We observe both the far-redshifted galaxies (the historical 100 km/h metric) and our nearby galaxies, which exhibit no redshift (the 0 km/h metric). Logically, if \(v=0\), the process is complete.
The Finite Universe & The Central Circle (The Habitable Zone)
The most direct consequence of an asymmetrical universe is that it is finite: possessing both a center and a boundary. The CMBR lies at this spherical boundary. Its encoded Axis data traces an invisible line that defines the center of the CMBR - and therefore of the universe - because the two are congruent in scale and geometric form. Remarkably, this Axis aligns with the North Ecliptic Pole and the Ecliptic Plane - orientations unique to the Earth–Sun system. The North Ecliptic Pole is a line that runs through the North and South poles of the Sun, and the Ecliptic Plane defines Earth’s orbital plane around the Sun.
This universal "crosshair" does not point to the Sun or the Earth as random locations, but specifically, when taken together: to the Earth's Habitable Zone. This confirms that the universe is a 3D Bounded Architecture that underwent a quantum mechanical "stretch" rather than an unguided explosion. In an explosion, mass determines velocity; in a data-backed stretch, location determines velocity (\(v=Hr\)). The concentric spherical layers of velocity we observe are proof of a structural stretch of the asymmetric Cosmic Web.
3D Architecture of Space: Stretching vs. Explosion
In an explosion, mass/momentum sorting occurs. On the other hand, in a Stretch, location determines velocity (\(v = Hr\)). This is a powerful way to prove that the quantum “Cosmic Web" is the active participant, not the normal matter galaxies themselves. In the universe, the recessional velocity of a galaxy is determined not by its mass, but by its distance (radius) from the center. In every direction, galaxies at the same radius move at the same velocity; regardless of their size. If the expansion were caused by an explosion, the lighter galaxies would have been propelled at the highest speeds, while the heaviest would be slower. Instead, we see concentric spherical layers of velocity that carry objects away from the Earth at the same rate, regardless of mass. A data-backed proof of a structural stretch.
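A toy calculation makes the contrast concrete. In the stretch pattern \(v = Hr\), mass never enters the formula; the sketch below (illustrative masses and distances, with an illustrative value for \(H\)) shows galaxies at the same radius receding at the same speed regardless of size:

```python
# In a stretch, recessional velocity depends only on distance (v = H*r),
# never on mass: galaxies at the same radius recede at the same speed.
H = 70.0  # Hubble constant in km/s per Mpc (illustrative value)

galaxies = [
    {"name": "dwarf",  "mass_suns": 1e9,  "r_mpc": 100.0},
    {"name": "giant",  "mass_suns": 1e12, "r_mpc": 100.0},
    {"name": "nearby", "mass_suns": 1e11, "r_mpc": 10.0},
]

for g in galaxies:
    v = H * g["r_mpc"]  # mass_suns is never used in the formula
    print(g["name"], v, "km/s")  # dwarf and giant get identical speeds
```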
Stretching Via Quantum Entanglement
Quantum entanglement creates a non-local correlation between particles regardless of distance. As established, inertia limits mass-bearing matter from reaching \(c\). Thus, for a particle with mass to move with zero kinetic energy, it must exist outside the classical laws of Inertia. A substance can change location without a physical "push" if it is controlled intrinsically through quantum entanglement. This is the distinction between traveling through space and Quantum Non-locality (the updating of internal states).
The central issue is that the expansion of the universe left no energy trail - making "Dark Energy" and "Inflaton fields" misguided placeholders. Instead, the expansion left a Dynamic Footprint. Put another way: there is one - and only one - way that the universe we see could have been stretched! The key is realizing that the quantum energy doing the stretching would have to be in a distributed state of macro quantum entanglement across all locations of coordinated stretching - simultaneously. This simultaneous, internal coordination means all entangled particles are treated as a single quantum object. Because information lacks mass and inertia, its "motion" is not constrained by the limit of \(c\), unlike normal matter. Why is this necessary?
Scales ≥ 300,000 km Necessitate Quantum Entanglement
The reason for superluminal speeds is not primarily stretching faster than the speed of light, though entanglement also satisfies that requirement. The central issue is the coordination of opposite ends of a spherical layer within the Cosmic Web. As soon as the diameter of a spherical network of halos and connecting filaments exceeded 300,000 km (the distance light travels in one second), it would be impossible to synchronize the coordinating signals. However, the stretching of the universe displays just this sort of coordination on scales vastly larger than 300,000 light years, never mind 300,000 kilometers. That requires superluminal, non-local quantum updates, and it takes care of the scenario of superluminal coordination. For superluminal motion, the same is true, as explained in this footnote.
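The light-crossing-time argument can be checked directly. This sketch (illustrative only) computes how long a \(c\)-limited signal would take to cross a structure of a given diameter:

```python
# Time for a light-speed signal to cross a structure of diameter d.
# Beyond ~300,000 km the crossing time exceeds one second; at
# cosmic-web scales it grows to hundreds of thousands of years.
c_km_s = 299_792.458      # speed of light, km/s
LY_KM = 9.4607e12         # kilometres per light-year
YEAR_S = 3.1557e7         # seconds per year

def crossing_time_s(d_km: float) -> float:
    """Seconds for a c-limited signal to cross a diameter of d_km."""
    return d_km / c_km_s

print(crossing_time_s(300_000))                   # ~ 1 second
print(crossing_time_s(300_000 * LY_KM) / YEAR_S)  # ~ 300,000 years
```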
The Identity of Space
Having established the experimental data - a mechanical, finite universe with a central habitable zone and a decelerating expansion - we must ask: since 4D Spacetime is debunked, what performs the functions of "Space" in GN Mechanics?
Defining the Scaffolding Dynamics of Space:
Space has two primary functions: to contain galaxies (acting as the scaffolding of their cosmic distribution) and to facilitate the coordinated stretching of universal expansion. GNM and General Relativity are divergent in every way but one: both are supposed to be explanations for the same phenomenon (one valid; the other falsified). Curiously, the confirmation of the Cosmic Web as Space comes from the technically flawed but directionally correct assessment of the relationship between visible matter and the Cosmic Web as defined in Einstein's own theory - a relationship independently verified through the Tully-Fisher Relation. In standard physics, Einstein’s Field Equations (EFE) describe the "shape" of space. Per John Archibald Wheeler’s famous words - “Spacetime tells matter how to move; matter tells spacetime how to curve” - the shape of Space should be concentrated wherever galactic matter is found, and nowhere else!
This is true, but for opposite reasons: the 3D geometry of the Cosmic Web dictates the distribution of matter in the universe. Nevertheless, Einstein's equations are still a good estimate of that shape. So what happens when we apply them to the universe we see? The resulting shape of "Space" is the exact 3D geometry of the Cosmic Web. Again, applying Newton’s principle of parsimony - to admit no more causes than are sufficient to explain appearances - we can reach only one conclusion: if the functional, geometric shape of Space in the universe mirrors the exact structure of the Cosmic Web, then the Cosmic Web is Space! Please confirm these facts for yourself by copying and pasting the following prompt into your favourite AI.
Consequently, the geometric asymmetry of the universe (the Axis) reflects the inherent 3D scaffolding blueprint of the Cosmic Web. This explains Earth’s central positioning: resting as it does in the central halo of the Web, it remained the stationary reference point, unaffected by the synchronized, superluminal "update" that drove the radial expansion of the outer layers.
The Mechanical Architecture of the Universe: Spinors and Photons
Pair production explains a lot. It demonstrates that the difference between matter and radiation is dictated by angular momentum, not merely energy. This distinction clarifies Einstein’s formula ( \(E = mc^{2}\)) and proves that light cannot be the mechanical source of the heavy building blocks of matter.
The Threshold of Pair Production
Pair production proves that angular momentum, not just energy, is the deciding factor in matter creation. Photons (Spin-1) can only produce electrons (Spin-1/2) because a single photon has double the "torque" of a single electron; once such a photon has enough energy, it can form an electron-positron pair. Photons, as we know, possess variable energies across the EM spectrum, with radio waves being the weakest and gamma rays the highest-energy photons - at frequencies around \(10^{21}\) Hz. Only when photons reach the gamma-ray range can they form matter - specifically, at an energy of 1.022 MeV, since the angular momentum must match the minimum energy for two electrons (0.511 MeV × 2 = 1.022 MeV). This calculation reveals that angular momentum is the deciding factor. However, photons lack the mechanical "torque" to assemble protons or neutrons, which are 1,836 times more massive. This reveals the frequency limits of the EM spectrum. All other forms of matter “creation” are a misnomer: any claim of creating protons or neutrons is always based on using preexisting matter, never light alone.
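The threshold arithmetic quoted above can be verified in a few lines, assuming the standard relation \(E = hf\) to convert the 1.022 MeV threshold into a photon frequency:

```python
# Check of the pair-production threshold: two electron rest energies,
# and the photon frequency that would carry that energy via E = h*f.
m_e_MeV = 0.511                   # electron rest energy, MeV
threshold_MeV = 2 * m_e_MeV       # 1.022 MeV for an electron-positron pair

MeV_to_J = 1.602176634e-13        # joules per MeV
h = 6.62607015e-34                # Planck constant, J*s
f = threshold_MeV * MeV_to_J / h  # ~ 2.5e20 Hz, in the gamma-ray range
print(threshold_MeV, f)
```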
Photonic Matter Production is Never NET!
But there is a bigger hurdle to light being the source of normal matter. Photons only ever produce matter in oppositely charged pairs that self-annihilate on impact, turning back into light. These pairs are called matter/antimatter pairs. That immediately defines the problem: we live in a matter universe, and pure light collisions only have the capacity to create matter in units that mutually self-annihilate. Therefore, radiation cannot be the source of the net matter universe!
Misinterpreting \(E = mc^{2}\)
Many erroneously assume that matter originates from radiation due to the Big Bang narrative and a misunderstanding of mass-energy equivalence. Einstein’s equation calculates the conversion of mass into energy during nuclear reactions, but it does not imply that all matter is simply "frozen light." In fusion, only about 0.7% of the mass is converted; in fission, a mere 0.1%. In these processes, only the Binding Energy - the mechanical tension holding the nucleus together - is converted into radiation. The vast majority of the matter remains intact as "daughter products." Logically, matter did not originate from “cooled light." Light is a disturbance on the tethers of matter; therefore, the tethers (the infrastructure) must precede the disturbance.
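The conversion fractions quoted above translate into energy via \(E = mc^{2}\). A short sketch (illustrative 1 kg fuel masses) shows how little of the mass is actually converted, with the rest surviving as daughter products:

```python
# Energy released when only a small fraction of mass converts,
# per the figures in the text (~0.7% for fusion, ~0.1% for fission).
c = 299_792_458.0  # speed of light, m/s

def energy_released_J(mass_kg: float, converted_fraction: float) -> float:
    """E = (converted mass) * c^2; the remaining mass stays as matter."""
    return converted_fraction * mass_kg * c**2

fusion_J = energy_released_J(1.0, 0.007)   # 1 kg of fusion fuel
fission_J = energy_released_J(1.0, 0.001)  # 1 kg of fission fuel
print(fusion_J, fission_J)  # ~6.3e14 J vs ~9.0e13 J; 99%+ of mass remains
```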
Matter vs. Force Carriers: The Alpha Scaling
We now address the fundamental distinction between tethered energy (Matter) and untethered energy (Radiation). Both electrons and photons receive their internal energy from the same source: the Proto-matter domain, as empirically verified through Noether’s Theorem. However, they manifest as distinct “products” of a \(1/137\) downscaling governed by the Fine Structure Constant (𝛼).
In Photons (Spin-1), this downscaling produces a perfect 1:1 ratio between their internal frequency and their forward vector velocity. Because it rotates at \(c\) it moves at 299,792,458 m/s. But how does the rotation of the photon result in forward propulsion? Think of a screw being driven into wood: the threads translate rotation directly into forward displacement. This is why a photon’s energy is expressed purely as motion - it lacks the “anchor” of mass.
In Electrons (Spin-1/2), the energy is packaged within the \(720^{\circ}\) spinor architecture. Because these particles are tethered to the universal boundary, the energy from the alpha-scale cannot translate into forward velocity. Instead, it is “translated” into Faraday tethers, where it is stored as Rest Mass.
Resolving the “Local Loop” vs. “Infinite Tethers” Paradox
We have one more paradox from the old paradigm to resolve; then all the pieces of the puzzle will fall naturally into place. Standard physics treats the \(720^{\circ}\) spinor as an abstract mathematical construct, leading to a paradox: are the tethers "local loops," or do they extend to "infinity"? GN Mechanics resolves this through topology.
All real effects have real causes! In a finite universe, a force with "infinite range" simply means the force is effective to the boundary of space. By extending the perimeter of the conceptual “local loop" to the universal boundary (the CMBR), we satisfy the \(720^{\circ}\) requirement perfectly. Topologically, a local loop and a universal tether are equivalent, but the latter provides a mechanical cause for mass, whereas the former is an abstract mathematical notion. As the universe is finite, the electron is "hooked" to a physical anchor. Mass is the tension created by these lines of force being anchored to the universal boundary.
The Origin of Mass: The Non-Arbitrary Constants
In standard physics, the values of nature's constants are treated as arbitrary "settings" with no known cause. However, in GNM, mass is recognized as the empirical result of a geometric energy downscale. When the Proto-domain’s inherent energy is scaled by α (\(≈0.007297\)) into the \(720^{\circ}\) geometry of a spinor, the resulting structural tension manifests as two fundamental, unchanging values:
- The Elementary Charge (\(e\)): \(1.602176634×10^{−19}\) Coulombs - the unit of electromagnetic tension created by the alpha-scale transition.
- The Rest Mass of the Electron (\(m_{e}\)): \(9.1093837×10^{−31}\) kg - the potential energy resultant (0.511 MeV) of that tension being anchored to the universal boundary.
The reason these figures "fit" so perfectly - and why they never change - is that they are not independent variables. They are the fixed products of the formula for Alpha (𝛼): \(\alpha = \frac{e^{2}}{4\pi\varepsilon_{0}\hbar c} \approx \frac{1}{137}\).
Because this ratio is locked, the mass and charge of the electron are fundamental. They never change.
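The locked ratio can be checked numerically from published CODATA values of the constants, using the standard definition \(\alpha = e^{2}/(4\pi\varepsilon_{0}\hbar c)\):

```python
# Computing the fine structure constant from CODATA values of the
# elementary charge, vacuum permittivity, hbar, and c:
#   alpha = e^2 / (4*pi*eps0*hbar*c) ~ 1/137.036
import math

e = 1.602176634e-19     # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 299_792_458.0        # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1 / alpha)  # ~ 0.007297, ~ 137.036
```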
Synthesis: How Inertia Becomes Mass
Factoring in the Axis resolves the longstanding mystery of mass. Newton defined inertia as an object’s tendency to maintain its state. When a force is applied to a tethered spinor, the particle resists change because it is physically anchored. The resulting tension is Inertia. Since inertia is the resistance to change, and we measure that resistance as mass, we can conclude: Inertia is Mass.
The Faraday Tethers: The "Bolt" of Reality
We have come full circle. Michael Faraday has been vindicated for his experimentally derived insight that his invisible "lines of force" were real. One hundred and fifty-nine years after his death, the man who was rejected by trained scientists for his lack of formal education and mathematical skills has been proven - by that very discipline - to be correct. Maxwell was wrong. He made a category error. Electrical and magnetic properties are relational features of their respective tethers, not intrinsic properties (permittivity and permeability) of an aether. GNM restores their proper identity.
To update the metaphor: if the Faraday Tether is a "bolt," the Photon is a "nut" threaded onto it. Just like a nut and bolt, there is a 1:1 ratio between rotation and displacement. This confirms that Matter must precede Radiation. A nut cannot spin - let alone propagate - without the bolt already being in place. Photons are merely force carriers, transporting energy between pre-existing tethered structures.
Quantum Gravity
Einstein’s General Relativity was ostensibly a theory of gravity; now that it has been superseded, we must address the mechanical reality of how gravity actually works. It turns out Newton was right all along. With the GNM adjustments for extreme motion and gravitational density, his core principles remain the bedrock of physics. The only question he left open was the mystery of "action at a distance." It was merely the technical limitations of his era that prevented him from identifying the empirical source of this dynamic.
General Relativity is famously incompatible with Quantum Mechanics. However, GNM demonstrates that gravity is inherently quantum. In fact, the quantum nature of matter is the sole source of its ability to gravitate. Though Newton acknowledged that he couldn’t identify the mechanism causing gravity, he nonetheless defined its dynamic impact.
The "Mechanical" Cause of Gravity
Gravity is a first-tier consequence of spinors providing matter with mass. To reiterate: matter requires inertia; inertia requires tension; and tension requires a universal boundary to tether to. Between a source particle and the edge of the universe, these quantum tethers interact with all other matter in their path. Because these tethers are quantum, they pass through other objects while remaining connected to the boundary:
- The Connection: An electron’s tethers extend radially to the universal boundary (the Celestial Sphere).
- The Interception: Any object in the "line of sight" between the electron and the boundary intercepts these tethers like beads on a taut string.
- The Result: The tension generated at the boundary is communicated through these "beads." This mutual, linear tautness creates an inward tug.
Mechanically, tension on a string is equivalent to an outward pull (tug of war). According to Newton’s Third law of "Equal and Opposite Reaction,” the inevitable “opposite reaction” to this outward tethering is an inward pull. If a spinor is tethered to the boundary, any object that "intercepts" that tether becomes a mechanical participant in the tension of its outward pull. The inevitable reaction of an ”inward tug" resulting from this mutual tension is what we call Gravity. If Faraday’s lines exist to propagate light (which standard physics accepts), then their gravitational pull is a mechanical certainty.
THE MOST IMPORTANT PART OF THIS WHITE PAPER: Lagrange Points
It is gravity itself that proves it cannot be a product of a curved spacetime. The common classroom demonstration of gravity uses a bowling ball on a trampoline to show how matter curves spacetime into a depression, causing a marble to "fall" toward it. The idea is that matter curves spacetime in its vicinity, and this curvature then affects how matter moves: which is the effect we call gravity. But there is an obvious problem.
Imagine two objects at opposite ends of a vast, empty universe. Standard physics agrees they would exert a gravitational pull on one another. But how is that possible if the curved spacetime is only in their local vicinity and the vast distance between them is flat? That would mean their mutual gravitational effects would only be felt once they were close enough to be in each other’s “vicinity.” Only the universe-spanning tension of Faraday Tethers can explain gravitational effects over such distances. To prove this, we must look to the "master experiment" conducted not by Newton or Einstein, but by the solar system itself: Lagrange Points.
Nature’s Defining Model of Gravity
There is a point between two bodies in space where their gravitational pull on each other cancels out. That is called gravitational balance. There are special points of gravitational balance called Lagrange Points: these are locations in space where the gravitational forces of two large bodies - and the centrifugal force of the system's rotation - balance out perfectly. Any rotating two-body system has exactly five such points. By analyzing their mechanics, we can discern if gravity is caused by “the curvature of space" or by the mechanical tension of tethers.
Gravitational Balance vs. Center of Gravity
In a system of equal masses, the balance point is exactly in the middle. In a system of unequal masses (like the Earth and Moon), the balance point must be closer to the smaller body to compensate for its weaker pull.
It is vital to distinguish the Gravitational Balance Point from the Center of Gravity. Think of a lopsided dumbbell: to lift it level, you must grab it closer to the heavy end (the Center of Gravity). However, to find the point where the two weights pull on an object with equal strength, you must move closer to the smaller end. The Lagrange Points are these points of balance, not centers of mass. You can watch this excellent video for a visual explanation.*
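The balance point described above can be computed directly. This sketch solves the static two-pull balance \(GM_{1}/r^{2} = GM_{2}/(D-r)^{2}\) for the Earth-Moon pair. (Note: the full L1 calculation also includes the centrifugal term; this isolates the balance-of-pulls idea used in the text.)

```python
# The static "gravitational balance" point between two bodies, where
# G*M1/r^2 = G*M2/(D-r)^2. Solving for r (distance from Earth) gives
# r = D / (1 + sqrt(M2/M1)).
import math

M_earth = 5.972e24   # kg
M_moon = 7.348e22    # kg
D = 384_400.0        # mean Earth-Moon distance, km

r = D / (1 + math.sqrt(M_moon / M_earth))
print(r, r / D)      # ~ 0.90 * D: far closer to the lighter Moon
```

As the text says, the balance point sits much closer to the smaller body, roughly 90% of the way from the Earth to the Moon in this simplified picture.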
Centrifugal Forces
In any rotating system, an emergent force - Centrifugal Force - pulls objects away from the center of rotation. Think of a fast-spinning merry-go-round (see Figure 11); the faster it turns, the harder you must grip to avoid being flung outward.
In the Earth-Moon system, this force acts as a "secondary gravity" on the other side of the Moon. On the far side of the Moon, the combined gravitational pull of the Earth and Moon is countered by the outward centrifugal pull of the system's rotation. This creates a point of equilibrium, not due to the presence of mass, but due to the rotational motion of the system as a whole (merry-go-round).
There are three "line-of-sight" balance points:
- L1: Between the Earth and Moon (closer to the Moon).
- L2: On the far side of the Moon (Centrifugal balance).
- L3: On the far side of the Earth (Centrifugal balance).
The Two Types of Equilibria
Lagrange Points fall into two categories of stability:
- Unstable Equilibrium (L1, L2, L3): Likened to a boulder balancing precariously on a sharp peak. The slightest disturbance - a nudge toward either mass or a crosswind - will cause the object to "roll" out of balance (see Figure 12).
- Stable Equilibrium (L4, L5): Likened to a boulder nestled at the bottom of a steep valley (see Figure 13). It is secure; if nudged, the surrounding slopes naturally push it back to the center.
Unstable Equilibrium: L1, L2, & L3
When a Lagrangian point is caused by two opposing forces, it is unstable for one of two reasons. First, if it is between the two bodies (L1) and you move it the tiniest bit towards either body, you break the balance of gravity. The second scenario has to do with L2 and L3 where the competing forces are the combined gravity of the two bodies against the counter-balance of the centrifugal force. Again, if you move the object toward either force, you will break the balance between them. Additionally, for L1, L2, and L3: any crosswind force that comes from the left or the right is destabilizing because the stabilizing force is only through one axis (the line running through both bodies).
The Geometry of Stability: L4 and L5
Stable equilibrium at L4 and L5 is caused by three competing forces arranged in an inverted "Y" formation. In the Sun-Earth system, for example, the Sun’s pull and the Earth’s pull act at a \(60^{\circ}\) angle to one another. A third centrifugal force vector acts as the "bottom leg," pulling away from the middle point. This is like a “Three-Way Tug-of-War." Because the vectors are spread out, any movement toward the Sun is countered by the Earth and centrifugal force; any movement toward the Earth is countered by the Sun and centrifugal force; and any movement toward the centrifugal force is countered by the Sun and Earth. This configuration provides Maximal Gravitational Stability.
Why Only Tethers Explain Adjacent Gravity
General Relativity claims mass bends space. However, L4 and L5 possess measurable gravitational stability in empty space where no mass is present to cause curvature. Standard physics relies on the term "Effective Potential" to describe this, but this is a mathematical label, not a physical mechanism. It describes a result without a cause. Relativity cannot explain why a "dead spot" of stability exists in a mass-less location.
Conversely, the Faraday Tether model explains this perfectly. Gravity is a second-order consequence of tether tension. In the Aharonov-Bohm Effect, an electron undergoes a phase shift because its tethers interact with a shielded magnetic field. Similarly, L4 and L5 are the inevitable mechanical result of three anchors: one centrifugal force and two radial tethers (emanating at \(90^{\circ}\)) interacting.
Gravity is Always Perpendicular to its Source: Never Tangential
Newton defined gravity as the attractive force emanating from the centers of two bodies. Standard physics defines it as acting along the line connecting the centers of mass. For spherical bodies this means the gravitational effect is always at a right angle to the surface of its source - never tangential. The presence or absence of adjacent gravity is not noticed when calculating the gravity between objects, as by definition the line connecting those objects will always be at right angles to their surfaces.
The Significance of the 60-Degree "Crosshair"
But when such gravity occurs adjacent to the objects themselves (L4 and L5), in the absence of mass, it becomes obvious that it is not the curvature of spacetime from the presence of mass that causes gravity, but the tethers emanating radially at right angles from their sources that do so! The only way to satisfy the requirement of "acting along the line connecting the centers of mass" when there is no mass (L4 and L5) is to realize that such lines (Faraday Tethers) emanate in all directions (radially) from the two interacting bodies. Only tethers emanating perpendicular to the surfaces of both bodies would intersect at a 60-degree angle on the orbital path and can thus explain the presence of gravity at L4.
There is no mass at L4 and L5, yet there is a measurable "valley" of gravitational stability there! Standard physics is forced to invent the term "Effective Potential" to describe this stability, but this is a mathematical label with no physical substrate; it is a description of a result without a mechanical cause, violating the universe’s overriding governing formula: \(W = F \cdot d\). There is no magic. Without a mechanism there can be no empirical consequence. Relativity has no physical mechanism to explain why a "dead spot" would exist at a location where no mass is present to do the curving! This is a further data-opposed detail - in addition to the falsifications of invariant \(c\), isotropy, homogeneity, and the cosmological constant (Λ) - contributing to the theory's obsolescence.
Why Tethers are the Legend of Reality
The most important structure in the universe is the Cosmic Web, for reasons we have already discussed. However, the entity with the greatest explanatory power for the majority of the phenomena we observe is the Faraday tether! Almost every interaction in our daily lives is an interaction of the electromagnetic force. Tethers are responsible for the structure of matter, the propagation of all wavelengths of light, the presence of the CMBR, and gravity. If there is one entity whose comprehension offers the clearest understanding of reality, it is Faraday Tethers!
The Omnifarious Utility of Faraday Tethers
If the concept of an endless swarm of tethers emanating from every object in the universe feels overwhelming, remember that it is already a cornerstone of established physics. We already accept that light is a disturbance upon the lines of force (Faraday Tethers) emanating from charged particles - specifically the electrons within surface materials. I am not invoking a new, conceptual swarm of ribbons to explain gravity; I am merely identifying the extended utility of the ribbons we already know facilitate sight. Here is a list of some of the features of Quantum Gravity.
The Infrastructure of Force: Gravity vs. Charge
Electric charge and gravity share similar equations because they utilize the same underlying infrastructure: the Faraday Tethers. Light is an electromagnetic disturbance along those lines; gravity is the tension that tugs the objects at either end of those lines towards each other. This is why gravity and electricity are the only two forces that diminish with the inverse square law, have infinite range (to the boundary of Space), and share a common equation structure: \(F = G\frac{m_{1}m_{2}}{r^{2}}\) for gravity and \(F = k_{e}\frac{q_{1}q_{2}}{r^{2}}\) for electric charge.
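The shared \(1/r^{2}\) structure of Newton's and Coulomb's laws can be checked numerically. This is a minimal sketch with illustrative masses and charges; the constants are the standard textbook values, not quantities derived in the text.

```python
# Newton's gravitational law and Coulomb's law share the same 1/r^2
# structure, so doubling the separation cuts both forces to one quarter.
G = 6.674e-11  # gravitational constant, N*m^2/kg^2
K = 8.988e9    # Coulomb constant, N*m^2/C^2

def gravity(m1, m2, r):
    """Newtonian gravitational force between two point masses."""
    return G * m1 * m2 / r**2

def coulomb(q1, q2, r):
    """Coulomb force between two point charges."""
    return K * q1 * q2 / r**2

r = 1.0
ratio_g = gravity(1.0, 1.0, 2 * r) / gravity(1.0, 1.0, r)
ratio_e = coulomb(1e-6, 1e-6, 2 * r) / coulomb(1e-6, 1e-6, r)
print(ratio_g, ratio_e)  # 0.25 0.25 - identical distance scaling
```

Only the prefactor (G times masses versus K times charges) differs; the geometric dependence on distance is identical.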
Defining the Dimensions of Absolute Space
Dimensions are the blueprint of reality. Earlier, under the heading “The Variability of Absolute Space and Time,” we spoke about the distinction between dimensions and domains. At that point we could not yet delve deeply into the actual blueprint of a dimension, as we still had to learn how energy is packaged in tethered particles vs. untethered wavelengths. We had to learn about topology and why the universe is finite. With that background, we can now give an expanded answer to the question: “What is a dimension?”
One Dimension (x)
Imagine the universal boundary shaped like a capped straw: a hollow cylinder capped at both ends with a diameter matching an electron. An electron in the center would have tethers extending only to the two capped ends. It could move toward or away from those ends (Width), but not up and down, or left and right. The lesson: a particle can only travel along the direction of its tethers. If the volume of the cylinder reflected the size of the universe, that would be a one-dimensional universe, since the particle is constrained to a single axis.
Two Dimensions (z)
Now, imagine the boundary is a flat sheet of paper. The electron in the center has tethers attached to the four edges. It can move left and right (Width) and forward and backward (Depth). It is now a 2D system.
Three Dimensions (y)
Finally, imagine the sheet is now a box. The electron at the center has tethers extending radially in all directions to the boundary. It can now move in a new direction: up and down (Height). Since every direction in a radial space can be described as a combination of these three axes (x, y, z), we define only the axes as the three dimensions of absolute Space. An electron is free to move in any direction where it has a tether, provided it is impelled by a net outside force.
The Boundary of Reality
A quantum entity’s parameters form the strict boundaries for normal matter. An electron cannot move past the end of its tethers; therefore, the Cosmic Web’s spherical boundary is the literal end of the universe. To the oft-asked question, "What is the universe expanding into?" the answer is: Nothing. There are no tethers - and thus no space or motion - outside the boundary. Conversely, physical matter, like rogue stars, can exist within the vast voids of the Cosmic Web (outside its network of halos and filaments) due to its tethers being attached not to the Cosmic Web itself, but to its Celestial Sphere!
Summary
- 1D (The Straw): A particle tethered only to the ends of a cylinder can only move on one axis (x).
- 2D (The Sheet): A particle tethered to the four edges of a plane can move on two axes (x,z).
- 3D (The Box): A particle tethered radially in all directions within a sphere can move on three axes (x,y,z).
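The summary above can be sketched as a toy lookup of the author's tether model (this is an illustration of the text's 1D/2D/3D scheme, not standard physics; the configuration names are invented labels):

```python
# Toy model: a particle may move only along axes spanned by its tethers.
# Axis labels (x, z, y) follow the straw/sheet/box summary above.
CONFIGS = {
    "straw (1D)": {"x"},
    "sheet (2D)": {"x", "z"},
    "box (3D)":   {"x", "y", "z"},
}

def can_move(config, axis):
    """True if the given configuration has a tether along that axis."""
    return axis in CONFIGS[config]

print(can_move("straw (1D)", "y"))  # False: no vertical tether exists
print(can_move("box (3D)", "y"))    # True: radial tethers span all axes
```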
Geometric Confirmation: The “Celestial Sphere”
In mechanics, geometry dictates dynamics. If the universe were a cube, all phenomena that currently follow inverse square laws - such as gravity, light, and sound - would instead follow inverse cube laws (\(1/r^{3}\)). The fact that these forces follow an inverse square law (\(1/r^{2}\)) is structural proof that the universal boundary is a sphere. As the Faraday tethers extend from a particle toward the spherical boundary, they must spread across a surface area that grows with the square of the radius (\(A = 4\pi r^{2}\)). Since the number of tethers is a fixed constant, their density (the strength of the pull) must dilute by exactly \(1/r^{2}\). This 3D radial symmetry - the geometry of the sphere - is the structural proof of the law.
Why Gravity Follows the Inverse Square Law
We do not need to assume the inverse square law as a "brute fact"; GNM derives it from spherical geometry. A particle radiates a fixed number of Faraday spinors toward the spherical universal boundary. As these tethers extend outward, they must spread across an ever-increasing area. To understand why doubling the distance results in a four-fold increase in surface area, we look at the geometric scaling of the surface area formula for a sphere: \(A = 4\pi r^{2}\).
The Mechanical Relationship
The reason for the “\(4x\)" jump lies in the exponent. Because the radius (\(r\)) is squared, any change to the radius is also squared in the final result.
- At Radius (\(r\)): The area is simply \(4\pi r^{2}\).
- At Double Radius (\(2r\)): We substitute \(2r\) into the equation: \(A_{new} = 4\pi(2r)^{2}\).
- The Resolution: When you square the quantity \(2r\), you must square both the number and the variable, so that \(A_{new} = 4\pi(4r^{2})\) becomes \(A_{new} = 16\pi r^{2}\).
The Comparison
Comparing the new area (\(16\pi r^{2}\)) to the original area (\(4\pi r^{2}\)), we see that 16 is exactly four times greater than 4. Geometric dilution means the total quantity of Faraday Spinors emanating from a particle is a fixed constant. As these physical lines of force travel outward, they do not "multiply"; they simply spread out to cover the surface of the sphere at that specific radius. Because the same number of tethers must now cover an area that is four times larger, the density of those tethers (the number of tethers passing through any square inch of that sphere) must drop to one-fourth (1/4) of the original density.
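The dilution arithmetic above can be verified in a few lines. This is a direct sketch of the sphere-area argument; the tether count of 1000 is an arbitrary illustrative number.

```python
# Geometric dilution: a fixed number of tethers spread over a sphere's
# surface, so their areal density falls as 1/r^2.
import math

def sphere_area(r):
    """Surface area of a sphere of radius r: A = 4*pi*r^2."""
    return 4 * math.pi * r**2

def tether_density(n_tethers, r):
    """Tethers per unit area at radius r, for a fixed total count."""
    return n_tethers / sphere_area(r)

r = 1.0
print(sphere_area(2 * r) / sphere_area(r))  # 4.0: area quadruples
print(tether_density(1000, 2 * r) / tether_density(1000, r))  # 0.25
```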
A Summary of Key Details
To summarize, the Axis allows us to define all four forces of nature in a cohesive way. They all come into existence through the fine structure constant scaling proto-matter to a threshold that makes the advent of all four forces possible. The normal matter regime then enables quantum behaviour at atomic and subatomic scales. However, aggregating atoms into larger and larger configurations makes them decohere into classical objects above that scale, due to the left-handed asymmetry of normal matter. Additionally, normal matter requires the prerequisite of proto-matter to exist. Without a boundary upon which to attach Faraday spinors, there would be no tension, and thus no inertia, and thus no mass.
Additionally, Alpha is a proportionality constant: logically then, it must have source material that is scaling down to produce the resulting normal matter! So, proto-matter is the source of normal matter, in turn, particles with mass are the source of photons, since light is a disturbance upon Faraday lines of electrical force. Thus the character of the universe is defined by the nature of proto-matter. The shape, size, matter distribution, and expansion history of the universe are all reflections of the invisible, underlying structure of the proto-matter Cosmic Web - or Space!
Independent But Complementary Mechanical Layers
Space, and the tethers of matter that are attached to its boundary, are independent mechanical layers of the universe. Space is responsible for the recessional velocities of all galaxies, while Faraday tethers, through gravity, are responsible for the cohesion of intra-galactic matter. Note that the two dynamics operate in opposite directions: stretching expands outward; gravity attracts inward. The sequential outward stretching of concentric spherical layers of the one-piece network of halos, filaments, and voids - with the outermost layer moving fastest and the innermost layer moving slowest - defined the decelerated expansion we observe empirically at present. The fact that we can observe the redshifting of far-away galaxies does not mean the stretching (expansion) is still underway, as we are effectively looking back in time (into the far distant past).
True Nature of the CMBR
The CMBR is not a “relic” of a Big Bang. Regarding its true nature, consider just two elementary facts we have learned: light is a disturbance on the preexisting tethers of matter; and inner spherical layers of the Cosmic Web cannot leapfrog outer layers - as per the empirical proof defining the Hubble Flow! Since the CMBR reaches us from behind all galaxies, it must lie beyond all such galaxies - at the very edge of the universe. We then add the constraint that its light must be a disturbance of tethers originating from that location, since the rays are “incoming.” The obvious conclusion is that the CMBR is a unique physical light source positioned at the very perimeter of Space, whence it continuously emits signals to the interior of the cosmos.
The Significance of the Axis
What makes all these novel conclusions logical? The Axis. Everything we know about the universe we have learned from light. And one of the most significant insights gained from the only blackbody radiation source in the universe (the CMBR) is the Axis. As we have seen, all data falls into place once its empirical validity is untangled from hatred of consensus. When one starts with the right premise, the correct conclusions are - inevitable! Here is a summary of the eight pillars of Galilean-Newtonian Mechanics.
GNM Memory Palace
The entire GNM architecture and its dynamics (list of 8 pillars) prove experimentally that the universe is defined by work - not magic. They can be easily recalled through the simple memory palace of a subway station. To visualize the GNM architecture, imagine the mechanical order of a subway station. The Train Tracks represent the invisible substrate of Space - the proto-matter Cosmic Web that serves as the universe’s structural foundation. The Train symbolizes the Faraday Tethers; just as a train requires the infrastructure of the station to function, these tethers must anchor to a universal boundary before physical matter (spinors) can exist. Together, these tracks and tethers constitute the universe’s essential hardware.
The "software" of this system is Light - the Conductor’s Whistle - which serves as our primary source of empirical knowledge. Because light is a mechanical disturbance upon pre-existing tethers, it is the third entity to appear, requiring both Space and Tethers for its propagation. The CCTV System represents the CMBR; far from a "relic" afterglow, it is a persistent light source at the boundary that continuously records cosmic data, from the Initial Conditions to the current orientation of the Sun. Despite being the largest structure in existence, this hollow "Celestial Sphere" exerts no gravitational pull on its contents, satisfying Newton’s Shell Theorem through tether-mechanics. Finally, the Subway Map with its "You Are Here" arrow represents the Axis - the geometric record of matter distribution that provides the absolute reference for our central location within the cosmos.
The Algorithm of Truth
Realize that all the ideas in GNM - from lines of force, to \(720^{\circ}\) spinors, to the Axis, to the topology of tethering to the edge of the universe, to Noether’s theorem, to balancing asymmetries, among many others - are borne out of accepted physical theory! The only novelty is in how the pieces are fit together to reveal new breakthroughs and attain actual understanding at the fundamental level of reality. The Algorithm of Truth represents a new way of doing science: using AI to synthesize the abundance of existing data into cohesive, working models (scientific breakthroughs) that require critical human input for the vision, analysis, and final judgment.
The Empirical Architecture of GN Mechanics
Next, we summarize the major milestones of Galilean-Newtonian Mechanics through 24 bullet points divided into 4 sections: The Substrate; The Two Domains; The Architecture; and The Dynamics.
PHASE 1: The Substrate (The "Bottom-Up" Mechanics)
Points 1 - 5 establish the physical tethers, the necessity of a boundary, and the new definition of measurement.
1) SPINORS
\(720^{\circ}\) spinor mathematics has two requirements: spinors must attach to a boundary to function, and the boundary cannot be nearby or of arbitrary length. In GNM, this boundary is the edge of Space, not "actual" infinity as there is no such thing, and infinity precludes a functional boundary surface.
- Proof: The Aharonov-Bohm Effect proves electron tethers are real through Empirical Consequences. While invisible, we measure their interaction with a shielded magnetic field via the shifting interference pattern of the electron.
- Orientation: The Stern-Gerlach Experiment shows that atoms have an ingrained up-down orientation. This is because all tethered half-spin particles are attached to the same boundary, providing a common North-South orientation relative to that boundary.
2) MECHANICS OF TENSION: Atwood Machine + Equilibrium Experiments
Experiments with tension (strings) establish that a string under tension communicates that tension to any object intercepting it. Gravity is the mutual "tug of war" between particles intercepting each other's tethers.
3) NECESSITY OF A BOUNDARY: The Axis
The Axis proves the universe is finite, with a center and an edge. But it doesn’t give empirical proof of the geometry of the boundary.
4) INVERSE SQUARE LAW: Geometry Dictates Function
Inverse Square Law: The \(1/r^{2}\) drop-off in gravity is structural proof that the universal boundary is a Sphere.
5) New Definition of Observation: “Empirical Consequences”
The mechanics of universal interactions follow the formula for Work, \(W = F \cdot d\): a force (\(F\)) must act across a measurable distance (\(d\)) to produce work (\(W\)). That “measurable distance” is the Empirical Consequence. In turn, the empirical consequence defines all scientific observation. Its causes can be both visible (classical mechanics) and invisible (quantum mechanics). Both are real!
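The work criterion above can be illustrated numerically. This is a minimal sketch with made-up force and displacement values, included only to show the formula in action:

```python
# Worked example of W = F . d: work is the dot product of a force
# vector and a displacement vector.
def work(force, displacement):
    """Dot product of force (N) and displacement (m), giving joules."""
    return sum(f * d for f, d in zip(force, displacement))

# A 10 N force along x moving an object 3 m along x does 30 J of work.
print(work((10.0, 0.0, 0.0), (3.0, 0.0, 0.0)))  # 30.0
# Zero displacement means zero work - no measurable consequence.
print(work((10.0, 0.0, 0.0), (0.0, 0.0, 0.0)))  # 0.0
```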
PHASE 2: The Two Domains (The "Source of Matter")
Points 6 - 9 identify Proto-Matter and the Fine Structure Constant (α) as the scaling factor.
6) ALPHA SCALING:
The Fine Structure Constant (\(\alpha \approx 1/137\)) is the scaling factor between Proto-matter (Dark Matter) and Normal Matter.
7) PROVING MATTER PRECEDED RADIATION:
CMBR data proves matter was already present at the "Initial Conditions" before light was emitted.
8) EM SPECTRUM + PARTICLE ACCELERATORS:
The EM Spectrum is sufficient only to create simple electron pairs. Moreover, light only creates matter in matter/anti-matter pairs - never in net quantities - contradicting any claim that it produced the matter of the universe.
9) NOETHER’S THEOREM:
Verifies that Dark Matter has a right-handed asymmetry to balance the left-handedness of normal matter, thus empirically defining it as Proto-matter. Thus, while radiation serves as the binding energy of normal matter in the cosmos, normal matter itself is the product of Alpha-downscaling proto-matter.
PHASE 3: The Architecture (The "Container")
Points 10 - 15 define the spherical, finite geometry and the CMBR as a physical shell.
10) THE AXIS
Confirms a center (Habitable Zone) and a spherical limit.
11) AXIS & STRETCHING:
The asymmetrical distribution of galaxies reflects the invisible 3D blueprint of the Cosmic Web’s network of halos and connecting filaments.
12) GEOMETRY OF SPACE:
Recessional velocities and gravity follow the inverse square law, confirming a spherical boundary. Internally, the Cosmic Web consists of halos and filaments, but at its perimeter it is a continuous spherical envelope. This is visually confirmed by the CMBR (Celestial Sphere).
13) CMBR AS A PHYSICAL SHELL AT THE "CELESTIAL SPHERE":
Light is a disturbance on Faraday Tethers. Incoming radiation from the edge of Space (behind all galactic matter) proves there are incoming tethers. Thus, the CMBR must be its own physical light emitting structure. Furthermore, while starlight approximates a blackbody, the CMBR is a true blackbody, marking it beyond all doubt as a distinct physical light source at the edge of Space.
14) OBSERVABLE VS. ACTUAL:
Since the size of the universe is determined by receding light, and the CMBR defines the "limit," the "Observable Universe" is not a slice of infinity; it is the actual finite limit of the entire universe.
15) NEWTON’S SHELL THEOREMS:
Explains why the massive CMBR shell has zero gravitational pull on the objects inside it; while internal point sources (objects within galaxies) gravitate toward each other.
PHASE 4: The Dynamics (The "Stretch & Signal")
Points 16–24 address expansion, \(c \pm v\), and the 100-to-0 deceleration.
16) EXPANSION AS A STRETCH:
Galactic velocity depends on distance from the center (\(v = Hr\)) - not on mass, as it would in an explosion - proving a structural Stretch.
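The distance-proportional law \(v = Hr\) can be sketched in a few lines. The value of \(H_0 \approx 70\) (km/s)/Mpc is an assumed illustrative figure, not a number taken from the text:

```python
# Sketch of the distance-velocity relation v = H * r: velocity grows
# linearly with distance, as in a uniform stretch.
H0 = 70.0  # assumed illustrative value, km/s per megaparsec

def recessional_velocity(r_mpc):
    """Recessional velocity (km/s) at a distance r in megaparsecs."""
    return H0 * r_mpc

# Doubling the distance doubles the velocity - unlike an explosion,
# where each fragment's velocity would depend on its mass and impulse.
print(recessional_velocity(100.0))  # 7000.0 km/s
print(recessional_velocity(200.0))  # 14000.0 km/s
```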
17) SPACE GEOMETRY DICTATES MATTER DISTRIBUTION:
The Cosmic Web scaffolding determines where galaxies end up; not the other way around!
18) DECELERATION (JUNHYUK SON ET AL. 2025):
New calibration proves universal expansion was a smooth deceleration over time; not an acceleration. Dark Energy is debunked.
19) QUANTUM ENTANGLEMENT:
Due to the 300,000 km/s signal-transmission limit of light, superluminal coordination of expansion could only happen via non-local quantum "updates" of the Cosmic Web.
20) TULLY-FISHER CONGRUENCE:
Spacetime has been falsified: galactic rotation and matter distribution match 3D Cosmic Web geometry, as shown by EFE calculations for a non-existent Spacetime. This empirically defines the Cosmic Web as Space.
21) SAGNAC EFFECT:
Empirically falsifies Invariant \(c\) and restores Galilean Velocity Addition (\(c\pm v\)).
22) VARIABLE \(c_{local}\):
Experimental results (Muon decay, GPS) prove that under high inertia, the Fine Structure Constant scales \(c_{local}\) downward, preventing matter from reaching the speed limit of light.
23) 100-TO-0 METRIC & OBSERVATION LAG:
Expansion has ceased. Distant redshift is merely "Observation Lag" from a finished process.
24) THE ULTIMATE TEST OF THE GNM MODEL:
Perspective from the center (Earth/Sun) and from the boundary (CMBR Axis, Dipole, and Ecliptic Plane) agree: the universe is finite and mechanical. GNM explains all observational data without "extra" parameters. Thus, redshift is Doppler, not “Cosmological.” This means the observed Hubble Flow is due to Earth really being at the center of the universe, with its decelerated expansion occurring via the stretching of an asymmetric Cosmic Web. You can confirm for yourself that this is a working model of the universe by copying and pasting the following prompt into your favourite AI.
CONCLUSION: “One More Thing …”
The significance of this paper lies not merely in disproving the standard model of cosmology - the facts do that adequately. It is in mathematically identifying the source of the supplanting of verifiable truth - and by debunking it - re-establishing the incontestable, universal nature of Absolute Truth.
By demonstrating the robust link between objectively real phenomena and their mathematical mappings - such as the emergent duality of inertial values resolved via the Pythagorean theorem; the universal particle symmetry found in Noether’s theorem; the unification of spinor mechanics with Faraday’s lines of force; and the identification of the Cosmic Web as the functional dimension of Space - we have shown that mathematics can indeed be an accurate descriptor of fundamental realities. These realities encompass the full range of truth, from the technical to the absolute. Since empirically derived mathematics is a map of reality, this also means it is discovered - not invented!
The Death of Perspective
When General Relativity offers two different perspectives, it claims both are equally valid. In contrast, GN Mechanics asserts that absolute truth is singular. For any perspective to align with the Truth, it must incorporate a simple vector relationship (simple velocity addition) to that Truth. Since an idea \(\phi\) and its negation \(\neg\phi\) cannot both be true simultaneously, treating opposing views as simultaneously valid - without this absolute anchor - destroys certainty, and with it, authority.
"Certainty" here refers to empirical knowledge. The most pernicious consequence of the relativistic era was the erosion of objective Truth. For over 110 years, many have wondered where the erosion of objective certainty came from. Was it the rise of feminism? Was it the end of colonialism? Was it the extension of voting rights to minorities? Was it a downstream effect of mass migration? From whence, the diversity of standards? Its source was not cultural, but a “Category Error”: treating a Variable entity (\(c\)) as an Invariant universal constant.
A Unified Truth Framework
For the first time in history, mankind can establish absolute, empirical Truth. We now possess a cohesive, two-domain framework that encompasses every dynamic in the universe: from the “push” of entropy-inducing Newtonian mechanics, to the entropy-free, “internal updates” of macro-quantum mechanics (quantum non-locality).
This unification underscores that the Cosmic Web's macro quantum mechanical architecture defines the dynamics of the universe: from the formation of normal matter, to decelerated universal expansion, to a new class of scientific “measurement.” “Observation” is defined not as “only detectable to human senses,” but as the measurement of any effect with Empirical Consequences.
The dynamic duality of GNM renders it universally applicable since no phenomenon in the universe is so obscure that it falls outside the reach of both classical and macro-quantum logic. Thus equipped, the GNM framework becomes the “Rorschach test” against which the validity and underlying reasoning of all theoretical schema can be empirically tested.