
VARIABLE PLANCK SCALE MODEL

By: Paul Hoiland

Special thanks to the online Yahoo group Stardrive, formerly known as ESAA, to Fernando
Loup, and to the many others who have aided this creative research.

ABSTRACT:

Cosmological theories and theories of fundamental physics must ultimately account not only
for the structure and evolution of the universe and the physics of the fundamental
interactions, but also for why this particular universe follows the physics that it does. Such
theories must lead to an understanding of the values of the fundamental constants
themselves. Moreover, an understanding of the universe has to use experimental data from
the present to deduce the state of the universe in the distant past and also account for
certain peculiarities or coincidences that are observed.

The prevalent view today in cosmology is the big bang, inflationary evolutionary model.
Although certain problems remain, e.g. the need to postulate cold dark matter in amounts
much larger than all the observable matter put together (dark matter not detected so far in
the laboratory), or the recent need to re-introduce the cosmological constant, big bang
cosmology has nevertheless achieved impressive results (Silk 1989). Recent observational
evidence (Barrow and Magueijo 1998) also seems to be consistent with a time-varying fine
structure constant α = e²/(ħc). A varying speed of light theory (with ħ ∝ c) has also been
proposed by Albrecht and Magueijo (1998). Added to this, we are confronted with a
Pioneer slowdown that seems to require some modification to General Relativity, and with
evidence of higher abundances of certain elements than the standard model can account
for. These mixed messages could be resolved by one common model that stems from both
the Dutch equation [1] and Fernando Loup's hyperdrive model [2], which was based upon
that equation.

BACKGROUND:

Fernando Loup has, in several published articles, offered a possible theoretical method of
star travel via hyperspace drawn from modern brane theory. Part of this model involves the
use and implications of a certain Dutch equation in which the Planck scale is treated as a
variable in size. A compact extra dimension has a completely different effect on the
Newtonian force law. In a D-dimensional space with one dimension compactified on a
circle of radius R, with an angular coordinate that is periodic with period 2π, the line
element becomes

ds² = η_μν dx^μ dx^ν + R² dθ²

The force law derived from the potential that solves the Laplace equation becomes

F(r) ∝ 1/r^(D−2)

where D counts the noncompact space-time dimensions. With three noncompact space
dimensions, D = 4 and D − 2 = 2, so the force law is still an inverse square law. The
Newtonian force law only cares about the number of noncompact dimensions. At distances
much larger than R, an extra compact dimension cannot be detected gravitationally by an
altered force law.
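
As a quick illustration of that last point, here is a minimal numerical sketch (my own, not
taken from the references) of a force law falling off as 1/r^(D−2): at separations much
larger than an assumed compactification radius R only the four noncompact space-time
dimensions count, while far below R the full five-dimensional law would apply. All values
are arbitrary placeholders.

    # Newtonian force in D noncompact space-time dimensions (arbitrary units).
    def force(r: float, D: int, G: float = 1.0, m1: float = 1.0, m2: float = 1.0) -> float:
        return G * m1 * m2 / r ** (D - 2)

    R = 1e-6  # assumed compactification radius, arbitrary units

    for r in (1e-8, 1e-4, 1e-2):
        D = 5 if r < R else 4        # which regime the separation probes
        print(f"r = {r:.0e}  (D = {D}):  F = {force(r, D):.3e}")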

But you might be asking how this extra space manages to act as though its horizon is set at
the horizon of our universe. Part of the answer lies in its own local velocity of light. If that
velocity crosses its own universe in one second, then in essence, as you shrink that universe
in volume, one still has a lightcone extending far further than our own. When you try to
compare the two frames, even though C is a constant within any one frame, the value of C
in one frame remains different from that in the other. The result is that any information
carried from our space-time through it seems to transfer non-locally due to the difference
in our measuring rods, while information transferred from hyperspace to here is forced to
remain local, so that we only get a fraction of the total information.

This is where the difference between quantum-derived expectation values for the ZPF and
observed values comes into play. Quantum theory deals with the Planck scale. By nature it
measures values from this external frame of reference and derives answers that do not
equal those based upon observation. If one knows the actual velocity of C within
hyperspace, one can reduce those answers back to our observed ones simply by dividing
them by that value of C there. That leads one to assume that the local velocity of C is some
120 orders of magnitude higher in hyperspace than here. Such a large velocity would seem
infinite as far as localized lab experiments go. But if we could perform quantum
information transfer via entanglement over a very large distance, then one could detect
that actual local value of C in hyperspace. Dirac waves transfer through hyperspace the
same as they do here in the model I have proposed. The difference is in the wavelength
spread due to the much faster local velocity of C. The energy spectrum is simply spread
out to the point that we can only measure a small fraction of its total energy per Planck
unit here. That is why we observe an energy for the vacuum some 120 orders of magnitude
smaller than theory predicts. Its actual energy is the higher value, but we only see part of
the picture due to the wave-function spread. The only thing required to solve this quantum
problem is the acceptance of a two-reference-frame system instead of one.
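
The arithmetic behind that claim is a single division, shown in the toy sketch below (my
own illustration); the 10^120 ratio is this paper's order-of-magnitude estimate, not a
measured quantity.

    # Toy arithmetic only: divide the Planck-scale vacuum-energy prediction by an
    # assumed hyperspace-to-local ratio of C of about 1e120.
    predicted_to_observed = 1e120   # the usual "120 orders of magnitude" vacuum-energy gap
    assumed_c_ratio = 1e120         # assumed ratio of C in hyperspace to C measured locally

    residual_gap = predicted_to_observed / assumed_c_ratio
    print(f"residual discrepancy after dividing by the assumed C ratio: {residual_gap:.1e}x")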

The effect of adding an extra compact dimension is more subtle than that. It causes the
effective gravitational constant to change by a factor of the volume 2πR of the compact
dimension. If R is very small, then gravity is going to be stronger in the lower-dimensional
compactified theory than in the full higher-dimensional theory.

So if this were our Universe, then the Newton's constant we measure in our noncompact 3
space dimensions would have a strength equal to the full Newton's constant of the total
4-dimensional space, divided by the volume of the compact dimension. The actual internal
volume of hyperspace is set by its lightcone horizon. In hyperspace all four forces (strong,
weak, EM, and gravity) are equal, but their transfer into our noncompact 3 space
dimensions alters them so that they all look different.
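
A minimal numerical sketch of that statement, with arbitrary placeholder values for the
higher-dimensional constant and the compactification radius, is:

    import math

    G5 = 1.0e-40                    # hypothetical higher-dimensional gravitational constant
    for R in (1e-3, 1e-6, 1e-9):    # candidate compactification radii (arbitrary units)
        G4 = G5 / (2.0 * math.pi * R)   # effective constant in the noncompact dimensions
        print(f"R = {R:.0e}:  G4 = {G4:.3e}  (smaller R -> stronger effective gravity)")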

This leads to the issue that quantum information is different from normal information, yet
in its own frame it is the same. In theory, normal information could be sent through
hyperspace. But to get the correct picture of that information, so as to restore it correctly,
we would have to measure the return over a far longer time period. What we would get is
just bits of the information that we would have to add together to get the whole message.
In essence every EM signal ever sent out has traveled through hyperspace, but we only get
the results back in a limited fashion here because of the frame difference. In essence those
signals traveled ahead in time all the way to their course's end in a fraction of a second
there, while we only arrive at that point here at a much slower time rate.

Consider a 5-dimensional space-time with space coordinates x¹, x², x³, x⁴ and time
coordinate x⁰, where the x⁴ coordinate is rolled up into a circle of radius R so that x⁴ is
identified with x⁴ + 2πR.

Suppose the metric components are all independent of x⁴. The space-time metric can then
be decomposed into components with indices in the noncompact directions (signified by
a, b below) or with indices in the x⁴ direction:

g_MN → ( g_ab , g_a4 , g_44 )

The four g_a4 components of the metric look like the components of a space-time vector
in four space-time dimensions that can be identified with the vector potential of
electromagnetism, A_a ∝ g_a4, with the usual field strength

F_ab = ∂_a A_b − ∂_b A_a

The field strength is invariant under a reparametrization of the compact x⁴ dimension via

x⁴ → x⁴ + λ(x^a),   under which   A_a → A_a + ∂_a λ,

which acts like a U(1) gauge transformation, as it should if this is to behave like
electromagnetism. This field obeys the expected equations of motion for an
electromagnetic vector potential in four space-time dimensions. The g_44 component of the
metric acts like a scalar field and also has the appropriate equations of motion.
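
The gauge invariance just quoted can be checked mechanically. The short SymPy sketch
below (my own check, using arbitrary test functions) verifies that F_ab is unchanged when
A_a is shifted by the gradient of a reparametrization function λ:

    import sympy as sp

    t, x, y, z = sp.symbols('t x y z')
    coords = (t, x, y, z)

    # Arbitrary vector-potential components and gauge parameter (hypothetical test functions)
    A = [sp.Function(f'A{i}')(*coords) for i in range(4)]
    lam = sp.Function('lam')(*coords)

    def field_strength(A):
        return [[sp.diff(A[b], coords[a]) - sp.diff(A[a], coords[b])
                 for b in range(4)] for a in range(4)]

    F_before = field_strength(A)
    F_after = field_strength([A[a] + sp.diff(lam, coords[a]) for a in range(4)])

    # Mixed partial derivatives of lam commute, so every component agrees
    assert all(sp.simplify(F_before[a][b] - F_after[a][b]) == 0
               for a in range(4) for b in range(4))
    print("F_ab is unchanged under A_a -> A_a + d_a(lambda)")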

In this model a theory with a gravitational force in five space-time dimensions becomes a
theory in four space-time dimensions with three forces: gravitational, electromagnetic,
and scalar. But the idea that Dirac waves can carry through hyperspace also suggests in
itself that there is more than one extra dimension at play here.

While the solution for the scalar field φ smoothly interpolates between the two attractor
solutions, the function A(r) is singular: it behaves as log |r| as |r| → 0. The metric near the
domain wall is given by:

ds² = r² dx^μ dx^ν η_μν + dr²

This implies the existence of a curvature singularity at r = 0, which separates the universe
into two parts corresponding to the two different attractors, each with its own respective
space-time. The relevant equation of motion for the interpolating scalars in the
background metric is:

φ″ + (4A′ + (g_,φ/g) φ′) φ′ + 6 g⁻¹ P_,φ = 0

where at the critical points P_,φφ is positive. If we assume that the solution of this
equation asymptotically approaches an attractor point φ_cr at large |r|, so that g and g_,φ
are then constant, then A′ becomes a negative constant and φ′ gradually vanishes at large
|r|. The deviation δφ of the field φ from its asymptotic value φ_cr at large |r| then satisfies
the following equation:

δφ″ − 4|A′| δφ′ = −6 |g⁻¹ P_,φφ| δφ

This is the equation for a harmonic oscillator with a negative friction term −4|A′| δφ′.
Solutions of this equation describe oscillations of δφ whose amplitude blows up at large
|r|. But I think the solution to this runaway inflation at large |r| is exactly the one found
with the variable Planck scale, where at large |r| the universe simply recycles, finding
itself back in the enlarged Planck-scale state it started with, because the far side of the
harmonic oscillator at large |r| equals the initial stage of rebound in the first place.
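
A minimal numerical sketch of that anti-damped behaviour is given below; it simply
integrates δφ″ − 4|A′| δφ′ + 6|g⁻¹P_,φφ| δφ = 0 with arbitrary constant coefficients of my
own choosing and shows the amplitude growing with |r|.

    import numpy as np

    A_prime = 0.05            # |A'|, taken constant near the attractor (arbitrary value)
    omega_sq = 6.0            # 6 |g^-1 P_,phiphi|, arbitrary positive constant

    dr = 1e-3
    r = np.arange(0.0, 50.0, dr)
    dphi = np.zeros_like(r)
    vel = np.zeros_like(r)
    dphi[0] = 1e-3            # small initial deviation from the attractor value

    for i in range(len(r) - 1):
        acc = 4.0 * A_prime * vel[i] - omega_sq * dphi[i]   # negative friction drives growth
        vel[i + 1] = vel[i] + acc * dr
        dphi[i + 1] = dphi[i] + vel[i + 1] * dr

    print(f"max |dphi| for r in [0, 10):  {np.max(np.abs(dphi[r < 10])):.3e}")
    print(f"max |dphi| for r in [40, 50): {np.max(np.abs(dphi[r >= 40])):.3e}")   # much larger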

As a further solution to the problems presented above we will first look at the Planck
scale itself. The Planck scale can be written as a function of some very well known
constants, in an expression obtained by a research group at the University of Amsterdam,
Holland [1]. In the Dutch equation

R = 4π² ε₀ G ħ² μ₀ / e₀

G = 6.67 × 10⁻¹¹ N m²/kg², ħ = (6.626/2π) × 10⁻³⁴ J s, e = 1.6 × 10⁻¹⁹ C,
μ₀ = 4π × 10⁻⁷ H/m, and ε₀ = 8.854187817 × 10⁻¹² F/m yield the known present vacuum
state. If one allows the value of ε₀ to have varied over the history of the cosmos, one finds
that the Planck scale would vary as well: small at the BB stage, increasing with time as the
universe expands and the local energy density lowers, and eventually becoming large
enough to start the whole cycle over. At the BB stage the effect here would be the same, in
a forward-time fashion, as the compactification process of String Theory, making the
Planck scale itself equal to this hidden extra-dimensional set.

An interesting comment can be made about the properties of the excitations around the
two gauged-theory vacua. The gravitino mass near one critical point is positive,
M⁻_grav = Z⁻ > 0, and the one near the second critical point is negative, M⁺_grav = Z⁺ < 0,
since its value at each critical point is the value of the central charge. This would lead to
an explanation of the different values of C in hyperspace compared to our 3-brane. It
includes a change of the γ^μ matrices into −γ^μ, as well as of the representation of the
SO(4) part of the Lorentz group. We need both versions of the theory to make acceptable
not only the vacuum state but also the excitations around each vacuum. We also need that
rebound energy state from Loop Quantum Gravity to account for why inflation took place
and why there would never be a singularity in our model. It is the large |r| and the local
3-brane decrease in energy density that allow the Planck scale to increase in volume and
force a recycle stage prior to runaway inflation or deflation on either side of the oscillator.
It is also the divergence at large |r| that accounts for the accelerated expansion seen in
observation.

But the above also supplies a solution to the Pioneer slowdown problem. If C can vary as
the Planck scale varies from region to region, then it is not General Relativity that needs
to be modified at all. It would also account for why this slowdown seems to point
sunward, since the Sun has the highest mass density in our local area.

THE MODEL:

The Friedmann-Lemaître-Robertson-Walker (FLRW) metric:

ds²_FLRW = −dt² + a²(t)[dr²/(1 − kr²) + r²(dθ² + sin²θ dφ²)]

describes a homogeneous and isotropic universe. Here t is cosmological time, (r, θ, φ) are
comoving coordinates, a is the scale factor and k = 0, ±1 the curvature index. The proper
radial distance is defined as ar. We consider FLRW branes with k = 0 and brane
cosmological constant Λ, embedded symmetrically. The bulk is the Vaidya-anti de Sitter
space-time with cosmological constant Λ̃, and it contains bulk black holes with masses m
on both sides of the brane. The black hole masses can change if the brane radiates into the
bulk. An ansatz compatible with structure formation has been advanced for the Weyl fluid
term m/a⁴ in the case when the brane radiates, m = m₀ a^α, where m₀ is a constant and
α = 2, 3. For α = 0 the Weyl fluid is known as dark radiation, and the bulk space-time
becomes Schwarzschild-anti de Sitter. The brane tension λ and the two cosmological
constants are inter-related as

2Λ = κ² λ + κ̃² Λ̃

The Friedmann equation relates the Hubble parameter to Λ, m, the scale factor a and the
matter energy density ρ on the brane:

H² = Λ/3 + (κ²ρ/3)[1 + ρ/(2λ)] + 2m₀/a^(4−α)

It is normally assumed that in the matter-dominated era the brane is dominated by dust,
obeying the continuity equation

ρ̇ + 3Hρ = 0

which gives ρ ∝ a⁻³.
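
A minimal numerical sketch of this modified Friedmann equation, with dust ρ ∝ a⁻³ and
purely illustrative parameter values (these are arbitrary placeholders, not fits to data), is:

    def hubble_squared(a, Lambda=0.7, kappa2=1.0, rho0=0.3, lam=1e3, m0=1e-4, alpha=0.0):
        """H^2 = Lambda/3 + (kappa^2 rho/3)(1 + rho/(2 lam)) + 2 m0 / a**(4 - alpha)."""
        rho = rho0 * a ** -3                      # dust on the brane
        return (Lambda / 3.0
                + (kappa2 * rho / 3.0) * (1.0 + rho / (2.0 * lam))
                + 2.0 * m0 / a ** (4.0 - alpha))  # Weyl fluid (dark radiation for alpha = 0)

    for a in (0.1, 0.5, 1.0):
        print(f"a = {a:4.1f}   H^2 = {hubble_squared(a):.4f}")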

But given the modification a variable Planck scale would add to such a model, the brane
may be dominated by dust and by vacuum-pressure differences at the same time. With
α ≠ 0, the Weyl fluid is itself a variable, and the bulk space-time varies among the
Schwarzschild-(anti) de Sitter types. In this case the universe is no longer a homogeneous
and isotropic universe. It would in fact be a composite whose general global pattern tends
to fit the homogeneous and isotropic type, with Schwarzschild-anti de Sitter the global
norm for the bulk space-time. If one adds in the similar aspect from Loop Quantum
Gravity of no singularity point, and a recycle from this model, then some matter may be
present from other cycles, which over several cycle histories would tend to push the
cosmos toward a dust-dominated model that is either flat or collapsing due to a rise in
matter/energy density over that history. At that future point, given all the variables, no one
can predict with accuracy which it will end up in, even though I would tend to wager on
the latter.

Given all the above, I would place our cosmos somewhere in the early cycle-succession
stage, based upon aspects of observable cosmology at present.

REFERENCES:

1.) Stefan Kowalczyk, Quinten Krijger, Maarten Van Der Ment, Jorn Mossel,
Gerben Schoonveldt, Bart Verdoen, Constraints on Large Extra Dimensions
(p. 12, eq. 14)
2.) Fernando Loup, Paulo Alexandre Santos, and Dorabella Martins da Silva Santos,
Can Geodesics in Extra Dimensions Solve the Cosmic Light Speed Limit?,
General Relativity and Gravitation 35(10), pp. 1849-1855, October 2003

AUTHOR'S NOTES:
1.) The model I am using has properties similar to the one used in Doubly Special
Relativity (see: Jerzy Kowalski-Glikman, Sebastian Nowak, Noncommutative space-time
of Doubly Special Relativity theories, Int. J. Mod. Phys. D12 (2003) 299-316). But the
two-frame system here is different from the one employed there owing to the PV nature of
this model. The actual model basis employed and its implications can be found at:
Hyperspace a Vanishing act, http://doc.cern.ch//archive/electronic/other/ext/ext-2004-
109.pdf, Implication of the Dutch Equation Modified PV Model,
http://doc.cern.ch//archive/electronic/other/ext/ext-2004-115.pdf, and Why Quantum
Theory does not fit observational data,
http://doc.cern.ch//archive/electronic/other/ext/ext-2004-116.pdf

2.) The strongest basis for assuming that C stays constant in hyperspace comes from
observations of the CMB itself. However, it is possible that C may also vary in
hyperspace; the implications of that have not been worked out in this model to date. Also
to be noted, the k = 0 in the original paper assumes that value for hyperspace itself before
inflation took place. The best current fit for our space-time is that k would equal 1. The
usage of k = 0 was to simplify the modeling. In reality I suspect that k = 1 for both
space-time frames. At one time I had played with modeling k as a variable with extra
values to simulate a PV model, in order to account for the Pioneer Problem and for an
older PV-based problem in which C ought to increase, against the observational evidence
that it either stays constant or slows with time.

3.) Aside from the variable Planck scale idea one is left with these solutions:

1. The equation of state w may differ from -1 by an observable amount, or may change
rapidly with time.

2. The dark energy may be a scalar field, coupled to matter in such a way as to cause
time variations in fundamental constants, or to violate the gravitational equivalence
principle.

3. The cosmic acceleration may be caused by a deviation from general relativistic gravity
on cosmological scales, rather than by a dark energy.

None of the above alternatives seems viable at this time.

4.) DIFFERENT BRANE MODELS:

[Figure not reproduced.] The figure shows how different theories vary from each other as
far as classification goes. All modeling is based upon the Friedmann-Lemaître-Robertson-
Walker (FLRW) metric, later modified by Loop Quantum Gravity.

5.) Using Λ = (β/(β − 3)) 4πGρ from a non-viscous model (Arbab 2002). This form is
interesting since it relates the vacuum energy directly to the matter content of the
universe. Hence, any change in ρ will immediately imply a change in Λ. This would apply
both locally and globally, which is also consistent with the variable Planck scale model.

(The metric used here comes from Arbab I. Arbab, The Universe With Bulk Viscosity,
Chin. J. Astron. Astrophys. Vol. 3 (2003), No. 2, 113-118, http://www.chjaa.org or
http://chjaa.bao.ac.cn, with a different equation of state.)

In a flat Robertson-Walker metric

ds² = dt² − R²(t)(dr² + r²dθ² + r²sin²θ dφ²)

Einstein's field equations with a time-dependent G and Λ read (Weinberg 1971)

R_μν − (1/2)g_μν R = 8πG(t)T_μν + Λ(t)g_μν,

Now cosmologists believe that Λ is not identically zero, but very close to it. They relate
this constant to the vacuum energy that first inflated our universe, causing it to expand.
From the point of view of particle physics, a vacuum energy could correspond to a
quantum field that has been diluted to its present small value. However, other
cosmologists invoke a time variation of this constant in order to account for its present
smallness. The variation of this constant could resolve some of the standard-model
problems. Like G, the constant Λ is a gravity coupling, and both should therefore be
treated on an equal footing. A proper way in which G varies is incorporated in the
Brans-Dicke theory (Brans & Dicke 1961). In this theory G is related to a scalar field that
shares the long-range interaction with gravity.

Considering the imperfect-fluid energy-momentum tensor

T_μν = (ρ + p*) u_μ u_ν − p* g_μν,

this yields the two independent equations

3(R̈/R) = −4πG(3p* + ρ) + Λ,

and

3(Ṙ/R)² = 8πGρ + Λ,

Eliminating R̈ between the first equation and the time derivative of the second gives

3(ρ + p*)Ṙ = −((Ġ/G)ρ + ρ̇ + Λ̇/8πG) R,

where a dot denotes differentiation with respect to time t and p* = p − 3ηH, η being the
coefficient of bulk viscosity and H the Hubble parameter. The equation of state relates the
pressure p and the energy density ρ of the cosmic fluid:

p = (γ − 1)ρ,

where γ = constant. Vanishing of the covariant divergence of the Einstein tensor together
with the usual energy-momentum conservation relation T^μν_;ν = 0 leads to

8πĠρ + Λ̇ = 0

and

ρ̇ + 3(ρ + p*)H = 0.

One finds that the bulk viscosity appears as a source term in the energy conservation
equation.
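
That statement can be checked symbolically: substituting p* = p − 3ηH into the
conservation law above moves the viscosity piece to the right-hand side as a source term,
ρ̇ + 3(ρ + p)H = 9ηH². A short SymPy sketch of that check (my own, not from Arbab's
paper):

    import sympy as sp

    rho_dot, rho, p, eta, H = sp.symbols('rho_dot rho p eta H')

    p_star = p - 3*eta*H                                    # effective pressure
    conservation = rho_dot + 3*(rho + p_star)*H             # = 0 by the relation above
    source_form = rho_dot + 3*(rho + p)*H - 9*eta*H**2      # rewritten with a source term

    # The two expressions are identical, so the viscosity acts as an energy source term
    assert sp.simplify(conservation - source_form) == 0
    print("rho_dot + 3*(rho + p)*H = 9*eta*H**2 when p* = p - 3*eta*H")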

Now consider the very special forms (Arbab 1997)

Λ = 3βH², β = constant,

and

η = η₀ρⁿ, η₀ ≥ 0, n = constant.

The first is equivalent to writing Λ = (β/(β − 3)) 4πGρ for a non-viscous model (Arbab
2002). This form relates the vacuum energy directly to the matter content of the universe,
or of any local region. Hence, any change in ρ will immediately imply a change in Λ, i.e.,
if ρ varies with cosmic time then Λ also varies with cosmic time. If the local matter
content varies, so will the local vacuum energy density.
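
As a minimal numerical sketch of that dependence (illustrative values only; β and the
densities below are arbitrary choices of mine, not fits):

    import math

    G = 6.67e-11                # N m^2 / kg^2
    beta = 4.0                  # arbitrary constant in the ansatz Lambda = 3*beta*H^2

    def lambda_from_rho(rho):
        """Lambda = (beta/(beta - 3)) * 4*pi*G*rho  (units of s^-2)."""
        return (beta / (beta - 3.0)) * 4.0 * math.pi * G * rho

    rho_cosmic = 9.9e-27        # roughly the present critical density, kg/m^3
    rho_local = 1.0e-21         # a denser local region (purely illustrative)

    print(f"Lambda from the cosmic mean density: {lambda_from_rho(rho_cosmic):.3e} s^-2")
    print(f"Lambda from the denser local region: {lambda_from_rho(rho_local):.3e} s^-2")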

Now what makes the difference is whether we have a decaying mode of the energy
density or an increasing mode of the energy density, globally and locally. From the
perspective of the expansion of the cosmos the mode is decaying, while locally the mode
will vary. This yields a stress-energy tensor that also varies locally, which leads to C being
a variable itself, which in turn implies that certain values in the Dutch Equation are
variables. That translates into the Planck scale being a variable.

While the above equations are just one way we could express the general idea from a
cosmological perspective, the example does point out that the general idea of expanding a
bubble of hyperspace around a craft is possible. I would also suggest that this same
modeling offers possibilities for some of our own sub-light methods of propulsion.

One other implication is that the system always has to have some local energy density
present, even at the start, or one is left with a runaway inflation of hyperspace. This issue
is solved by the fact that at large expansion R the Planck scale has a near-infinite
probability of having virtual particles borrow enough energy to become real particles,
which starts the collapse of the Planck scale toward a second rebound point guaranteed by
aspects of Loop Quantum Gravity. This implies not only that some matter may transfer
over from other cycles, accounting for higher abundances of certain elements than the
Standard Model can account for; it also implies that some elementary particles have
always been present, even if the rebound point during collapse tends toward a high
enough temperature to wipe much of the entropy history clean.

The actual Dutch Equation is:

R = 4π² ε₀ G ħ² μ₀ / e₀

with G = 6.67 × 10⁻¹¹ N m²/kg², ħ = (6.626/2π) × 10⁻³⁴ J s, e = 1.6 × 10⁻¹⁹ C,
μ₀ = 4π × 10⁻⁷ H/m, and ε₀ = 8.854187817 × 10⁻¹² F/m yielding the known present
vacuum state.
DISCUSSION AND CONCLUSIONS
The existence of horizons of knowledge in cosmology indicates that, as a horizon is
approached, ambiguity as to a unique view of the universe sets in. It was precisely these
circumstances that applied at the quantum level, requiring that complementary constructs
be employed (Bohr 1961). Today we stand on another horizon that seems to be displaying
ambiguity. This is forcing us as scientists to rethink established dogma and cosmological
views. It has been my attempt here to offer just such a rework of an older model into
something that fits the observational evidence a bit more closely. Further testing of this
idea could come via study of signals from more than one probe outside our solar system.
It could come from experimenting with some of Fernando Loup's ideas. It could come in
ways I have not even thought of at present.

At present the Universe is considered a general relativistic Friedmann space-time with
flat spatial sections, containing more than 70% dark energy and about 25% dark matter.
Dark energy could be simply a cosmological constant Λ, or quintessence, or something
entirely different. There is no widely accepted explanation for the nature of either dark
matter or dark energy (even the existence of the cosmological constant remains
unexplained). This has been my attempt to come up with a solution that fits the
observational mixed signals at present.
