Recent Innovations about the Concept of "Universe": Dark Energy and an Accelerating Universe? Part 1

A major clue to the open vs. closed vs. flat Universe question seems to have been found in recent results from HST and ground-based "super-telescope" observations. Indeed, some cosmologists consider this assemblage of new facts to be, taken together, the most important and enlightening set of discoveries about the Universe since Hubble's observations more than 80 years ago. Some of what is presented in the next group of paragraphs has been extracted from an excellent review in the NOVA series on PBS, a November 2000 program entitled "The Runaway Universe"; from the references cited below; and from a recent book, The Accelerating Universe: Infinite Expansion, the Cosmological Constant, and the Beauty of the Cosmos, by Mario Livio, 2000, J. Wiley and Sons. The latest volume on this subject is The Extravagant Universe, by Robert Kirshner, Princeton University Press. A quick overview of the accelerating Universe idea is given at this Web site, which deals with one of the currently competing concepts, Quintessence, a special form of the Dark Energy that seems to be powering this acceleration.

The basic premise behind this so-called runaway Universe is summarized in this diagram prepared by the Space Telescope Institute in Baltimore, MD:

Diagram showing the history of relative expansion during the Universe's lifetime.

Following the initial inflation during the earliest moments of the Big Bang, gravity dominated: during the long interval when the Universe was smaller and its matter density higher, the mutual attraction of matter caused a net deceleration in expansion (said another way, the expansion rate should slow with time). But a momentous discovery in the late 1990s showed that, as the Universe continued to enlarge and its falling density weakened the gravitational effect, at some stage in cosmic time the expansion began to re-accelerate, so that the galaxies started to move apart faster. Current speculation points to a repulsive force, probably in the form of "dark energy" analogous to that underlying Einstein's Cosmological Constant, that is gradually dominating the driving forces determining relative expansion rates. Succinctly put, Dark Energy is a form of vacuum energy with three defining properties: 1) it appears to be uniformly distributed (smooth) throughout the Universe, has remained steady over cosmic time, and has a very low density (estimated at 10^-26 kilograms per cubic meter in intergalactic space); 2) in contrast to the attractive force of gravity, it acts as though it impresses a negative pressure (producing the aforementioned repulsive force) on all matter in the Universe - it has been called "repulsive gravity" since it acts to counter the effect of conventional gravity; and 3) despite its relative weakness, its influence increases in proportion to the increasing size of the Universe. Its net effect has been to cause the rate of expansion to accelerate during the second half of the Universe's history. In the next few paragraphs, these ideas are expounded at some length.

Dark Energy has so far proved difficult to detect directly and thus to pin down as to its nature. (It is sometimes referred to as "Phantom Energy".) The prime supporting evidence for this renewed acceleration of expansion comes from determinations of the redshifts of certain Supernovae. Results from the analysis of more than 60 Type Ia Supernova events lying between 4 and 9 billion light years from Earth (time-distance intervals in which the Supernovae included in the study are still close enough to be easily observed) imply faster rates of expansion at some stage in cosmic time than heretofore calculated from deep quasar studies. The Supernovae appear fainter than theory predicts for their distance from us, implying that they have moved farther than expected in the time involved - which would be explainable by an increase in expansion speeds at some stage. (As we shall see, this could have occurred well after expansion started, or perhaps it is a "hangover" from Inflation.)

In fact, such rates indicate the Universe appears to be expanding too fast for it ever to slow enough to finally contract. This speaks of an open Universe that will expand forever, and leads to the corollary that there must be insufficient total mass to ever close it - perhaps only ~30% of the amount needed, most of it being Dark Matter (the visible, radiating components of the Universe, such as galaxies, comprise only 4%). In order to account for this apparent acceleration, a form of energy that induces repulsion is assumed to exist.

Several ideas in the preceding paragraph are among the hottest topics of discussion in the astronomy and cosmology communities. In 1998, a conference on "Missing Energy in the Universe" focused on many relevant aspects. The announcement of evidence for an accelerating Universe was judged the top scientific story of that year. Three review papers in the January 1999 issue of Scientific American present details relevant to this growing concept of an ever-expanding and accelerating Universe. The interested reader should consult these papers (either in the library or at the three Internet sites cited as links below), but a brief synopsis of each is given in the next sequence of paragraphs.

The first article, Surveying Space-Time with Supernovae (Hogan, Kirshner and Suntzeff), enlarges on the nature of Supernovae. Supernovae are detected routinely by taking telescope photos of small segments of the sky at different times, superimposing them as computer images, compensating for differences in observation conditions, subtracting later from earlier images (nulling out the features that persist in both), and identifying residuals that have the appearance of Supernovae. The astronomers' goal is to find a Supernova that is still in its early (front-end) stage of explosion and then follow its period of high luminosity over the next several months. These images show a "before" and "after" sequence in which a Supernova appears in the M88 galaxy in 1998 but has diminished a year later:

Supernova in Galaxy M88.

Here is another example of a more distant galaxy (bright center) with a second bright area off center that is the Supernova.

A Supernova within a galaxy.

The main thrust of the article is a review of a particular type of Supernova. These, called Type Ia, begin as ordinary stars that, after reaching the White Dwarf stage, continue to receive matter drawn from a companion star. This influx of new material onto the Dwarf increases its total mass until its temperature rises to a level that forces a sudden thermonuclear explosion, producing a form of Supernova whose short-lived brightness depends on the mass involved. Generally, the amount of mass involved in a Type Ia Supernova is very nearly the same in all such events, namely about 1.4 solar masses (the Chandrasekhar limit). When the Dwarf's temperature reaches roughly 10 million K and above, its mass undergoes runaway nuclear fusion, causing it to explode rapidly. Thus the energy released, in the form of detectable luminosity, is almost the same for every Type Ia event - meaning that these serve as nearly constant "standard candles". The luminosity then decreases at a steady pace, in stages, so that the decline is systematic. A Type Ia Supernova therefore is an excellent means of judging the distance between it and Earth observers. If such a Supernova is detected almost at its outset (or its stage is known), the difference between the expected luminosity output and the observed output is a function of its distance from Earth. The decrease in apparent output follows the inverse-square law that governs light sources: a lamp of known brightness observed at position 1 becomes 1/4th as bright when moved to position 2, twice as far away.
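The inverse-square relation in the lamp example can be sketched in a few lines of Python (the 100 W lamp and the distances are illustrative values, not from the article):

```python
import math

def apparent_brightness(luminosity, distance):
    """Flux (W/m^2) received from an isotropic source of the given
    luminosity (W) at the given distance (m): L / (4*pi*d^2)."""
    return luminosity / (4.0 * math.pi * distance ** 2)

L = 100.0                            # an illustrative lamp, in watts
b1 = apparent_brightness(L, 10.0)    # observed at position 1
b2 = apparent_brightness(L, 20.0)    # position 2, twice as far away
print(b2 / b1)                       # -> one-quarter as bright (0.25)
```

Doubling the distance quarters the received flux, which is exactly why a standard candle's apparent faintness translates directly into distance.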

A Type Ia Supernova occurs, on average, about once every 300 years in a given galaxy, but because there are many galaxies in which individual exploding stars can be resolved, a new one is found approximately once a month. Using this approach, astronomers have found a significant number of Type Ia Supernovae lying at distances of 4 to 7 billion light years from Earth (as determined by redshifts), i.e., events that actually occurred at those times in the past, when the Universe was 1/2 to 2/3 its present size. Here is a Type Ia Supernova:
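The quoted discovery rate is simple arithmetic: at roughly one Type Ia per galaxy per 300 years, a survey monitoring a few thousand resolvable galaxies nets about one event per month (the 3,600-galaxy figure below is a hypothetical round number, not the search teams' actual tally):

```python
rate_per_galaxy = 1.0 / 300.0        # Type Ia events per galaxy per year
galaxies_monitored = 3600            # hypothetical survey size
events_per_year = rate_per_galaxy * galaxies_monitored
print(events_per_year)               # -> about 12 per year, one per month
```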

The remnant of Tycho's Supernova, a Type Ia event.

The typical visual absolute magnitude of Type Ia supernovae is Mv = -19.3 (about 5 billion times brighter than the Sun), with little variation. Type Ia supernovae have a characteristic light curve, their graph of luminosity as a function of time after the explosion.
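Because Mv is nearly the same for every event, a measured apparent magnitude m yields distance through the standard distance-modulus relation m - M = 5 log10(d) - 5, with d in parsecs. A minimal sketch (the faint apparent magnitude used below is illustrative):

```python
def distance_parsecs(m_apparent, M_absolute=-19.3):
    """Distance (parsecs) to a standard candle from the distance
    modulus: m - M = 5*log10(d) - 5."""
    return 10.0 ** ((m_apparent - M_absolute + 5.0) / 5.0)

# Sanity check: a source at 10 pc shows m equal to its absolute magnitude.
print(distance_parsecs(-19.3))        # -> 10.0 parsecs

# An illustrative faint Type Ia observed at apparent magnitude 23:
print(distance_parsecs(23.0) / 1e9)   # distance in gigaparsecs (~2.9)
```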

The curve shows a rapid buildup of luminosity during which the explosion occurs, then further buildup to a peak, followed by gradual diminution of luminosity. Near the time of maximum luminosity, the spectrum contains lines of intermediate-mass elements from oxygen to calcium; these are the main constituents of the outer layers of the star. Months after the explosion, when the outer layers have expanded to the point of transparency, the spectrum is dominated by light emitted by material near the core of the star, heavy elements synthesized during the explosion; most prominently isotopes close to the mass of iron (or iron peak elements).

Using a Type Ia Supernova as a "standard candle" (defined as a stellar light source whose luminosity, or intrinsic brightness, can be known with a high degree of accuracy and hence can serve as a means of estimating distance from Earth) has been a challenge, but work by several groups (the Perlmutter team; the Kirshner/Schmidt team; and others) has over the last decade achieved the ability to estimate intrinsic brightness to within about 10% of its actual value. These groups found that Type Ia's brighten and then dim at somewhat variable rates. By analyzing brightness curves and applying theoretical corrections for the decline rate, they have come up with a model that yields a value for the brightness accurate enough for it to act as a known absolute-magnitude event, from which distance (from Earth) can then be calculated with high accuracy. This distance should, at face value, indicate how far the star's radiation has traveled.

The surprise in these observations is that the Type Ia explosions were up to 25% less bright than they should be at the distances first postulated; i.e., they are fainter than expected under the assumption of a simple slowing down of expansion owing to gravitational attraction. That assumption holds that the Universe has been continually decelerating. Using redshift values for the Supernova's galaxy, the expected distance of the galaxy relative to Earth can be calculated for the continuous-deceleration case. But the shortfall from the expected luminosities implies that the Type Ia star and its galaxy are really farther away (and hence appear to us less luminous than predicted from a steadily decelerating Universe), and must have reached their more extended positions because of an increase in the expansion rate at some time during the Universe's enlargement. Let's repeat for clarity: the observations are tantamount to this possibility - the Universe has not decelerated uniformly; when it was smaller, gravity slowed it more, but some accelerating process has since offset the deceleration, so that the separation between galaxies is increasing faster than the outward "coasting" of space expansion alone would predict - something is pushing them apart more rapidly. To explain this, if subsequent studies continue to support the conclusion, requires either the presence of some type of energy that counteracts gravity or a different Inflation model.

This familiar analogy should help you visualize the variable expansion: You are sitting in a stationary race car with the motor running. With the brake released, you hit the gas pedal hard, and the car accelerates rapidly to full speed. You then ease off and, without braking, let the car gradually slow down. After coasting some distance, you touch the pedal again, softly this time, and start to counter the deceleration by renewing the acceleration. This pretty much describes the Universe's situation as of now.

Let's examine this idea of renewed acceleration in terms of redshifts. The expansion of space causes light waves to become stretched, and thus longer - red shifted. The more time light has been traveling, the more time there has been for space to expand, and the greater the red shift; thus more distant objects have larger red shifts. The accelerating expansion of the Universe was inferred because distant Supernovae are fainter than expected based on their red shifts: they are farther away than one would expect from a simple linear increase of redshift with distance. This is interpreted as evidence that the Universe was expanding more slowly in the past, so that the redshift of distant objects is less than one would otherwise expect. When the light from those distant objects began its journey, the Universe, and therefore space, was expanding more slowly, so expansion contributed less to their redshift than it does for light that has traveled more recently. Nearer objects are less subject to this effect, because the Universe was expanding faster when their light left them; thus their red shift is comparatively larger in proportion to their distance from us.
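The wavelength-stretching just described is usually written 1 + z = a_now / a_then, the factor by which the Universe's scale has grown since the light was emitted. A short sketch:

```python
def redshift_from_scale(a_emit, a_now=1.0):
    """Cosmological redshift z: light emitted when the scale factor
    was a_emit has been stretched by a_now / a_emit by the time we see it."""
    return a_now / a_emit - 1.0

# Light emitted when the Universe was half its present size:
print(redshift_from_scale(0.5))    # -> z = 1.0
```

This is why the z ~ 1 Supernovae discussed below probe the era when the Universe was roughly half its present size.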

The diagram below shows the relation between the shift in star luminosity and redshift. What it shows is that Type Ia Supernova luminosities (m - M) measured within galaxies whose redshifts were near 1.0 (about 5 b.l.y. away), plotted in red, were less than expected from extrapolation of the line passing through other galaxies at very low z values (hence nearby, and seen as they were recently). Although other explanations for the departure are possible, renewed acceleration is held to be the most likely.

Luminosity versus redshift z for a range of Supernovae.

Astronomers would like to confirm that the pattern of expansion rates pictured in the top diagram on this page (slowing, then accelerating) extends even further back in time, beyond the 5-7 billion l.y. observations just described. In 1997, the oldest Type Ia Supernova then known, lying near an elliptical galaxy at a distance of ~11.5 billion l.y., had its redshift measured, with results consistent with the acceleration model. It is shown as a bright incandescent cloudlike object in the left image below; the right image is a time-difference overlay of the scene over a two year period, in which most objects cancel out, leaving a bright spot - the Supernova remnant, centered at the arrow point within the envelope of ejected materials. Its intrinsic brightness is greater than expected for its age and distance, consistent with the idea that expansion was slowing down during that time interval rather than speeding up as indicated by the expansion rates from the younger Type Ia Supernovae; this supports the argument that during the first half of the Universe's history the expansion of space was progressively decelerating but has since been accelerating.

Supernova 1997, occurring in deep space (10+ billion l.y.).

A group of astronomers led by Steve Allen of the Institute of Astronomy at Cambridge University has amplified our knowledge of the acceleration by analyzing data for 26 galaxy clusters at varying distances, from about 1 billion to 8 billion l.y. away. These clusters are each surrounded by huge balls of hot gas. Three of the 26 are shown here, in images made by combining Hubble and Chandra data:

Gas envelopes around three named galaxy clusters at different distances from Earth.

From theoretical considerations, they deduced that these clusters should have fallen apart and dispersed by now, yet they are intact. They ascribe the surviving coherence of the clusters to the influence of Dark Matter, and have calculated ratios of the mass of the hot gas (which comprises about 85% of the luminous mass in a cluster) to the inferred mass of Dark Matter, using Chandra X-ray data. By assuming that the relative amount of gas is roughly the same proportion in every cluster, these gas balls serve as another indicator of a luminous source of specifiable output, analogous to the Type Ia Supernova "standard candle". They calculated the distance to each using its redshift z value. Their findings: the oldest clusters show deceleration, but about 6 billion years ago this reversed, and the younger clusters are now accelerating (they lie too far out; their present distances are greater than expected for a non-accelerating Universe). A constant dark energy (behaving very much like Einstein's Cosmological Constant) is the best fit to this apparent renewal of acceleration. Their take on the possible models for the Universe's future is summarized in this diagram:

Possible fates of the Universe.

They conclude that neither the "Big Rip" (runaway acceleration that would eventually tear apart galaxies, stars, and even atoms) nor the "Big Crunch" (a return of deceleration and collapse of the Universe) is likely; instead, the Universe will continue to expand coherently into the infinite future, but observers on Earth (if we survive well into the future) will lose "sight" of the farthest objects near the cosmic horizon.

So, what does this accelerating Universe hypothesis tell us about the type of Universe that this one appears to be? Look at this diagram, in which the four curves are further identified in terms of Omega (Ω) values (see below for definitions):

Possible changes in Universe size throughout cosmic time.

You saw a similar plot (credited to J. Silk) on the preceding page. The one labeled "Accelerating" most closely resembles the Lemaitre Universe in the plot on page 20-9. For the time being, then, the evidence points to this type of spherical, open Universe as representing the one we've got. Comment: Was the Abbe Lemaitre (previous page) guided by Divine Providence, by scientific prescience, or was he just plain lucky?

In the second paper, Cosmological Antigravity (L. Krauss), the presence of a repulsive force (actually, some type of still-mysterious energy) pervading all of space - including the so-called empty part that comprises the dominant portion of the Universe's volume - and offsetting gravitational attraction can account for the observations and many of their ramifications. This energy is similar in important respects to the Cosmological Constant proposed by Albert Einstein more than 80 years ago to explain how a static Universe (the prevailing model at that time) could retain a constant size when gravity acts to pull mutually on all matter, including the galaxies, such that matter subjected only to attractive forces would ultimately collapse inward. In his view, a repulsive gravitational force was needed to stabilize this static Universe. Einstein later abandoned the idea (calling it his "biggest blunder") when evidence for expansion and the Big Bang became almost completely accepted. In retrospect his notion of a Cosmological Constant (which we will refer to by the Latin letter L; in equations containing it, the Greek letter Lambda, Λ, is used) once more has merit, though at the time it was first conceived it did not satisfactorily explain a Universe whose expansion characteristics were still poorly known. Like Einstein's Cosmological Constant, this arcane repulsive energy is postulated to be real largely on the basis of necessity - the need to account for the acceleration if the above Supernovae observations are sustained. The time at which this acceleration began to act is currently known only imprecisely; it may be a vestige of the original Inflation or may be a later (second) inflation whose cause of inception is still a mystery.

So, it now appears that Einstein's Cosmological Constant has been rehabilitated, but in a different role. As Einstein conceived of its use (in his static Universe), it was a term added to his field equation relating the geometry of spacetime to the distribution of energy and matter:

G_μν = 8πG T_μν

where G is Newton's Gravitational Constant, T_μν is the Stress-Energy tensor, and the indices μ and ν run over the four spacetime coordinates. When his Cosmological Constant Λ, multiplied by g_μν (the spacetime metric tensor), is added to this fundamental equation, the equation incorporates the repulsive force which he decided was needed to counter the collapsing effect of gravity:

G_μν + Λ g_μν = 8πG T_μν

The Einstein Cosmological Constant has since been recast as a new term, ρ_vac g_μν - the vacuum energy density of the Universe (the vacuum energy described on this page) - that enters the equation on the right side:

G_μν = 8πG T_μν + ρ_vac g_μν

The reason for placing the modified ρ term on the right is that it is now tied to the energy side of the equation rather than to the spacetime geometry on the left. So, Einstein's Cosmological Constant has changed in makeup and placement but still represents a force working to overcome gravitational effects.

A positive Λ (L) value generates long-term repulsive forces that act as though they produce a negative pressure (-P); another way to look at this is to call it "antigravity". This repulsion means that the (Open) Universe will continue to expand forever without ultimately slowing to a halt. L is associated with another parameter, Ω (Greek letter Omega). Omega is the ratio of the density of matter/energy in the Universe to the amount needed to produce a Flat Universe (Ω = 1); it is also the ratio of gravitational energy to the kinetic energy of all matter/energy extant. Although a Flat Universe seemingly fits many observations (but may be illusory, in that we may be observing only a small part of an infinite Universe that merely appears flat - like a tiny area on a large sphere or on a hyperbolic saddle), when an inventory of all matter and conventional energy throughout the Universe is made, the amounts fall far short of that needed to achieve the true flat state. In fact, the value of Omega for matter, as suggested by the Supernova observations, seems to be less than 1, which by itself would correspond to expansion following hyperbolic geometry. (L [Λ] itself is finite in this model, in contradistinction to the value in the Table on page 20-9.) If this holds up, it becomes necessary to account for this lower Omega, since the matter/energy now identified falls well short of the required amount. Thus enters the present-day equivalent of Einstein's L: some form of energy whose characteristics are only crudely known and whose existence is yet to be proved.
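Omega's denominator, the critical density, is ρ_crit = 3H²/(8πG). A quick numerical sketch, assuming a round Hubble constant of 70 km/s/Mpc (an illustrative value, not one endorsed by the article):

```python
import math

G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.086e22                   # meters per megaparsec
H0 = 70.0e3 / Mpc                # assumed Hubble constant, in s^-1

rho_crit = 3.0 * H0 ** 2 / (8.0 * math.pi * G)
print(rho_crit)                  # ~9e-27 kg/m^3

# Omega for an illustrative measured matter density of 3e-27 kg/m^3:
print(3.0e-27 / rho_crit)        # ~0.33, well short of the flat value of 1
```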

This is the energy of empty space (the so-called vacuum) - a seeming oxymoron, in that space is then not really empty if it contains energy; see this footnote (*) for comments on the types of vacuums and voids referred to in Cosmology. The type believed to account for all possibilities is the "false vacuum", which is characterized by some form(s) of energy. Quantum theory holds that zero energy is not possible in any part of space, even after electromagnetic, thermal, nuclear, and other forms of energy are completely removed. The energy content is distributed at a constant vacuum density (new energy just balancing increased spatial volume), and this energy acts to offset gravity. Quantum theory suggests several of its characteristics, which are the subject of the research field of quantum cosmology and its subset, cosmogony (the study of cosmic origins and history). This virtual vacuum-fluctuation energy is a (maybe "the") form of Dark Energy, i.e., it neither emits nor absorbs light - meaning it gives off no radiation that can be detected. The energy is inert in that it does not react per se to change matter, even though it influences matter by causing repulsion. Vacuum energy (also called Zero-Point Energy) is brought into play by exceedingly brief quantum fluctuations that create virtual particles and corresponding anti-particles; these develop momentarily - and seemingly spontaneously - in the apparent vacuum of space and then mutually annihilate, releasing energy in a form that drives the acceleration through a repulsive force. This phenomenon, in which particles seem to be created from "nothing" (the Aristotelian concept of 'potency' - that something has the potential of coming into existence - is a good analog drawn from metaphysical thinking), can be conceptually explained from principles of quantum physics.

The basis for the conclusion that the vacuum of the Universe is not truly empty is the Heisenberg Uncertainty Principle. Consider any finite portion of this vacuum; it can be totally devoid of any form of matter. But it contains at every instant the above Vacuum Energy, in what theory says is a very large quantity (the actual amount is presently unknown). The Uncertainty Principle requires that the probability is not zero that from this energy a particle-antiparticle pair can emerge at any instant, lasting only momentarily before annihilation. Thus this vacuum unit contains virtual energy that has a finite potential to transform into virtual particles. But only rarely does a particle or antiparticle survive. It is still speculative, but gaining favor, that the end product of such virtual fluctuations is the form of energy now included in the concept of Dark Energy (it may comprise the entire amount of D.E., or other forms may also be present). Based on its influence on matter, particularly in the galaxies, best estimates of the density of Dark Energy in the vacuum are around 10^-26 kilograms per cubic meter - equivalent to the mass of only about half a dozen Hydrogen atoms.
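The density quoted above can be checked directly against the mass of a single Hydrogen atom:

```python
rho_dark = 1.0e-26     # kg/m^3, the Dark Energy density estimate above
m_hydrogen = 1.67e-27  # kg, approximate mass of one Hydrogen atom

print(rho_dark / m_hydrogen)   # -> ~6 atoms' worth of mass per cubic meter
```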

That virtual particles exist has been verified by the Casimir effect. Consider this experimental setup:

Left: Apparatus used in producing the Casimir effect; Right: diagram depicting the 'quashing' of virtual particle wavelengths.

A pair of metal Casimir plates is placed in close proximity, the entire apparatus enclosed in a near-perfect vacuum. Virtual particles are assumed to pop in and out of existence throughout the space. They have characteristic wavelengths. Those modes whose wavelengths exceed the plate separation cannot exist in the gap between the plates, while the particles outside are not so restricted; the excess outside therefore presses on the outer faces of the plates, producing a weak but measurable pressure. The resulting imbalance of forces pushes the plates together. Since no other source of this attractive force can be identified, the conclusion is a verification that virtual particles were present and therefore exist.
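For ideal, perfectly conducting parallel plates the Casimir attraction has a closed form, P = π²ħc / (240 d⁴); the sketch below evaluates its magnitude at a typical laboratory separation:

```python
import math

hbar = 1.055e-34    # reduced Planck constant, J*s
c = 2.998e8         # speed of light, m/s

def casimir_pressure(d):
    """Attractive pressure (Pa) between ideal parallel plates a
    distance d (m) apart: pi^2 * hbar * c / (240 * d^4)."""
    return math.pi ** 2 * hbar * c / (240.0 * d ** 4)

print(casimir_pressure(1.0e-6))   # ~1.3e-3 Pa at a 1 micrometer gap
```

The steep d⁻⁴ dependence is why the effect is measurable only at micron and sub-micron separations.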

As was stated on page 20-1, these virtual particles (quantum fluctuations) are good candidates for the condition that led to the Big Bang. In this concept, a virtual particle of near-infinite density became unstable and "exploded" into the Big Bang event. A thought experiment leads to this notion of two alternatives: 1) there was a pre-Big Bang "void" (see page 20-1 for definition) that contained countless numbers of virtual particles, many being stable indefinitely, some undergoing changes that did not involve Big Bangs, and one (or more, the Multiverse hypothesis, covered later in this Section) that underwent a Big Bang that produced the Universe humans inhabit, or 2) there was a truly empty void (no space or time characteristics) in which a unique virtual particle, the singularity, was "created" and then "detonated" into the present Universe. That Universe contains the countless numbers of virtual particles that come and go without any being involved in another Big Bang (at least so far). Quantum physics allows for the existence of virtual particles in either of these two cases, but does not clearly explain how and why they came to be. (Even metaphysics fails to provide an adequate explanation.)

This much is clear: near-empty space is continually invaded by myriads of these individual (virtual) particles that have only fleeting existence, but the process continues constantly as long as space exists and releases vast quantities of the above-mentioned, as yet undetermined, form of energy that powers the acceleration. The value of L that influences this process may be constant (or could vary - as yet undetermined). In the early days of the Universe, the matter/energy density was very high but has continued to diminish with expansion. A few billion years ago, its value dropped below the (constant?) energy density associated with L, so that L's repulsive force is now taking command and causing expansion to speed up - the effect implied by the fainter Supernovae observations.


The following four paragraphs, attributed to D.A. Stenger (1996), have been taken from the Internet and added here as supplementary information:

In General Relativity, spacetime can be empty of matter or radiation and still contain energy stored in its curvature. Uncaused, random quantum fluctuations in a flat, empty, featureless spacetime can produce local regions with positive or negative curvature. This is called the "spacetime foam", and the regions are called "bubbles of false vacuum" (these describe the infinitesimal volume of space at the outset of a Big Bang). Wherever the curvature is positive, a bubble of false vacuum will, according to Einstein's equations, exponentially inflate. In 10^-42 seconds the bubble will expand to the size of a proton, and the energy within will be sufficient to produce all the mass of the universe.

The bubbles start out with no matter, radiation, or force fields and maximum entropy. They contain energy in their curvature, and so are a "false vacuum." As they expand, the energy within increases exponentially. This does not violate energy conservation since the false vacuum has a negative pressure so the expanding bubble does work on itself.

The forces and particles that appear are more-or-less random, governed only by symmetry principles (like the conservation principles of energy and momentum) that are also not the product of design but exactly what one has in the absence of design.

As the bubble universe expands, a kind of friction occurs in which energy is converted into particles. The temperature then drops and a series of spontaneous symmetry-breaking processes occurs, as in a magnet cooled below the Curie point, and an essentially random structure of particles and forces appears. Inflation stops and we move into the more familiar Big Bang.


This concept - that the density of Ordinary Matter/Dark Matter ever decreases with time (the result of enlarging volume) while the repulsive energy (that part of Dark Energy equivalent to the Cosmological Constant, or Quintessence [see below], or some similar form yet to be defined and quantified) grows in relative importance - is presented in this simple generalized plot. (Note: it is based on a Universe age of around 12 billion years - too young by about 1.7 b.y.) The important feature is the two curves crossing sometime in the past, with the Cosmological Constant energy now present in greater amount than the matter density.

Plot of densities of matter and the Cosmological Constant as a function of time.
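The crossing point of the two curves follows from the scaling laws: matter density falls as the inverse cube of the scale factor a while a Cosmological-Constant energy density stays fixed, so the curves meet where a³ = ΩM/ΩΛ. A sketch using assumed round present-day values (ΩM ≈ 0.3, ΩΛ ≈ 0.7):

```python
# Present-day density fractions (assumed round values):
omega_m, omega_lambda = 0.3, 0.7

# Matter density ~ a^-3; Lambda density constant. Densities were equal when:
a_eq = (omega_m / omega_lambda) ** (1.0 / 3.0)   # scale factor (a = 1 now)
z_eq = 1.0 / a_eq - 1.0                          # corresponding redshift

print(round(a_eq, 2), round(z_eq, 2))   # ~0.75 and ~0.33
```

Under these assumptions the crossover happened when the Universe was about three-quarters of its present linear scale - a few billion years ago, consistent with the plot.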

Verifying the existence of Dark Energy in space itself, and determining its nature (properties) and origin, is now near the top of the list of priorities for further observation and theoretical explanation by cosmologists and astronomers. Dark Energy may be the most fundamental and extensive of all physical components of the Universe. As shown on page 20-9, it may in reality be the very ingredient that makes up the estimated 73% of the total mass/energy of the Universe. As has been shown, it determines the expansion rate of the Universe and thus its ultimate fate. Despite this importance, its detection has so far proved elusive, but future experiments offer promise for its belated "verification", followed by plausible interpretation(s) of its significance.

We can restate a few logical inferences about Dark Energy that may eventually be substantiated, modified, or discarded. First, Dark Energy is presumably the most fundamental physical "thing" in our Universe, and perhaps in the "void" that exists beyond this Universe's ever-expanding outer boundary. Second, following the Einstein equivalency equation E = mc², Dark Energy can itself be converted into Dark Matter under the proper circumstances, and this may be reversible. Third, Ordinary Matter bears a still-unclarified relation to Dark Matter/Energy, but after the Big Bang, Baryons and other forms of Ordinary Matter came into existence coincidentally with the Dark Matter/Energy that pervades the Universe. Fourth, as covered in various paragraphs on this page, the nature of Dark Energy is still undecided; it may be best connected with ideas of vacuum energy, with Inflation, with Quintessence, with negative pressure, or with a combination of these or something yet to be conceived. Fifth, Dark Energy seems to have played a key role in forming galaxies and affecting their evolution and dispersal.

The key point made in this paper - and the heart of the new ideas emanating from the evidence for an accelerating Universe - is that there seems to be some preferred value for the Ωs of Dark Energy (Λ) and the other forms of matter and energy. As discussed earlier and above, there are various estimates of these Ω values. Here is a plot of ΩΛ versus ΩM:

Models of types of Universes based on Omega values for Dark Energy vs all other forms of matter and energy
ΩΛ is the ratio of the energy density contained in the Cosmological Constant to the critical energy density of the Universe. ΩM is the ratio of the energy density contained in matter (Baryonic and Dark) to the critical energy density of the Universe. (The critical energy density is that which would yield a spatially flat Universe.)

The plot above shows boundary conditions for Open and Closed Universes, as well as for Accelerating and Decelerating Universes. In the white field are contours that represent the confidence limits for the statistical locations of the range of Omega values that can be related to the Supernova 1a measurements that fall within the plot. In the plot below, the data collected by the High-z Supernova Search Team have been used to define a field of most likely Omega combinations. The maximum falls close to ΩΛ = 0.7 and ΩM = 0.3. This is in agreement with the current estimates (as percentages) for Dark Energy and all other forms of energy/matter.

Similar to the above graph, but simplified to show the region where Supernova 1a redshift data indicate the most likely ΩΛ vs ΩM values.

The presumption in this ΩΛ vs ΩM plot is that ΩΛ + ΩM = 1, which is the case for a Flat Universe, the model now most in favor.
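The Omega definitions and the flatness condition can be made concrete with a short calculation. The sketch below is not from the original text; H0 = 70 km/s/Mpc is an assumed round value used only for illustration. It computes the critical density ρc = 3H0²/(8πG), against which both Omegas are ratioed, and checks the flat-Universe sum:

```python
import math

# Illustrative sketch (assumed H0 = 70 km/s/Mpc): compute the critical
# density and check the flat-Universe condition Omega_Lambda + Omega_M = 1
# using the 0.7 / 0.3 values quoted in the text.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.086e22  # one megaparsec in meters

def critical_density(H0_km_s_per_mpc):
    """Critical density rho_c = 3 H0^2 / (8 pi G), returned in kg/m^3."""
    H0 = H0_km_s_per_mpc * 1000.0 / MPC_IN_M  # convert to s^-1
    return 3.0 * H0 ** 2 / (8.0 * math.pi * G)

rho_c = critical_density(70.0)
omega_lambda, omega_m = 0.7, 0.3  # values quoted in the text

print(f"critical density ~ {rho_c:.2e} kg/m^3")  # ~9.2e-27 kg/m^3
print("flat Universe?", abs(omega_lambda + omega_m - 1.0) < 1e-9)
```

Note that 0.7 of this critical density is a few times 10⁻²⁷ kg/m³, consistent in order of magnitude with the very low Dark Energy density mentioned earlier.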

This second paper included an elegant version of the graphs shown above - the Krauss diagram. To be readable, it must be viewed at a large size, which will slow your download. So, we have placed it as an option to select by clicking on page 20-10a. Read the comments on that page, which also apply to the above graphs. The notion of a Cyclic Universe is also expounded on that page.

Having become familiar with these new models for an accelerating Universe, it is now propitious to show the following plot.

Redshift versus cosmic distance plots using Supernova 1a data.

This figure contains data obtained by the High-z Supernova Search Team and the Supernova Cosmology Project. The upper plot shows Supernova redshift values versus distance in Megaparsecs to the SN 1a events monitored as described above. Four different model curves, based on differing values of ΩΛ and ΩM, are evident in the upper right, where they diverge slightly from one another (at lower z values they are superimposed). This narrow spread is hard to see at this scale; the curves separate more clearly when the log of the relative distances forms the ordinate. Some indication of the acceleration trend is evident, but the plot points and their statistical spreads are not sufficiently separated to be decisive. More and improved observations, especially farther out in space, will be needed to convincingly confirm the Accelerating Universe model. A future satellite, SNAP (Supernova Acceleration Probe), specifically dedicated to SN 1a data collection, has been proposed. Some information about SNAP is given in this figure:
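Why the model curves only separate at higher redshift can be illustrated numerically. The sketch below (a simplification, not the teams' actual fitting pipeline; a flat universe and H0 = 70 km/s/Mpc are assumed) compares the luminosity distance D_L = (1+z)(c/H0)∫dz'/E(z'), with E(z) = √(ΩM(1+z)³ + ΩΛ), for a matter-only model versus the accelerating (ΩM = 0.3, ΩΛ = 0.7) model:

```python
import math

C_KM_S = 299792.458  # speed of light, km/s
H0 = 70.0            # assumed Hubble constant, km/s/Mpc

def luminosity_distance(z, omega_m, omega_l, steps=2000):
    """D_L = (1+z) * (c/H0) * integral_0^z dz'/E(z') for a flat universe,
    with E(z) = sqrt(Omega_M (1+z)^3 + Omega_L).  Result in Mpc."""
    dz = z / steps
    integral = 0.0
    for i in range(steps):  # midpoint rule
        zm = (i + 0.5) * dz
        e = math.sqrt(omega_m * (1.0 + zm) ** 3 + omega_l)
        integral += dz / e
    return (1.0 + z) * (C_KM_S / H0) * integral

for z in (0.05, 0.5, 1.0):
    d_eds = luminosity_distance(z, 1.0, 0.0)   # matter-only (Einstein-de Sitter)
    d_lcdm = luminosity_distance(z, 0.3, 0.7)  # accelerating model
    print(f"z={z}: matter-only {d_eds:7.1f} Mpc, "
          f"with Lambda {d_lcdm:7.1f} Mpc, ratio {d_lcdm / d_eds:.3f}")
```

With these assumed parameters, the two models differ by only a few percent at z ≈ 0.05 but by roughly 30% at z ≈ 1: the accelerating model places a supernova farther away, hence fainter, which is the excess dimming the SN 1a teams reported.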

The SNAP program.

The third paper, Inflation in a Low Density Universe (Bucher and Spergel), presents an alternative to postulating a Λ-related repulsive energy. It reaches a similar conclusion: the Universe is Open and space is hyperbolic in its pattern of expansion. The model presented, the Open Inflationary Theory, is a variant of the Standard Inflation model of Guth and others. In the standard model, the inflation is related to the Inflaton Field (IF), which refers to particles that exist during inflation and result from quantum field oscillations (much like the virtual particles described above). Imagine a curve shaped like a broad, open U within an X-Y plot in which the vertical axis describes changes in potential energy and the horizontal axis changes in the IF. (Note: In other, similar diagrams, the parameter for the vertical axis is called energy density and the horizontal axis is labelled the Higgs Field [in which the inflaton is equivalent to the (possibly now detected) Higgs Boson].)

Schematic diagram depicting one suggested behavior for a ball, representing the very early Universe, as it moves in a Higgs Field during the Inflation process.

At the onset of inflation, the IF moves down the curve towards a minimum. This process can take place within the infinite entity that is conceptualized to be everything (a continuum without bounds), within which one to many individual Universes can come into existence. In the Open Inflationary model, there is a warp in the curve near high potential energies in which the IF can be trapped, as though in a local trough - the so-called "false minimum" or "false vacuum". Many such "troughs" exist - each a potential Universe. When certain quantum processes occur, the IF state may "tunnel" out of this trough and proceed down the regular U surface towards the minimum. Each time this happens, a true inflation occurs and a Big Bang ensues (note: Big Bangs describe only the growth of Universes, not the cause of the conditions that existed before inception). The IF, by its nature, imparts an antigravity force which leads to expansion. This process can occur anywhere within the continuum, and need not occur simultaneously, so that numerous universes (multiverses) can form and grow, much like the multitudes of bubbles in water approaching the boiling point.

In a January 2001 paper in Scientific American, J.P. Ostriker and P.J. Steinhardt describe the Quintessential Universe, in which the vacuum energy is modified so that it can interact with matter. It produces a dynamic quantum field, described by their term "quintessence", in which the presence of matter reduces the rate of acceleration below the magnitudes predicted for Cosmological-Constant expansion. In the long-term future, say 30 to 50 billion years from the present, the quintessential expansion may either continue to increase or may be slowed down and even reversed by the influence of matter, even if that matter is by then dispersed into a much greater volume.

One of the "curiosities" still intact after the 1990s proposal of expansion acceleration is that the acceleration seems to be relatively recent on the cosmological time scale. In other words, it has reasserted itself only in the last 5 billion years or less (initial estimates of 7 billion years have since been revised toward a more recent onset), amounting to a "second coming" following the first abrupt acceleration, or Inflation, in the early moments of the initial Big Bang; i.e., it has increasingly acted to counter the effects of Dark Matter gravity and may become even more dominant in the future.

One might logically surmise that this repulsive force associated with Dark Energy (whose density is thought to be constant over Universe time) has existed all along but was less than the matter-related gravitational forces when the Universe was smaller, during its first half of existence. Now, with further expansion, greater separation of galaxies, and reduced mass density, the repulsive force began about 5 to 7 billion years ago to exceed the attractive force (gravity), leading to a renewal of acceleration in which the pushing apart overtakes the earlier slowing of expansion, when gravity exerted more effect on the Universe's growth. Although not yet proved, it seems that this repulsive force from Dark Energy will become increasingly stronger relative to gravity, so that the acceleration of expansion will be ever greater (this implies that the galaxies will eventually become very far apart and increasingly dispersed).

Another consequence of this incremental expansion is that it increases the age of the Universe insofar as that age is determined from the Hubble constant. This offsets the anomaly first reported by investigators using the HST to refine the value of H0: their best estimate at first was 12 billion years - a value contradicted by independent estimates of 13-14 or more billion years for the ages of the farthest galaxies seen so far, and by the ages of stars, some of which give evidence of being older than 12 billion years. But recalculating Hubble ages with the acceleration parameter included yields ages of 14-15 billion years, which removes the discrepancy inherent in the star-age anomaly.
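The age recalculation described here can be sketched numerically. Assuming H0 = 70 km/s/Mpc (a round illustrative value, not the HST team's exact figure) and a flat universe, the age is t0 = (1/H0)∫da/(a·E(a)) integrated from a = 0 to the present (a = 1):

```python
import math

# Sketch with an assumed H0 = 70 km/s/Mpc: including the acceleration term
# (Omega_Lambda) lengthens the age derived from the Hubble constant.

HUBBLE_TIME_GYR = 9.78 / 0.7  # 1/H0 in Gyr for H0 = 70 km/s/Mpc (~13.97 Gyr)

def age_gyr(omega_m, omega_l, steps=100000):
    """t0 = (1/H0) * integral_0^1 da / (a * E(a)), flat universe,
    with E(a) = sqrt(Omega_M / a^3 + Omega_L)."""
    da = 1.0 / steps
    total = 0.0
    for i in range(steps):  # midpoint rule
        a = (i + 0.5) * da
        e = math.sqrt(omega_m / a ** 3 + omega_l)
        total += da / (a * e)
    return HUBBLE_TIME_GYR * total

print(f"matter-only (decelerating) age: {age_gyr(1.0, 0.0):.1f} Gyr")  # ~9.3 Gyr
print(f"accelerating (0.3, 0.7) age  : {age_gyr(0.3, 0.7):.1f} Gyr")   # ~13.5 Gyr
```

With these assumed parameters the accelerating model lengthens the derived age by roughly four billion years relative to the matter-only value - the same direction of correction described in the text (the exact figure depends on the H0 and Omega values adopted).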

As the consequences of an inflation-deceleration-acceleration Universe gained more widespread favor, workers considering this reached the conclusion that the two principal components of the Universe - Dark Matter and Dark Energy - were the principal players in the observed expansion history. Thus the small amount of Ordinary Matter (~4%) plays an insignificant (though essential) role. This diagram summarizes the current thinking on this "new" Universe:

From Riess and Turner, Scientific American, February 2004

Thus, the decreasing Dark Matter (red) and increasing Dark Energy (yellow) curves crossed about 4 billion years ago, at a redshift value when the Universe was about 4/5ths its present size. Thereafter (to the right), the Dark Energy (in terms of energy density) has continued to increase relative to the decreasing Dark Matter (again, in terms of energy density, since the equivalence of matter and energy [in the Einstein sense] noted for ordinary energy and matter likely holds for the always-dominant Dark forms as well).
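The crossover follows directly from how the two densities scale: matter density rises into the past as (1+z)³, while the Cosmological-Constant density stays fixed. A minimal sketch, using the ΩM = 0.3, ΩΛ = 0.7 values quoted earlier (the result lands near, though not exactly at, the 4/5ths figure read from the diagram):

```python
# Sketch: at what redshift does the Dark Energy density overtake matter?
# Matter density scales as (1+z)^3; the Cosmological-Constant density is fixed,
# so the curves cross where Omega_M * (1+z)^3 = Omega_L.

omega_m, omega_l = 0.3, 0.7  # values quoted in the text

# Equality of the two densities:
z_eq = (omega_l / omega_m) ** (1.0 / 3.0) - 1.0
# Onset of acceleration (deceleration parameter q = 0) comes earlier,
# where Omega_M * (1+z)^3 = 2 * Omega_L:
z_acc = (2.0 * omega_l / omega_m) ** (1.0 / 3.0) - 1.0

print(f"density equality at z ~ {z_eq:.2f} (scale factor {1/(1+z_eq):.2f})")
print(f"acceleration onset at z ~ {z_acc:.2f}")
```

Density equality comes out near z ≈ 0.33 (scale factor ≈ 0.75), with the onset of acceleration near z ≈ 0.67 - consistent with acceleration being a relatively recent development in cosmic history.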

To sum up the preceding paragraphs: recent evidence now suggests an Open Universe that tends towards the Flat type at one observational scale but may in fact be expanding infinitely in the hyperbolic space mode when envisioned at a greater scale. Both special forms of Inflation and the possible existence of a great quantity of repulsive energy may be involved. Unless huge amounts of matter/energy, having sufficient attractive power to more than counterbalance the Dark Energy-driven expansion, are discovered in the future, the Open model - with space increasing infinitely - is most likely to be the favored scheme. If the constituents of the Universe survive forever in some form (see below), then as they move ever outward, they will always be creating new space (that which holds particles) out of the Void, in perpetuity. One consequence of the Accelerating Universe model is that it produces a density that almost exactly matches that predicted from the earlier Inflation model. Track down the January 1999 and January 2001 issues of Scientific American to improve your insights into and understanding of these complex ideas. Also, consider this general diagram that summarizes some of the above ideas.

World cone graphic illustrating the Accelerating Universe model.

Espousals of competing hypotheses such as those above are a hallmark of scientific inquiry. As new evidence rolls in and specialists re-interpret existing evidence, major challenges to the prevailing theories arise; many die away, but sometimes older ideas are abandoned and strikingly new concepts emerge. This has been especially true for Cosmology. With the launch of powerful, versatile new space observatories and much better ground-based telescopes, the wealth of new data over the last 20 or so years has prompted refinements of earlier models and explanations, or their overthrow and the advancement of better-substantiated ones. This is true for Dark Energy. As of the middle of 2003, the notion of Dark Energy seemed plausible. Then, in December of 2003, a group of astronomers from ESA's Space Research and Technology Centre presented results of their interpretation of data from XMM-Newton, an X-ray observing spacecraft. The gist of their argument is based on studies of a group of galaxies about 10 billion light years from Earth. These appear to be giving out much more X-radiation than galaxies do today. If this is verified by further observations, the implication is that the early Universe was much denser than hitherto surmised. Also, their interpretation is that there were fewer galaxies then, and that all galaxies seem to continue growing by conversion of Dark Matter into Ordinary Matter. The higher densities during the first part of cosmological history run counter to the "concordance" model, which requires decreasing Dark Energy and a lower average density. Instead, much of that fundamental component of the Universe could have been Dark Matter, one consequence being that Dark Energy had "condensed". However, this new hypothesis so far does not explain the apparent acceleration of the present Universe.

These new interpretations (and speculations) described above have also turned attention to the nature and role of gravity in the expansion/deceleration-acceleration history of the Universe. One of the aforementioned articles in the February 2004 issue of Scientific American, "Out of the Darkness" by Georgi Dvali, reviews the latest thinking on gravity and its involvement in this history. We review the main ideas in the paragraphs below:

His prime thesis is that there may be two kinds of gravity: the conventional attractive mode (+) with which we are familiar, and a repulsive form (-), possibly the explanation for the Cosmological Constant or the Quintessence modes described above, though in Dvali's proposal of a different nature, as described in this and the next paragraph. The argument is made in the article that the gravity we observe at Earth and even galaxy scales is of the + variety, whereas at much larger scales the - variety takes over. Thus gravity does not behave the same when the entire Universe is considered as a unit. This repulsive gravity can be deduced, and its properties predicted, when matter and energy are examined at quantum scales - this is the so-called quantum gravity concept that many physicists feel will tie all particles and forces together in the Theory of Everything looked at several times in this Section. This implies that the standard Laws of Physics break down, or function differently, at the largest scales.

Dvali draws upon Superstring Theory and the concept of the (Mem)Brane we were introduced to on page 20-1 to account for the dual behavior of gravity. He examines the possibility that the Graviton, the force particle that accounts for attractive gravity as it flows between particles, is capable of moving in various ways within and into/out of the Brane (see pages 20-1a and 20-1b). Gravity within the Brane (a flat but three-dimensional Universe) operates over galactic scales as an attracting force and follows the inverse square law. In four dimensions, gravity follows an inverse cube law, whose effects do not reduce its ability to hold material and energy together when galaxies or galaxy groups define the scale. But when the postulation of multiple dimensions beyond four is introduced, these extra dimensions can accommodate the participation of gravity in a way that allows it to induce a repulsive force when the large-scale Universe is considered.
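The inverse-square vs. inverse-cube behavior mentioned above is a standard consequence of Gauss's law (a textbook result, not Dvali's own derivation): the gravitational flux through a sphere of radius r is spread over a surface whose area grows as r^(n-1) in n spatial dimensions, so the field strength falls off correspondingly faster as dimensions are added:

```latex
% Gauss's law in n spatial dimensions: flux spreads over a surface
% area proportional to r^{n-1}, so the field strength falls off as
\[
  F(r) \;\propto\; \frac{1}{r^{\,n-1}}
\]
% n = 3 (within the Brane): F \propto 1/r^2  (the familiar inverse square law)
% n = 4                   : F \propto 1/r^3  (the inverse cube law cited above)
```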

Gravity becomes weakened when the Gravitons are able to move outside the Brane (the flat Universe). They escape into the extra dimensions, which can be of more than one proposed type: 1) curling (into closed loops) within scales near quantum levels (Standard model); 2) infinite and curled (Randall-Sundrum model); 3) infinite and straight (Dvali model). Gravitons, according to this hypothesis, are the only particles capable of "leaking" out of Brane space into the truly empty space beyond. The less energetic Gravitons, those with low momentum (yielding long wavelengths), are the ones now escaping (some can return to the Brane, but the net effect is a loss). This loss, on a grand scale, is reducing the overall ability of Gravitons to decelerate the Universe. At some time in the past, the overall attractive effect of gravity, weakened as the net supply of Gravitons diminished, began to affect the Universe's behavior according to Einstein's General Relativity, in the sense that the energy-density of the Universe modifies space curvature. The changing curvature is expressed as a geometry that gradually favors acceleration over the restraining force of conventional gravity, which becomes unable to pull back both matter and energy (dark and luminous). The Dvali mechanism obviates the necessity to presume the repulsive force implied by either the Cosmological Constant or Quintessence; there are in fact several other models that lead to acceleration (see Dvali's article).

There are astronomers who have questioned many of these ideas concerning Dark Matter, Dark Energy, renewed expansion, etc. As is so true of all Science, each new idea begets several variations or even alternatives. Here is an excellent example: extremely small but significant variations in the Cosmic Background Radiation temperatures are cited as evidence for these invisible (and still undetected) forms of matter/energy. Their distribution is said to control the regions where galaxies first formed in the early, much smaller Universe. But Dr. Thomas Shanks (University of Durham) and colleagues have studied this distribution in terms of where major galactic clusters occur in the Cosmos and find a positive correlation. Shanks cites a mechanism known as the Inverse Compton Scattering Effect, in which the hot gases within the clusters energize the CBR. This has the effect of shortening the wavelengths of the microwave radiation, as shown in this diagram:

The inverse Compton scattering effect on CBR, such that the deviations in WMAP temperatures may actually relate to passage of microwaves through galactic gas.

The implication is that the anomalous irregularities in the CBR are actually related to modifications imposed by interaction with galactic gases as faraway radiation passes through high concentrations of gases in galactic clusters.

We close this provocative subsection with several ideas presented in a 2006 paper by Biermann and Kusenko in Physical Review Letters, concerned mainly with Dark Matter. They point out that the first hints of such matter are traceable to the 1930s, as an explanation of gravitational effects on Ordinary Matter, which possesses insufficient mass to provide the forces needed to hold galaxies and the entire Universe intact. Dark Matter seemingly possesses no electric charge and does not respond to the strong and weak forces involved in binding atomic nuclei into the atoms of Ordinary Matter. The true nature of Dark Matter remains a mystery, but Biermann and Kusenko speculate that it is largely (perhaps entirely) composed of sterile neutrinos (whose existence is postulated from Supernova studies), which differ from ordinary neutrinos in not being influenced by the fundamental weak force. And the proof that neutrinos have a tiny but real mass, plus the fact that they are so abundant, further points to their ability to influence gravity. If Biermann and Kusenko's idea proves correct, or at least is part of the story, then neutrinos might lay claim to being the most fundamental of the known (and hypothesized) particles. The writer (NMS) speculates that these neutrinos in some way also participate in Dark Energy, as such being in a form that is a logical extension of Einstein's argument for the interconvertibility of matter and energy. It is ironic that the most abundant category postulated as making up the physical Universe - the 74% that is Dark Energy - has to date proved so elusive and intractable to direct sensing that cosmologists have yet to confirm its nature and, even more rudimentarily, its very existence. That it ought to exist is deduced from models in which Dark Energy's presence helps to explain the observations accruing from the spurt of knowledge that is driving modern Cosmological thought.

The paragraphs above may seem a bit rambling, with "uncertainty" being the watchword. Is there anything really firm that can be said about Dark Energy? A press release on January 18, 2007 about research done by a Danish group of astronomers does seem to clarify several key points. This group has been analyzing Supernova distance/rate-of-expansion data, as have others. The group now takes a position that, if valid, does enlighten. First, they confirm the expansion rates being reported. Second, they find that there is no need for "fancy theories": the rate predicted by Einstein's Cosmological Constant is consistent with recent observations. Third, Einstein's notion of the energy of a vacuum is probably closest to the truth as of now. But of course "vacuum energy" seems an oxymoron - unless one assumes that there is no true (total) vacuum, such that no part of space is actually "nothing". What the source of vacuum energy is remains a mystery for the moment (NMS: virtual particles, but what are they?). Stay tuned!