According to legend, the first mathematical formulation of what we might today call a law of nature dates back to an Ionian named Pythagoras (ca. 580 BC-ca. 490 BC), famous for the theorem named after him: that the square of the hypotenuse (longest side) of a right triangle equals the sum of the squares of the other two sides. Pythagoras is said to have discovered the numerical relationship between the length of the strings used in musical instruments and the harmonic combinations of the sounds. In today's language we would describe that relationship by saying that the frequency—the number of vibrations per second—of a string vibrating under fixed tension is inversely proportional to the length of the string. From the practical point of view, this explains why bass guitars must have longer strings than ordinary guitars. Pythagoras probably did not really discover this—he also did not discover the theorem that bears his name—but there is evidence that some relation between string length and pitch was known in his day. If so, one could call that simple mathematical formula the first instance of what we now know as theoretical physics.
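The string law above can be sketched in a few lines of code. This is a minimal illustration, assuming a fixed tension and an arbitrary illustrative constant (220.0 is not a measured value):

```python
# Pythagorean string law: under fixed tension, frequency is inversely
# proportional to string length, f = k / L. The constant k is illustrative.
def frequency(length_m, k=220.0):
    """Frequency (in Hz) of a string of the given length, with f * L = k."""
    return k / length_m

full = frequency(1.0)   # full-length string
half = frequency(0.5)   # half-length string sounds one octave higher
assert half == 2 * full
```

Halving the string doubles the frequency, which is the octave relationship the Pythagoreans are said to have noticed.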
Apart from the Pythagorean law of strings, the only physical laws known correctly to the ancients were three laws detailed by Archimedes (ca. 287 BC-ca. 212 BC), by far the most eminent physicist of antiquity. In today's terminology, the law of the lever explains that small forces can lift large weights because the lever amplifies a force according to the ratio of the distances from the lever's fulcrum. The law of buoyancy states that any object immersed in a fluid will experience an upward force equal to the weight of the displaced fluid. And the law of reflection asserts that the angle between a beam of light and a mirror is equal to the angle between the mirror and the reflected beam. But Archimedes did not call them laws, nor did he explain them with reference to observation and measurement. Instead he treated them as if they were purely mathematical theorems, in an axiomatic system much like the one Euclid created for geometry.
As the Ionian influence spread, there appeared others who saw that the universe possesses an internal order, one that could be understood through observation and reason. Anaximander (ca. 610 BC-ca. 546 BC), a friend and possibly a student of Thales, argued that since human infants are helpless at birth, if the first human had somehow appeared on earth as an infant, it would not have survived. In what may have been humanity's first inkling of evolution, people, Anaximander reasoned, must therefore have evolved from other animals whose young are hardier. In Sicily, Empedocles (ca. 490 BC-ca. 430 BC) observed the use of an instrument called a clepsydra. Sometimes used as a ladle, it consisted of a sphere with an open neck and small holes in its bottom. When immersed in water it would fill, and if the open neck was then covered, the clepsydra could be lifted out without the water in it falling through the holes. Empedocles noticed that if you cover the neck before you immerse it, a clepsydra does not fill. He reasoned that something invisible must be preventing the water from entering the sphere through the holes—he had discovered the material substance we call air.
Around the same time Democritus (ca. 460 BC-ca. 370 BC), from an Ionian colony in northern Greece, pondered what happened when you break or cut an object into pieces. He argued that you ought not to be able to continue the process indefinitely. Instead he postulated that everything, including all living beings, is made of fundamental particles that cannot be cut or broken into parts. He named these ultimate particles atoms, from the Greek adjective meaning "uncuttable." Democritus believed that every material phenomenon is a product of the collision of atoms. In his view, dubbed atomism, all atoms move around in space, and, unless disturbed, move forward indefinitely. Today that idea is called the law of inertia.
The revolutionary idea that we are but ordinary inhabitants of the universe, not special beings distinguished by existing at its center, was first championed by Aristarchus (ca. 310 BC-ca. 230 BC), one of the last of the Ionian scientists. Only one of his calculations survives, a complex geometric analysis of careful observations he made of the size of the earth's shadow on the moon during a lunar eclipse. He concluded from his data that the sun must be much larger than the earth. Perhaps inspired by the idea that tiny objects ought to orbit mammoth ones and not the other way around, he became the first person to argue that the earth is not the center of our planetary system, but rather that it and the other planets orbit the much larger sun. It is a small step from the realization that the earth is just another planet to the idea that our sun is nothing special either. Aristarchus suspected that this was the case and believed that the stars we see in the night sky are actually nothing more than distant suns.
The Ionians were but one of many schools of ancient Greek philosophy, each with different and often contradictory traditions. Unfortunately, the Ionians' view of nature—that it can be explained through general laws and reduced to a simple set of principles—exerted a powerful influence for only a few centuries. One reason is that Ionian theories often seemed to have no place for the notion of free will or purpose, or the concept that gods intervene in the workings of the world. These were startling omissions, as profoundly unsettling to many Greek thinkers as they are to many people today. The philosopher Epicurus (341 BC-270 BC), for example, opposed atomism on the grounds that it is "better to follow the myths about the gods than to become a 'slave' to the destiny of natural philosophers." Aristotle too rejected the concept of atoms because he could not accept that human beings were composed of soulless, inanimate objects. The Ionian idea that the universe is not human-centered was a milestone in our understanding of the cosmos, but it was an idea that would be dropped and not picked up again, or commonly accepted, until Galileo, almost twenty centuries later.
Our modern understanding of the term "law of nature" is an issue philosophers argue at length, and it is a more subtle question than one may at first think. For example, the philosopher John W. Carroll compared the statement "All gold spheres are less than a mile in diameter" to a statement like "All uranium-235 spheres are less than a mile in diameter." Our observations of the world tell us that there are no gold spheres larger than a mile wide, and we can be pretty confident there never will be. Still, we have no reason to believe that there couldn't be one, and so the statement is not considered a law. On the other hand, the statement "All uranium-235 spheres are less than a mile in diameter" could be thought of as a law of nature because, according to what we know about nuclear physics, once a sphere of uranium-235 grew to a diameter greater than about six inches, it would demolish itself in a nuclear explosion. Hence we can be sure that such spheres do not exist. (Nor would it be a good idea to try to make one!) This distinction matters because it illustrates that not all generalizations we observe can be thought of as laws of nature, and that most laws of nature exist as part of a larger, interconnected system of laws.
While conceding that human behavior is indeed determined by the laws of nature, it also seems reasonable to conclude that the outcome is determined in such a complicated way and with so many variables as to make it impossible in practice to predict. For that one would need a knowledge of the initial state of each of the thousand trillion trillion molecules in the human body and to solve something like that number of equations. That would take a few billion years, which would be a bit late to duck when the person opposite aimed a blow.
Because it is so impractical to use the underlying physical laws to predict human behavior, we adopt what is called an effective theory. In physics, an effective theory is a framework created to model certain observed phenomena without describing in detail all of the underlying processes. For example, we cannot solve exactly the equations governing the gravitational interactions of every atom in a person's body with every atom in the earth. But for all practical purposes the gravitational force between a person and the earth can be described in terms of just a few numbers, such as the person's total mass. Similarly, we cannot solve the equations governing the behavior of complex atoms and molecules, but we have developed an effective theory called chemistry that provides an adequate explanation of how atoms and molecules behave in chemical reactions without accounting for every detail of the interactions. In the case of people, since we cannot solve the equations that determine our behavior, we use the effective theory that people have free will. The study of our will, and of the behavior that arises from it, is the science of psychology. Economics is also an effective theory, based on the notion of free will plus the assumption that people evaluate their possible alternative courses of action and choose the best. That effective theory is only moderately successful in predicting behavior because, as we all know, decisions are often not rational or are based on a defective analysis of the consequences of the choice. That is why the world is in such a mess.
We make models in science, but we also make them in everyday life. Model-dependent realism applies not only to scientific models but also to the conscious and subconscious mental models we all create in order to interpret and understand the everyday world. There is no way to remove the observer—us—from our perception of the world, which is created through our sensory processing and through the way we think and reason. Our perception—and hence the observations upon which our theories are based—is not direct, but rather is shaped by a kind of lens, the interpretive structure of our human brains.
Model-dependent realism corresponds to the way we perceive objects. In vision, one's brain receives a series of signals down the optic nerve. Those signals do not constitute the sort of image you would accept on your television. There is a blind spot where the optic nerve attaches to the retina, and the only part of your field of vision with good resolution is a narrow area of about 1 degree of visual angle around the retina's center, an area the width of your thumb when held at arm's length. And so the raw data sent to the brain are like a badly pixelated picture with a hole in it. Fortunately, the human brain processes that data, combining the input from both eyes, filling in gaps on the assumption that the visual properties of neighboring locations are similar and interpolating. Moreover, it reads a two-dimensional array of data from the retina and creates from it the impression of three-dimensional space. The brain, in other words, builds a mental picture or model.
The brain is so good at model building that if people are fitted with glasses that turn the images in their eyes upside down, their brains, after a time, change the model so that they again see things the right way up. If the glasses are then removed, they see the world upside down for a while, then again adapt. This shows that what one means when one says "I see a chair" is merely that one has used the light scattered by the chair to build a mental image or model of the chair. If the model is upside down, with luck one's brain will correct it before one tries to sit on the chair.
Imagine, say, that you wanted to travel from New York to Madrid, two cities that are at almost the same latitude. If the earth were flat, the shortest route would be to head straight east. If you did that, you would arrive in Madrid after traveling 3,707 miles. But due to the earth's curvature, there is a path that on a flat map looks curved and hence longer, but which is actually shorter. You can get there in 3,605 miles if you follow the great-circle route, which is to first head northeast, then gradually turn east, and then southeast. The difference in distance between the two routes is due to the earth's curvature, and a sign of its non-Euclidean geometry. Airlines know this, and arrange for their pilots to follow great-circle routes whenever practical.
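The two routes can be checked with a back-of-the-envelope calculation. The sketch below uses the standard haversine formula for the great-circle distance and an arc along the shared parallel for the "straight east" route; the city coordinates and earth radius are rounded assumptions, so the results only approximate the figures quoted above:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8  # mean earth radius, an assumed round figure

def great_circle_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points via the haversine formula."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

# Approximate coordinates (assumed): New York 40.7 N, 74.0 W; Madrid 40.4 N, 3.7 W
ny, madrid = (40.7, -74.0), (40.4, -3.7)

# "Straight east" route: an arc along the (roughly shared) parallel of latitude
mean_lat = radians((ny[0] + madrid[0]) / 2)
d_parallel = EARTH_RADIUS_MILES * cos(mean_lat) * abs(radians(madrid[1] - ny[1]))

d_great_circle = great_circle_miles(*ny, *madrid)
# d_great_circle comes out roughly a hundred miles shorter than d_parallel
```

With these rounded inputs the great-circle route is on the order of a hundred miles shorter than the constant-latitude route, in line with the book's 3,605 versus 3,707 miles.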
1. Gravity. This is the weakest of the four, but it is a long-range force and acts on everything in the universe as an attraction. This means that for large bodies the gravitational forces all add up and can dominate over all other forces.
2. Electromagnetism. This is also long-range and is much stronger than gravity, but it acts only on particles with an electric charge, being repulsive between charges of the same sign and attractive between charges of the opposite sign. This means the electric forces between large bodies cancel each other out, but on the scales of atoms and molecules they dominate. Electromagnetic forces are responsible for all of chemistry and biology.
3. Weak nuclear force. This causes radioactivity and plays a vital role in the formation of the elements in stars and the early universe. We don't, however, come into contact with this force in our everyday lives.
4. Strong nuclear force. This force holds together the protons and neutrons inside the nucleus of an atom. It also holds together the protons and neutrons themselves, which is necessary because they are made of still tinier particles, the quarks we mentioned in Chapter 3. The strong force is the energy source for the sun and nuclear power, but, as with the weak force, we don't have direct contact with it.
Feynman's graphical method provides a way of visualizing each term in the sum over histories. Those pictures, called Feynman diagrams, are one of the most important tools of modern physics. In QED the sum over all possible histories can be represented as a sum over Feynman diagrams like those below, which represent some of the ways it is possible for two electrons to scatter off each other through the electromagnetic force. In these diagrams the solid lines represent the electrons and the wavy lines represent photons. Time is understood as progressing from bottom to top, and places where lines join correspond to photons being emitted or absorbed by an electron. Diagram (A) represents the two electrons approaching each other, exchanging a photon, and then continuing on their way. That is the simplest way in which two electrons can interact electromagnetically, but we must consider all possible histories. Hence we must also include diagrams like (B). That diagram also pictures two lines coming in—the approaching electrons—and two lines going out—the scattered ones—but in this diagram the electrons exchange two photons before flying off. The diagrams pictured are only a few of the possibilities; in fact, there are an infinite number of diagrams, which must be mathematically accounted for.
Feynman diagrams aren't just a neat way of picturing and categorizing how interactions can occur. Feynman diagrams come with rules that allow you to read off, from the lines and vertices in each diagram, a mathematical expression. The probability, say, that the incoming electrons, with some given initial momentum, will end up flying off with some particular final momentum is then obtained by summing the contributions from each Feynman diagram. That can take some work, because, as we've said, there are an infinite number of them. Moreover, although the incoming and outgoing electrons are assigned a definite energy and momentum, the particles in the closed loops in the interior of the diagram can have any energy and momentum. That is important because in forming the Feynman sum one must sum not only over all diagrams but also over all those values of energy and momentum.
Feynman diagrams provided physicists with enormous help in visualizing and calculating the probabilities of the processes described by QED. But they did not cure one important ailment suffered by the theory: When you add the contributions from the infinite number of different histories, you get an infinite result. (If the successive terms in an infinite sum decrease fast enough, it is possible for the sum to be finite, but that, unfortunately, doesn't happen here.) In particular, when the Feynman diagrams are added up, the answer seems to imply that the electron has an infinite mass and charge. This is absurd, because we can measure the mass and charge and they are finite. To deal with these infinities, a procedure called renormalization was developed.
The process of renormalization involves subtracting quantities that are defined to be infinite and negative in such a way that, with careful mathematical accounting, the sum of the negative infinite values and the positive infinite values that arise in the theory almost cancel out, leaving a small remainder, the finite observed values of mass and charge. These manipulations might sound like the sort of things that get you a flunking grade on a school math exam, and renormalization is indeed, as it sounds, mathematically dubious. One consequence is that the values obtained by this method for the mass and charge of the electron can be any finite number. That has the advantage that physicists may choose the negative infinities in a way that gives the right answer, but the disadvantage that the mass and charge of the electron therefore cannot be predicted from the theory. But once we have fixed the mass and charge of the electron in this manner, we can employ QED to make many other very precise predictions, which all agree extremely closely with observation, so renormalization is one of the essential ingredients of QED. An early triumph of QED, for example, was the correct prediction of the so-called Lamb shift, a small change in the energy of one of the states of the hydrogen atom discovered in 1947.
The success of renormalization in QED encouraged attempts to look for quantum field theories describing the other three forces of nature. But the division of natural forces into four classes is probably artificial and a consequence of our lack of understanding. People have therefore sought a theory of everything that will unify the four classes into a single law that is compatible with quantum theory. This would be the holy grail of physics.
Whether M-theory exists as a single formulation or only as a network, we do know some of its properties. First, M-theory has eleven space-time dimensions, not ten. String theorists had long suspected that the prediction of ten dimensions might have to be adjusted, and recent work showed that one dimension had indeed been overlooked. Also, M-theory can contain not just vibrating strings but also point particles, two-dimensional membranes, three-dimensional blobs, and other objects that are more difficult to picture and occupy even more dimensions of space, up to nine. These objects are called p-branes (where p runs from zero to nine).
What about the enormous number of ways to curl up the tiny dimensions? In M-theory those extra space dimensions cannot be curled up in just any way. The mathematics of the theory restricts the manner in which the dimensions of the internal space can be curled. The exact shape of the internal space determines both the values of physical constants, such as the charge of the electron, and the nature of the interactions between elementary particles. In other words, it determines the apparent laws of nature. We say "apparent" because we mean the laws that we observe in our universe—the laws of the four forces, and the parameters such as mass and charge that characterize the elementary particles. But the more fundamental laws are those of M-theory.
The laws of M-theory therefore allow for different universes with different apparent laws, depending on how the internal space is curled. M-theory has solutions that allow for many different internal spaces, perhaps as many as 10^500, which means it allows for 10^500 different universes, each with its own laws. To get an idea of how many that is, think about this: If some being could analyze the laws predicted for each of those universes in just one millisecond and had started working on it at the big bang, at present that being would have studied just 10^20 of them. And that's without coffee breaks.
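The figure of 10^20 is easy to verify. Taking the 13.7-billion-year age of the universe cited later in the book, the number of milliseconds since the big bang works out as follows:

```python
# Rough check: analyzing one universe per millisecond since the big bang
# (~13.7 billion years, the age cited in the text) covers only about
# 10^20 of the 10^500 candidate sets of laws.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # about 3.16e7 seconds
age_years = 13.7e9
universes_checked = age_years * SECONDS_PER_YEAR * 1000  # one per millisecond
# universes_checked is about 4e20, i.e. on the order of 10^20
```

So even at that pace the hypothetical being has sampled a vanishingly small fraction, about 1 part in 10^480, of the possibilities.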
That the universe is expanding was news to Einstein. But the possibility that the galaxies are moving away from each other had been proposed a few years before Hubble's papers on theoretical grounds arising from Einstein's own equations. In 1922, Russian physicist and mathematician Alexander Friedmann investigated what would happen in a model universe based upon two assumptions that greatly simplified the mathematics: that the universe looks identical in every direction, and that it looks that way from every observation point. We know that Friedmann's first assumption is not exactly true—the universe fortunately is not uniform everywhere! If we gaze upward in one direction, we might see the sun; in another, the moon or a colony of migrating vampire bats. But the universe does appear to be roughly the same in every direction when viewed on a scale that is far larger—larger even than the distance between galaxies. It is something like looking down at a forest. If you are close enough, you can make out individual leaves, or at least trees, and the spaces between them. But if you are so high up that if you hold out your thumb it covers a square mile of trees, the forest will appear to be a uniform shade of green. We would say that, on that scale, the forest is uniform.
Based on his assumptions Friedmann was able to discover a solution to Einstein's equations in which the universe expanded in the manner that Hubble would soon discover to be true. In particular, Friedmann's model universe begins with zero size and expands until gravitational attraction slows it down, and eventually causes it to collapse in upon itself. (There are, it turns out, two other types of solutions to Einstein's equations that also satisfy the assumptions of Friedmann's model, one corresponding to a universe in which the expansion continues forever, though it does slow a bit, and another to a universe in which the rate of expansion slows toward zero, but never quite reaches it.) Friedmann died a few years after producing this work, and his ideas remained largely unknown until after Hubble's discovery. But in 1927 a professor of physics and Roman Catholic priest named Georges Lemaitre proposed a similar idea: If you trace the history of the universe backward into the past, it gets tinier and tinier until you come upon a creation event—what we now call the big bang.
Not everyone liked the big bang picture. In fact, the term "big bang" was coined in 1949 by Cambridge astrophysicist Fred Hoyle, who believed in a universe that expanded forever, and meant the term as a derisive description. The first direct observations supporting the idea didn't come until 1965, with the discovery that there is a faint background of microwaves throughout space. This cosmic microwave background radiation, or CMBR, is the same as that in your microwave oven, but much less powerful. You can observe the CMBR yourself by tuning your television to an unused channel—a few percent of the snow you see on the screen will be caused by it. The radiation was discovered by accident by two Bell Labs scientists trying to eliminate such static from their microwave antenna. At first they thought the static might be coming from the droppings of pigeons roosting in their apparatus, but it turned out their problem had a more interesting origin—the CMBR is radiation left over from the very hot and dense early universe that would have existed shortly after the big bang. As the universe expanded, it cooled until the radiation became just the faint remnant we now observe. At present these microwaves could heat your food to only about -270 degrees Centigrade, 3 degrees above absolute zero, and not very useful for popping corn.
Astronomers have also found other fingerprints supporting the big bang picture of a hot, tiny early universe. For example, during the first minute or so, the universe would have been hotter than the center of a typical star. During that period the entire universe would have acted as a nuclear fusion reactor. The reactions would have ceased when the universe expanded and cooled sufficiently, but the theory predicts that this should have left a universe composed mainly of hydrogen, but also about 23 percent helium, with traces of lithium (all heavier elements were made later, inside stars). The calculation is in good accordance with the amounts of helium, hydrogen, and lithium we observe.
Our very existence imposes rules determining from where and at what time it is possible for us to observe the universe. That is, the fact of our being restricts the characteristics of the kind of environment in which we find ourselves. That principle is called the weak anthropic principle. (We'll see shortly why the adjective "weak" is attached.) A better term than "anthropic principle" would have been "selection principle," because the principle refers to how our own knowledge of our existence imposes rules that select, out of all the possible environments, only those environments with the characteristics that allow life.
Though it may sound like philosophy, the weak anthropic principle can be used to make scientific predictions. For example, how old is the universe? As we'll soon see, for us to exist the universe must contain elements such as carbon, which are produced by cooking lighter elements inside stars. The carbon must then be scattered through space in a supernova explosion, and eventually condense as part of a planet in a new-generation solar system. In 1961 physicist Robert Dicke argued that the process takes about 10 billion years, so our being here means that the universe must be at least that old. On the other hand, the universe cannot be much older than 10 billion years, since in the far future all the fuel for stars will have been used up, and we require hot stars for our sustenance. Hence the universe must be about 10 billion years old. That is not an extremely precise prediction, but it is true—according to current data the big bang occurred about 13.7 billion years ago.
As was the case with the age of the universe, anthropic predictions usually produce a range of values for a given physical parameter rather than pinpointing it precisely. That's because our existence, while it might not require a particular value of some physical parameter, often is dependent on such parameters not varying too far from where we actually find them. We furthermore expect that the actual conditions in our world are typical within the anthropically allowed range. For example, if only modest orbital eccentricities, say between zero and 0.5, will allow life, then an eccentricity of 0.1 should not surprise us because among all the planets in the universe, a fair percentage probably have orbits with eccentricities that small. But if it turned out that the earth moved in a near-perfect circle, with eccentricity, say, of 0.00000000001, that would make the earth a very special planet indeed, and motivate us to try to explain why we find ourselves living in such an anomalous home. That idea is sometimes called the principle of mediocrity.
The lucky coincidences pertaining to the shape of planetary orbits, the mass of the sun, and so on are called environmental because they arise from the serendipity of our surroundings and not from a fluke in the fundamental laws of nature. The age of the universe is also an environmental factor, since there are an earlier and a later time in the history of the universe, but we must live in this era because it is the only era conducive to life. Environmental coincidences are easy to understand because ours is only one cosmic habitat among many that exist in the universe, and we obviously must exist in a habitat that supports life.
The weak anthropic principle is not very controversial. But there is a stronger form that we will argue for here, although it is regarded with disdain among some physicists. The strong anthropic principle suggests that the fact that we exist imposes constraints not just on our environment but on the possible form and content of the laws of nature themselves. The idea arose because it is not only the peculiar characteristics of our solar system that seem oddly conducive to the development of human life but also the characteristics of our entire universe, and that is much more difficult to explain.
The tale of how the primordial universe of hydrogen, helium, and a bit of lithium evolved to a universe harboring at least one world with intelligent life like us is a tale of many chapters. As we mentioned earlier, the forces of nature had to be such that heavier elements—especially carbon—could be produced from the primordial elements, and remain stable for at least billions of years. Those heavy elements were formed in the furnaces we call stars, so the forces first had to allow stars and galaxies to form. Those grew from the seeds of tiny inhomogeneities in the early universe, which was almost completely uniform but thankfully contained density variations of about 1 part in 100,000. However, the existence of stars, and the existence inside those stars of the elements we are made of, is not enough. The dynamics of the stars had to be such that some would eventually explode, and, moreover, explode precisely in a way that could disburse the heavier elements through space. In addition, the laws of nature had to dictate that those remnants could recondense into a new generation of stars, these surrounded by planets incorporating the newly formed heavy elements. Just as certain events on early earth had to occur in order to allow us to develop, so too was each link of this chain necessary for our existence. But in the case of the events resulting in the evolution of the universe, such developments were governed by the balance of the fundamental forces of nature, and it is those whose interplay had to be just right in order for us to exist.
Though one might imagine "living" organisms such as intelligent computers produced from other elements, such as silicon, it is doubtful that life could have spontaneously evolved in the absence of carbon. The reasons for that are technical but have to do with the unique manner in which carbon bonds with other elements. Carbon dioxide, for example, is gaseous at room temperature, and biologically very useful. Since silicon is the element directly below carbon on the periodic table, it has similar chemical properties. However, silicon dioxide, quartz, is far more useful in a rock collection than in an organism's lungs. Still, perhaps lifeforms could evolve that feast on silicon and rhythmically twirl their tails in pools of liquid ammonia. Even that type of exotic life could not evolve from just the primordial elements, for those elements can form only two stable compounds, lithium hydride, which is a colorless crystalline solid, and hydrogen gas, neither of them a compound likely to reproduce or even to fall in love. Also, the fact remains that we are a carbon life-form, and that raises the issue of how carbon, whose nucleus contains six protons, and the other heavy elements in our bodies were created.
The first step occurs when older stars start to accumulate helium, which is produced when two hydrogen nuclei collide and fuse with each other. This fusion is how stars create the energy that warms us. Two helium atoms can in turn collide to form beryllium, an atom whose nucleus contains four protons. Once beryllium is formed, it could in principle fuse with a third helium nucleus to form carbon. But that doesn't happen, because the isotope of beryllium that is formed decays almost immediately back into helium nuclei.
The situation changes when a star starts to run out of hydrogen. When that happens the star's core collapses until its central temperature rises to about 100 million degrees Kelvin. Under those conditions, nuclei encounter each other so often that some beryllium nuclei collide with a helium nucleus before they have had a chance to decay. Beryllium can then fuse with helium to form an isotope of carbon that is stable. But that carbon is still a long way from forming ordered aggregates of chemical compounds of the type that can enjoy a glass of Bordeaux, juggle flaming bowling pins, or ask questions about the universe. For beings such as humans to exist, the carbon must be moved from inside the star to friendlier neighborhoods. That, as we've said, is accomplished when the star, at the end of its life cycle, explodes as a supernova, expelling carbon and other heavy elements that later condense into a planet.
This process of carbon creation is called the triple alpha process because "alpha particle" is another name for the nucleus of the isotope of helium involved, and because the process requires that three of them (eventually) fuse together. The usual physics predicts that the rate of carbon production via the triple alpha process ought to be quite small. Noting this, in 1952 Hoyle predicted that the sum of the energies of a beryllium nucleus and a helium nucleus must be almost exactly the energy of a certain quantum state of the isotope of carbon formed, a situation called a resonance, which greatly increases the rate of a nuclear reaction. At the time, no such energy level was known, but based on Hoyle's suggestion, William Fowler at Caltech sought and found it, providing important support for Hoyle's views on how complex nuclei were created.
Hoyle wrote, "I do not believe that any scientist who examined the evidence would fail to draw the inference that the laws of nuclear physics have been deliberately designed with regard to the consequences they produce inside the stars." At the time no one knew enough nuclear physics to understand the magnitude of the serendipity that resulted in these exact physical laws. But in investigating the validity of the strong anthropic principle, in recent years physicists began asking themselves what the universe would have been like if the laws of nature were different. Today we can create computer models that tell us how the rate of the triple alpha reaction depends upon the strength of the fundamental forces of nature. Such calculations show that a change of as little as 0.5 percent in the strength of the strong nuclear force, or 4 percent in the electric force, would destroy either nearly all carbon or all oxygen in every star, and hence the possibility of life as we know it. Change those rules of our universe just a bit, and the conditions for our existence disappear!
If one assumes that a few hundred million years in stable orbit are necessary for planetary life to evolve, the number of space dimensions is also fixed by our existence. That is because, according to the laws of gravity, it is only in three dimensions that stable elliptical orbits are possible. Circular orbits are possible in other dimensions, but those, as Newton feared, are unstable. In any but three dimensions even a small disturbance, such as that produced by the pull of the other planets, would send a planet off its circular orbit and cause it to spiral either into or away from the sun, so we would either burn up or freeze. Also, in more than three dimensions the gravitational force between two bodies would decrease more rapidly than it does in three dimensions. In three dimensions the gravitational force drops to 1/4 of its value if one doubles the distance. In four dimensions it would drop to 1/8, in five dimensions it would drop to 1/16, and so on. As a result, in more than three dimensions the sun would not be able to exist in a stable state with its internal pressure balancing the pull of gravity. It would either fall apart or collapse to form a black hole, either of which could ruin your day. On the atomic scale, the electrical forces would behave in the same way as gravitational forces. That means the electrons in atoms would either escape or spiral into the nucleus. In neither case would atoms as we know them be possible.
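The scaling described above follows from a simple rule: in a space of n dimensions, gravity falls off as 1 over the distance raised to the power n minus 1, so doubling the separation divides the force by 2 to the power n minus 1. The sketch below (an illustrative calculation, not anything from the text itself) reproduces the 1/4, 1/8, 1/16 sequence.

```python
# Sketch: how the gravitational force between two bodies weakens when
# their separation doubles, as a function of the number of spatial
# dimensions n. The force scales as 1/r**(n-1), so doubling r
# multiplies the force by 2**-(n-1).
def force_ratio_on_doubling(n_dimensions: int) -> float:
    """Factor by which gravity drops when the distance doubles."""
    return 2.0 ** -(n_dimensions - 1)

for n in (3, 4, 5):
    print(f"{n} dimensions: force drops to {force_ratio_on_doubling(n)}")
# 3 dimensions -> 0.25 (1/4), 4 -> 0.125 (1/8), 5 -> 0.0625 (1/16)
```

In three dimensions this is just the familiar inverse-square law; the steeper falloff in higher dimensions is what makes both planetary orbits and self-gravitating stars unstable there.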
If the total energy of the universe must always remain zero, and it costs energy to create a body, how can a whole universe be created from nothing? That is why there must be a law like gravity. Because gravity is attractive, gravitational energy is negative: One has to do work to separate a gravitationally bound system, such as the earth and moon. This negative energy can balance the positive energy needed to create matter, but it's not quite that simple. The negative gravitational energy of the earth, for example, is less than a billionth of the positive energy of the matter particles the earth is made of. A body such as a star will have more negative gravitational energy, and the smaller it is (the closer the different parts of it are to each other), the greater this negative gravitational energy will be. But before it can become greater than the positive energy of the matter, the star will collapse to a black hole, and black holes have positive energy. That's why empty space is stable. Bodies such as stars or black holes cannot just appear out of nothing. But a whole universe can.
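The "less than a billionth" figure for the earth can be checked with a rough back-of-the-envelope estimate. Treating the earth as a uniform-density sphere (an approximation made here purely for illustration), its gravitational binding energy is 3GM²/5R, while the positive energy of its matter is Mc²:

```python
# Rough estimate: the earth's negative gravitational (binding) energy,
# using the uniform-sphere approximation U = 3*G*M**2 / (5*R),
# compared with the rest-mass energy of its matter, E = M*c**2.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.97e24     # mass of the earth, kg
R = 6.371e6     # mean radius of the earth, m
c = 2.998e8     # speed of light, m/s

binding_energy = 3 * G * M**2 / (5 * R)   # roughly 2.2e32 joules
rest_energy = M * c**2                     # roughly 5.4e41 joules
ratio = binding_energy / rest_energy
print(f"gravitational / rest energy = {ratio:.1e}")
```

The ratio comes out to a few parts in ten billion, comfortably under one billionth, in line with the claim in the text.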
Because gravity shapes space and time, it allows space-time to be locally stable but globally unstable. On the scale of the entire universe, the positive energy of the matter can be balanced by the negative gravitational energy, and so there is no restriction on the creation of whole universes. Because there is a law like gravity, the universe can and will create itself from nothing in the manner described in Chapter 6. Spontaneous creation is the reason there is something rather than nothing, why the universe exists, why we exist. It is not necessary to invoke God to light the blue touch paper and set the universe going.
Why are the fundamental laws as we have described them? The ultimate theory must be consistent and must predict finite results for quantities that we can measure. We've seen that there must be a law like gravity, and we saw in Chapter 5 that for a theory of gravity to predict finite quantities, the theory must have what is called supersymmetry between the forces of nature and the matter on which they act. M-theory is the most general supersymmetric theory of gravity. For these reasons M-theory is the only candidate for a complete theory of the universe. If it is finite—and this has yet to be proved—it will be a model of a universe that creates itself. We must be part of this universe, because there is no other consistent model.
M-theory is the unified theory Einstein was hoping to find. The fact that we human beings—who are ourselves mere collections of fundamental particles of nature—have been able to come this close to an understanding of the laws governing us and our universe is a great triumph. But perhaps the true miracle is that abstract considerations of logic lead to a unique theory that predicts and describes a vast universe full of the amazing variety that we see. If the theory is confirmed by observation, it will be the successful conclusion of a search going back more than 3,000 years. We will have found the grand design.
M-theory is not a theory in the usual sense. It is a whole family of different theories, each of which is a good description of observations only in some range of physical situations. It is a bit like a map. As is well known, one cannot show the whole of the earth's surface on a single map. The usual Mercator projection used for maps of the world makes areas appear larger and larger in the far north and south and doesn't cover the North and South Poles. To faithfully map the entire earth, one has to use a collection of maps, each of which covers a limited region. The maps overlap each other, and where they do, they show the same landscape. M-theory is similar. The different theories in the M-theory family may look very different, but they can all be regarded as aspects of the same underlying theory. They are versions of the theory that are applicable only in limited ranges—for example, when certain quantities such as energy are small. Like the overlapping maps in a Mercator projection, where the ranges of different versions overlap, they predict the same phenomena. But just as there is no flat map that is a good representation of the earth's entire surface, there is no single theory that is a good representation of observations in all situations.
By examining the model universes we generate when the theories of physics are altered in certain ways, one can study the effect of changes to physical law in a methodical manner. It turns out that it is not only the strengths of the strong nuclear force and the electromagnetic force that are made to order for our existence. Most of the fundamental constants in our theories appear fine-tuned in the sense that if they were altered by only modest amounts, the universe would be qualitatively different, and in many cases unsuitable for the development of life. For example, if the other nuclear force, the weak force, were much weaker, in the early universe all the hydrogen in the cosmos would have turned to helium, and hence there would be no normal stars; if it were much stronger, exploding supernovas would not eject their outer envelopes, and hence would fail to seed interstellar space with the heavy elements planets require to foster life. If protons were 0.2 percent heavier, they would decay into neutrons, destabilizing atoms. If the sum of the masses of the types of quark that make up a proton were changed by as little as 10 percent, there would be far fewer of the stable atomic nuclei of which we are made; in fact, the summed quark masses seem roughly optimized for the existence of the largest number of stable nuclei.