Among the wonders that science has uncovered about the universe in which we dwell, no subject has caused more fascination and fury than evolution. That is probably because no majestic galaxy or fleeting neutrino has implications that are as personal. Learning about evolution can transform us in a deep way. It shows us our place in the whole splendid and extraordinary panoply of life. It unites us with every living thing on the Earth today and with myriads of creatures long dead. Evolution gives us the true account of our origins, replacing the myths that satisfied us for thousands of years. Some find this deeply frightening, others ineffably thrilling.
By sequencing the DNA of various species and measuring how similar these sequences are, we can reconstruct their evolutionary relationships. This is done by making the entirely reasonable assumption that species having more similar DNA are more closely related—that is, their common ancestors lived more recently. These molecular methods have not produced much change in the pre-DNA-era trees of life: both the visible traits of organisms and their DNA sequences usually give the same information about evolutionary relationships.
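The logic of grouping species by sequence similarity can be sketched as a simple average-linkage clustering procedure. The snippet below is a minimal illustration of that idea, not a production phylogenetics method; the short "sequences" and the species chosen are invented for the example, and real studies use long alignments and statistical models of sequence change.

```python
# Toy sketch of tree-building from DNA similarity: repeatedly join the
# two most similar clusters (UPGMA-style average linkage).

def distance(a, b):
    """Fraction of sites at which two aligned sequences differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def build_tree(seqs):
    """Cluster sequences into a nested tuple, most similar pairs first."""
    clusters = {name: (name,) for name in seqs}   # cluster -> member names
    trees = {name: name for name in seqs}         # cluster -> nested tuple
    while len(clusters) > 1:
        # find the pair of clusters with the smallest average distance
        a, b = min(
            ((p, q) for p in clusters for q in clusters if p < q),
            key=lambda pair: sum(distance(seqs[x], seqs[y])
                                 for x in clusters[pair[0]]
                                 for y in clusters[pair[1]])
                             / (len(clusters[pair[0]]) * len(clusters[pair[1]])),
        )
        merged = a + "+" + b
        clusters[merged] = clusters.pop(a) + clusters.pop(b)
        trees[merged] = (trees.pop(a), trees.pop(b))
    return next(iter(trees.values()))

# Invented aligned snippets: human/chimp nearly identical, mouse farther,
# chicken farthest -- mirroring known evolutionary relationships.
seqs = {
    "human":   "ACGTACGTAC",
    "chimp":   "ACGTACGTAT",
    "mouse":   "ACGTTCGAAC",
    "chicken": "TCGATCGAGG",
}
print(build_tree(seqs))  # human and chimp pair up first, chicken joins last
```

The output groups human with chimp, then mouse, with chicken as the outgroup — the same nesting a tree of visible traits would give.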
The idea of common ancestry leads naturally to powerful and testable predictions about evolution. If we see that birds and reptiles group together based on their features and DNA sequences, we can predict that we should find common ancestors of birds and reptiles in the fossil record. Such predictions have been fulfilled, giving some of the strongest evidence for evolution.
It’s important to realize, though, that there’s a real difference in what you expect to see if organisms were consciously designed rather than if they evolved by natural selection. Natural selection is not a master engineer, but a tinkerer. It doesn’t produce the absolute perfection achievable by a designer starting from scratch, but merely the best it can do with what it has to work with. Mutations for a perfect design may not arise because they are simply too rare. The African rhinoceros, with its two tandemly placed horns, may be better at defending itself and sparring with its brethren than is the Indian rhino, graced with but a single horn (actually, these are not true horns, but compacted hairs). But a mutation producing two horns may simply not have arisen among Indian rhinos. Still, one horn is better than no horns. The Indian rhino is better off than its hornless ancestor, but accidents of genetic history may have led to a less than perfect “design.” And, of course, every instance of a plant or animal that is parasitized or diseased represents a failure to adapt. Likewise for extinction, which has claimed well over 99 percent of all species that ever lived. (This, by the way, poses an enormous problem for theories of intelligent design. It doesn’t seem so intelligent to design millions of species that are destined to go extinct, and then replace them with other, similar species, most of which will also vanish. ID supporters have never addressed this difficulty.)
Organisms aren’t just at the mercy of the luck of the mutational draw, but are also constrained by their development and evolutionary history. Mutations are changes in traits that already exist; they almost never create brand-new features. This means that evolution must build a new species starting with the design of its ancestors. Evolution is like an architect who cannot design a building from scratch, but must build every new structure by adapting a preexisting building, keeping the structure habitable all the while. This leads to some compromises. We men, for example, would be better off if our testes formed directly outside the body, where the cooler temperature is better for sperm. The testes, however, begin development in the abdomen. When the fetus is six or seven months old, they migrate down into the scrotum through two channels called the inguinal canals, removing them from the damaging heat of the rest of the body. Those canals leave weak spots in the body wall that make men prone to inguinal hernias. These hernias are bad: they can obstruct the intestine, and sometimes caused death in the years before surgery. No intelligent designer would have given us this tortuous testicular journey. We’re stuck with it because we inherited our developmental program for making testes from fish-like ancestors, whose gonads developed, and remained, completely within the abdomen. We begin development with fish-like internal testes, and our testicular descent evolved later, as a clumsy add-on.
The first organisms, simple photosynthetic bacteria, appear in sediments about 3.5 billion years old, only about a billion years after the planet was formed. These single cells were all that occupied the Earth for the next two billion years, after which we see the first simple “eukaryotes”: organisms having true cells with nuclei and chromosomes. Then, around 600 million years ago, a whole gamut of relatively simple but multicelled organisms arises, including worms, jellyfish, and sponges. These groups diversify over the next several million years, with terrestrial plants and tetrapods (four-legged animals, the earliest of which were lobe-finned fish) appearing about 400 million years ago. Earlier groups, of course, often persisted: photosynthetic bacteria, sponges, and worms appear in the early fossil record, and are still with us.
Fifty million years later we find the first true amphibians, and after another fifty million years reptiles come along. The first mammals show up around 250 million years ago (arising, as predicted, from reptilian ancestors), and the first birds, also descended from reptiles, show up fifty million years later. After the earliest mammals appear, they, along with insects and land plants, become ever more diverse, and as we approach the shallowest rocks, the fossils increasingly come to resemble living species. Humans are newcomers on the scene—our lineage branches off from that of other primates only about seven million years ago, the merest sliver of evolutionary time. Various imaginative analogies have been used to make this point, and it is worth making again. If the entire course of evolution were compressed into a single year, the earliest bacteria would appear at the end of March, but we wouldn’t see the first human ancestors until 6 a.m. on December 31. The golden age of Greece, about 500 BC, would occur just thirty seconds before midnight.
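The compressed-calendar analogy is simple proportional arithmetic. A minimal sketch, assuming the "year" spans the Earth's full history of roughly 4.6 billion years (the exact hours and seconds quoted in such analogies depend on the total span assumed):

```python
from datetime import datetime, timedelta

EARTH_AGE = 4.6e9  # assumed total span, in years

def calendar_moment(years_ago):
    """Map an event `years_ago` onto a single 365-day calendar year."""
    fraction_elapsed = 1 - years_ago / EARTH_AGE
    start = datetime(2001, 1, 1)  # any non-leap year works
    return start + timedelta(days=365 * fraction_elapsed)

print(calendar_moment(3.5e9))  # earliest bacteria: late March
print(calendar_moment(7e6))    # human lineage branches off: December 31
print(calendar_moment(2500))   # classical Greece: seconds before midnight
```

With these assumptions the first bacteria land in late March and the human lineage appears on the morning of December 31, matching the spirit (if not the exact clock times) of the analogy in the text.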
Because reptiles appear in the fossil record before birds, we can guess that the common ancestor of birds and reptiles was an ancient reptile, and would have looked like one. We now know that this common ancestor was a dinosaur. Its overall appearance would give few clues that it was indeed a “missing link”—that one lineage of descendants would later give rise to all modern birds, and the other to more dinosaurs. Truly birdlike traits, such as wings and a large breastbone for anchoring the flight muscles, would have evolved only later on the branch leading to birds. And as that lineage itself progressed from reptiles to birds, it sprouted off many species having mixtures of reptile-like and bird-like traits. Some of those species went extinct, while others continued evolving into what are now modern birds. It is to these groups of ancient species, the relatives of species near the branch point, that we must look for evidence of common ancestry.
This is where the prediction comes in. If there were lobe-finned fishes but no terrestrial vertebrates 390 million years ago, and clearly terrestrial vertebrates 360 million years ago, where would you expect to find the transitional forms? Somewhere in between. Following this logic, Shubin predicted that if transitional forms existed, their fossils would be found in strata around 375 million years old. Moreover, the rocks would have to be from freshwater rather than marine sediments, because late lobe-finned fish and early amphibians both lived in fresh water.
Searching a college geology textbook for a map of exposed freshwater sediments of the right age, Shubin and his colleagues zeroed in on a paleontologically unexplored region of the Canadian Arctic: Ellesmere Island, which sits in the Arctic Ocean north of Canada. And after five long years of fruitless and expensive searching, they finally hit pay dirt: a group of fossil skeletons stacked atop one another in sedimentary rock from an ancient stream. When Shubin first saw the fossil face poking out of the rock, he knew that he had at last found his transitional form. In honor of the local Inuit people and the donor who helped fund the expeditions, the fossil was named Tiktaalik roseae (“Tiktaalik” means “large freshwater fish” in Inuit, and “roseae” is a cryptic reference to the anonymous donor).
Tiktaalik has features that make it a direct link between the earlier lobe-finned fish and the later amphibians. With gills, scales, and fins, it was clearly a fish that lived its life in water. But it also has amphibian-like features. For one thing, its head is flattened like that of a salamander, with the eyes and nostrils on top rather than on the sides of the skull. This suggests that it lived in shallow water and could peer, and probably breathe, above the surface. The fins had become more robust, allowing the animal to flex itself upward to help survey its surroundings. And, like the early amphibians, Tiktaalik has a neck. Fish don’t have necks—their skull joins directly to their shoulders.
Tiktaalik itself was not ready for life ashore. For one thing, it had not yet evolved a limb that would allow it to walk. And it still had internal gills for breathing underwater. So we can make another prediction. Somewhere, in freshwater sediments about 370 million years old, we’ll find a very early land-dweller with reduced gills and limbs a bit sturdier than those of Tiktaalik.
A good candidate is the hippopotamus, which, although closely related to terrestrial mammals, is about as aquatic as a land mammal can get. (There are two species, the pygmy hippo and the “regular” hippo, whose scientific name is, appropriately, Hippopotamus amphibius.) Hippos spend most of their time submerged in tropical rivers and swamps, surveying their domain with eyes, noses, and ears that sit atop their head, all of which can be tightly closed underwater. Hippos mate in the water, and their babies, who can swim before they can walk, are born and suckle underwater. Because they are mostly aquatic, hippos have special adaptations for coming ashore to graze: they usually feed at night, and, because they’re prone to sunburn, secrete an oily red fluid that contains a pigment—hipposudoric acid—that acts as a sunscreen and possibly an antibiotic. This has given rise to the myth that hippos sweat blood. Hippos are obviously well adapted to their environment, and it’s not hard to see that if they could find enough food in the water, they might eventually evolve into totally aquatic, whale-like creatures.
It’s been recognized since the seventeenth century that whales and their relatives, the dolphins and porpoises, are mammals. They are warm-blooded, produce live young whom they feed with milk, and have hair around their blowholes. And evidence from whale DNA, as well as from vestigial traits like their rudimentary pelvis and hind legs, shows that their ancestors lived on land. Whales almost certainly evolved from a species of the artiodactyls: the group of mammals that have an even number of toes, such as camels and pigs. Biologists now believe that the closest living relative of whales is—you guessed it—the hippopotamus, so maybe the hippo-to-whale scenario is not so far-fetched after all.
Indohyus was not the ancestor of whales, but was almost certainly their cousin. But if we go back four million more years, to fifty-two million years ago, we see what might well be that ancestor. It is a fossil skull from a wolf-sized creature called Pakicetus, which is a bit more whale-like than Indohyus, having simpler teeth and more whale-like ears. Pakicetus still looked nothing like a modern whale, so if you had been around to see it, you wouldn’t have guessed that it or its close relatives would give rise to a dramatic evolutionary radiation. Then follows, in rapid order, a series of fossils that become more and more aquatic with time. At fifty million years ago there is the remarkable Ambulocetus (literally, “walking whale”), with an elongated skull and reduced but still robust limbs, limbs that still ended in hooves that reveal its ancestry. It probably spent most of its time in shallow water, and would have waddled awkwardly on land, much like a seal. Rodhocetus (forty-seven million years ago) is even more aquatic. Its nostrils have moved somewhat backward, and it has a more elongated skull. With stout extensions on the backbone to anchor its tail muscles, Rodhocetus must have been a good swimmer, but was handicapped on land by its small pelvis and hindlimbs. The creature certainly spent most if not all of its time at sea. Finally, at forty million years ago, we find the fossils Basilosaurus and Dorudon—clearly fully aquatic mammals, with short necks and blowholes atop the skull. They could not have spent any time on land, for their pelvis and hindlimbs were reduced (the 50-foot Basilosaurus had legs only 2 feet long) and were unconnected to the rest of the skeleton.
Opponents of evolution always raise the same argument when vestigial traits are cited as evidence for evolution. “The features are not useless,” they say. “They are either useful for something, or we haven’t yet discovered what they’re for.” They claim, in other words, that a trait can’t be vestigial if it still has a function, or a function yet to be found.
But this rejoinder misses the point. Evolutionary theory doesn’t say that vestigial characters have no function. A trait can be vestigial and functional at the same time. It is vestigial not because it’s functionless, but because it no longer performs the function for which it evolved. The wings of an ostrich are useful, but that doesn’t mean that they tell us nothing about evolution. Wouldn’t it be odd if a creator helped an ostrich balance itself by giving it appendages that just happen to look exactly like reduced wings, and which are constructed in exactly the same way as wings used for flying?
Our bodies teem with other remnants of primate ancestry. We have a vestigial tail: the coccyx, or the triangular end of our spine, that’s made of several fused vertebrae hanging below our pelvis. It’s what remains of the long, useful tail of our ancestors. It still has a function (some useful muscles attach to it), but remember that its vestigiality is diagnosed not by its usefulness but because it no longer has the function for which it originally evolved. Tellingly, some humans have a rudimentary tail muscle (the “extensor coccygis”), identical to the one that moves the tails of monkeys and other mammals. It still attaches to our coccyx, but since the bones can’t move, the muscle is useless. You may have one and not even know it.
Other vestigial muscles become apparent in winter, or at horror movies. These are the arrector pili, the tiny muscles that attach to the base of each body hair. When they contract, the hairs stand up, giving us “goose bumps”—so called because of their resemblance to the skin of a plucked goose. Goose bumps and the muscles that make them serve no useful function, at least in humans. In other mammals, however, they raise the fur for insulation when it’s cold, and cause the animal to look larger when it’s making or receiving threats. Think of a cat, whose fur bushes out when it’s cold or angry. Our vestigial goose bumps are produced by exactly the same stimuli—cold or a rush of adrenaline.
And here’s a final example: if you can wiggle your ears, you’re demonstrating evolution. We have three muscles under our scalp that attach to our ears. In most individuals they’re useless, but some people can use them to wiggle their ears. (I am one of the lucky ones, and every year I demonstrate this prowess to my evolution class, much to the students’ amusement.) These are the same muscles used by other animals, like cats and horses, to move their ears around, helping them localize sounds. In those species, moving the ears helps them detect predators, locate their young, and so on. But in humans the muscles are good only for entertainment.
Modern horses, which descend from smaller, five-toed ancestors, show similar atavisms. The fossil record documents the gradual loss of toes over time, so that in modern horses only the middle one—the hoof—remains. It turns out that horse embryos begin development with three toes, which grow at equal rates. Later, however, the middle toe begins to grow faster than the other two, which at birth are left as thin “splint bones” along either side of the leg. (Splint bones are true vestigial features. When they become inflamed, a horse gets “the splints.”) On rare occasions, though, the extra digits continue developing until they become true extra toes, complete with hoofs. Often these atavistic toes don’t touch the ground unless the horse is running. This is exactly what the ancient horse Merychippus looked like fifteen million years ago. Extra-toed horses were once considered supernatural wonders: both Julius Caesar and Alexander the Great were said to have ridden them. And they are wonders of a sort—wonders of evolution—for they clearly show genetic kinship between ancient and modern horses.
Some atavisms can be produced in the laboratory. The most amazing of these is that paragon of rarity, hen’s teeth. In 1980, E. J. Kollar and C. Fisher at the University of Connecticut combined the tissues of two species, grafting the tissue lining the mouth of a chicken embryo on top of tissue from the jaw of a developing mouse. Amazingly, the chicken tissue eventually produced tooth-like structures, some with distinct roots and crowns! Since the underlying mouse tissue alone could not produce teeth, Kollar and Fisher inferred that molecules from the mouse reawakened a dormant developmental program for making teeth in chickens. This meant that chickens had all the right genes for making teeth, but were missing a spark that the mouse tissue was able to provide. Twenty years later, scientists unraveled the molecular biology and showed that Kollar and Fisher’s suggestion was right: birds do indeed have genetic pathways for producing teeth, but don’t make them because a single crucial protein is missing. When that protein is supplied, tooth-like structures form on the bill. You’ll remember that birds evolved from toothed reptiles. They lost those teeth more than sixty million years ago, but clearly still carry some genes for making them—genes that are remnants of their reptilian ancestry.
And the evolutionary prediction that we’ll find pseudogenes has been fulfilled—amply. Virtually every species harbors dead genes, many of them still active in its relatives. This implies that those genes were also active in a common ancestor, and were killed off in some descendants but not in others. Out of about 30,000 genes, for example, we humans carry more than 2,000 pseudogenes. Our genome—and those of other species—are truly well-populated graveyards of dead genes.
The most famous human pseudogene is GLO, so called because in other species it produces an enzyme called L-gulono-γ-lactone oxidase. This enzyme is used in making vitamin C (ascorbic acid) from the simple sugar glucose. Vitamin C is essential for proper metabolism, and virtually all mammals have the pathway to make it—all, that is, except for primates, fruit bats, and guinea pigs. In these species, vitamin C is obtained directly from their food, and normal diets usually have enough. If we don’t ingest enough vitamin C, we get sick: scurvy was common among fruit-deprived seamen of the nineteenth century. The reason why primates and these few other mammals don’t make their own vitamin C is that they don’t need to. Yet DNA sequencing tells us that primates still carry most of the genetic information needed to make the vitamin.
Only evolution and common ancestry can explain these facts. All mammals inherited a functional copy of the GLO gene. About forty million years ago, in the common ancestor of all primates, a gene that was no longer needed was inactivated by a mutation. All primates inherited that same mutation. After GLO was silenced, other mutations continued to occur in the gene that was no longer expressed. These mutations accumulated over time—they are harmless if they occur in genes that are already dead—and were passed on to descendant species. Since closer relatives share a common ancestor more recently, genes that change in a time-dependent way follow the pattern of common ancestry, leading to DNA sequences more similar in close than in distant relatives. This occurs whether or not a gene is dead. The sequence of ΨGLO in guinea pigs is so different because it was inactivated independently, in a lineage that had already diverged from that of primates. And ΨGLO is not unique in showing such patterns: there are many other such pseudogenes.
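The time-dependent accumulation of mutations in a silenced gene can be illustrated with a toy simulation: evolve a sequence neutrally along a simple tree and count pairwise differences. Every parameter here (sequence length, per-site mutation rate, branch lengths in generations) is invented purely to show the pattern that close relatives end up with more similar dead-gene sequences than distant ones.

```python
import random

random.seed(42)  # reproducible toy run
BASES = "ACGT"

def mutate(seq, n_generations, rate=0.0005):
    """Accumulate neutral substitutions: each site may flip each generation."""
    seq = list(seq)
    for _ in range(n_generations):
        for i in range(len(seq)):
            if random.random() < rate:
                seq[i] = random.choice(BASES.replace(seq[i], ""))
    return "".join(seq)

def diffs(a, b):
    """Number of sites at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

ancestor = "".join(random.choice(BASES) for _ in range(300))

# Tree: A and B split from each other recently; C split off long ago.
old_branch = mutate(ancestor, 500)   # shared lineage leading to A and B
a = mutate(old_branch, 100)
b = mutate(old_branch, 100)
c = mutate(ancestor, 600)            # same total time, but a deeper split

print("A vs B differences:", diffs(a, b))  # few: recent common ancestor
print("A vs C differences:", diffs(a, c))  # many more: ancient split
```

Because the mutations are neutral (the gene is "dead"), differences pile up in proportion to time since the common ancestor — exactly the signature that makes pseudogenes like ΨGLO such strong evidence of shared descent.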
Another curious tale of dead genes involves our sense of smell, or rather our poor sense of smell, for humans are truly bad sniffers among land mammals. Nevertheless, we can still recognize over 10,000 different odors. How can we accomplish such a feat? Until recently, this was a complete mystery. The answer lies in our DNA—in our many olfactory receptor (OR) genes.
Our own sense of smell comes nowhere close to that of mice. One reason is that we express fewer OR genes—only about 400. But we still carry a total of 800 OR genes, which make up nearly 3 percent of our entire genome. And fully half of these are pseudogenes, permanently inactivated by mutations. The same is true for most other primates. How did this happen? Probably because we primates, who are active during the day, rely more on vision than on smell, and so don’t need to discriminate among so many odors. Unneeded genes eventually get bumped off by mutations. Predictably, primates with color vision, and hence greater discrimination of the environment, have more dead OR genes.
But the most striking example of the evolution—or de-evolution—of OR genes is the dolphin. Dolphins don’t need to detect volatile odors in air, since they do their business underwater, and they have a completely different set of genes for detecting waterborne chemicals. As one might predict, OR genes of dolphins are inactivated. In fact, 80 percent of them are inactivated. Hundreds of them still sit silently in the dolphin genome, mute testimony of evolution. And if you look at the DNA sequences of these dead dolphin genes, you’ll find that they resemble those of land mammals. This makes sense when we realize that dolphins evolved from land mammals whose OR genes became useless when they took to the water. This makes no sense if dolphins were specially created.
Now, we’re not absolutely sure why some species retain much of their evolutionary history during development. The “adding new stuff onto old” principle is just a hypothesis—an explanation for the facts of embryology. It’s hard to prove that it was easier for a developmental program to evolve one way rather than another. But the facts of embryology remain, and make sense only in light of evolution. All vertebrates begin development looking like embryonic fish because we all descended from a fish-like ancestor with a fish-like embryo. We see strange contortions and disappearances of organs, blood vessels, and gill slits because descendants still carry the genes and developmental programs of ancestors. And the sequence of developmental changes also makes sense: at one stage of development mammals have an embryonic circulatory system like that of reptiles; but we don’t see the converse situation. Why? Because mammals descended from early reptiles and not vice versa.
One of my favorite cases of embryological evidence for evolution is the furry human fetus. We are famously known as “naked apes” because, unlike other primates, we don’t have a thick coat of hair. But in fact for one brief period we do—as embryos. Around six months after conception, we become completely covered with a fine, downy coat of hair called lanugo. Lanugo is usually shed about a month before birth, when it’s replaced by the more sparsely distributed hair with which we’re born. (Premature infants, however, are sometimes born with lanugo, which soon falls off.) Now, there’s no need for a human embryo to have a transitory coat of hair. After all, it’s a cozy 37 degrees C in the womb. Lanugo can be explained only as a remnant of our primate ancestry: fetal monkeys also develop a coat of hair at about the same stage of development. Their hair, however, doesn’t fall out, but hangs on to become the adult coat. And, like humans, fetal whales also have lanugo, a remnant of when their ancestors lived on land.
A good example of bad design is the flounder, whose popularity as an eating fish (Dover sole, for instance) comes partly from its flatness, which makes it easy to bone. There are actually about 500 species of flatfish—halibut, turbot, flounders, and their kin—all placed in the order Pleuronectiformes. The word means “side-swimmers,” a description that’s the key to their poor design. Flatfish are born as normal-looking fish that swim vertically, with one eye placed on each side of a pancake-shaped body. But a month thereafter, a strange thing happens: one eye begins to move upwards. It migrates over the skull and joins the other eye to form a pair of eyes on one side of the body, either right or left, depending on the species. The skull also changes its shape to promote this movement, and there are changes in the fins and color. In concert, the flatfish tips onto its newly eyeless side, so that both eyes are now on top. It becomes a flat camouflaged bottom-dweller that preys on other fish. When it has to swim, it does so on its side. Flatfish are the world’s most asymmetrical vertebrates; check out a specimen the next time you go to the fish market.
If you wanted to design a flatfish, you wouldn’t do it this way. You’d produce a fish like the skate, which is flat from birth and lies on its belly— not one that has to achieve flatness by lying on its side, moving its eyes and deforming its skull. Flatfish are poorly designed. But the poor design comes from their evolutionary heritage. We know from their family tree that flounders, like all flatfish, evolved from “normal” symmetrical fish. Evidently, they found it advantageous to tip onto their sides and lie on the sea floor, hiding themselves from both predators and prey. This, of course, created a problem: the bottom eye would be both useless and easily injured. To fix this, natural selection took the tortuous but available route of moving its eye about, as well as otherwise deforming its body.
Let’s begin with one observation that strikes anyone who travels widely. If you go to two distant areas that have similar climate and terrain, you find different types of life. Take deserts. Many desert plants are succulents: they show an adaptive combination of traits that include large fleshy stems to store water, spines to deter predators, and small or missing leaves to reduce water loss. But different deserts have different types of succulents. In North and South America, the succulents are members of the cactus family. But in the deserts of Asia, Australia, and Africa, there are no native cacti, and the succulents belong to a completely different family, the euphorbs. You can tell the difference between the two types of succulents by their flowers and their sap, which is clear and watery in cacti but milky and bitter in euphorbs. Yet despite these fundamental differences, cacti and euphorbs can look very much alike. I have both types growing on my windowsill, and visitors can’t tell them apart without reading their tags.
The most famous example of different species filling similar roles involves the marsupial mammals, now found mainly in Australia (the Virginia opossum is a familiar exception), and placental mammals, which predominate elsewhere in the world. The two groups show important anatomical differences, most notably in their reproductive systems (almost all marsupials have pouches and give birth to very undeveloped young, while placentals have placentas that enable young to be born at a more advanced stage). Nevertheless, in other ways some marsupials and placentals are astonishingly similar. There are burrowing marsupial moles that look and act just like placental moles, marsupial mice that resemble placental mice, the marsupial sugar glider, which glides from tree to tree just like a flying squirrel, and marsupial anteaters, which do exactly what South American anteaters do.
One of the marvels of evolution is the Asian giant hornet, a predatory wasp especially common in Japan. It’s hard to imagine a more frightening insect. The world’s largest hornet, it is as long as your thumb, with a two-inch body bedecked with menacing orange and black stripes. It’s armed with fearsome jaws to clasp and kill its insect prey, and also a quarter-inch stinger that proves lethal to several dozen Asians a year. And with a 3-inch wingspan, it can fly 25 miles per hour (far faster than you can run), and can cover 60 miles in a single day.
This hornet is not only ferocious, but voracious. Its young larval grubs are fat, insatiable eating machines, who insistently rap their heads against the hive to signal their hunger for meat. To satisfy their relentless demands for food, adult hornets raid the nests of social bees and wasps.
One of the hornet’s prime victims is the introduced European honeybee. The raid on a honeybee nest involves a merciless mass slaughter that has few parallels in nature. It starts when a lone hornet scout finds a nest. With its abdomen, the scout marks the nest for doom, placing a drop of pheromone near the entrance of the bee colony. Alerted by this mark, the scout’s nestmates descend on the spot, a group of twenty or thirty hornets arrayed against a colony of up to thirty thousand honeybees.
But it’s no contest. Wading into the hive with jaws slashing, the hornets decapitate the bees one by one. With each hornet making bee heads roll at a rate of forty per minute, the battle is over in a few hours: every bee is dead, and body parts litter the hive. Then the hornets stock their larder. Over the next week, they systematically ravage the nest, eating honey and carrying the helpless bee grubs back to their own nests, where they are promptly deposited into the gaping mouths of the hornets’ own ravenous offspring.
This is nature red in tooth and claw, as the poet Tennyson described. The hornets are fearsome hunting machines, and the introduced bees are defenseless. But there are bees that can fight off the giant hornet: honeybees that are native to Japan. And their defense is stunning—another marvel of adaptive behavior. When the hornet scout first arrives at their hive, the honeybees near the entrance rush into the hive, calling nestmates to arms while luring the hornet inside. In the meantime, hundreds of worker bees assemble inside the entrance. Once the hornet is inside, it is mobbed and covered by a tight ball of bees. Vibrating their abdomens, the bees quickly raise the temperature inside the ball to about 47 degrees C. Bees can survive this temperature, but the hornet cannot. In twenty minutes the hornet scout is cooked to death, and—usually—the nest is saved. I can’t think of another case (save the Spanish Inquisition) in which animals kill their enemies by roasting them.
There are several evolutionary lessons in this twisted tale. The most obvious is that the hornet is marvelously adapted to kill—it looks as though it were designed for mass slaughter. Moreover, many traits work together to make the wasp a killing machine. They include body form (large size, stings, deadly jaws, big wings), chemicals (marking pheromones and deadly venom in the sting), and behavior (rapid flight, coordinated attacks on bee nests, and the larval “I am hungry” behavior that prompts the hornet attacks). And then there is the defense of the native honeybees—the coordinated swarming and subsequent roasting of their enemy—certainly an evolved response to repeated attacks by hornets. (Remember, this behavior is genetically encoded in a brain smaller than a pencil point.)
On the other hand, the recently introduced European honeybees are virtually defenseless against the hornet. This is exactly what we would expect, for those bees evolved in an area lacking giant predatory hornets, and therefore natural selection did not build a defense. We can predict, though, that if the hornets are sufficiently strong predators, the European bees will either die out (unless they are reintroduced), or will find their own evolutionary response to the hornets—and not necessarily the same one as the native bees.
Take the domestic dog (Canis lupus familiaris), a single species that comes in all shapes, sizes, colors, and temperaments. Every single one, purebred or mutt, descends from a single ancestral species—most likely the Eurasian gray wolf—that humans began to select about 10,000 years ago. The American Kennel Club recognizes 150 different breeds, and you’ve seen many of them: the tiny, nervous Chihuahua, perhaps bred as a food animal by the Toltec of Mexico; the robust Saint Bernard, thick of fur and able to fetch kegs of brandy to snow-stranded travelers; the greyhound, bred for racing with long legs and a streamlined shape; the elongated, short-legged dachshund, ideal for catching badgers in their holes; retrievers, bred to fetch game from the water; and the fluffy Pomeranian, bred as a comforting lap dog. Breeders have virtually sculpted these dogs to their liking, changing the shade and thickness of their coat, the length and pointiness of their ears, the size and shape of their skeleton, the quirks of their behavior and temperament, and nearly everything else.
The dog can stand for the success of other breeding programs. As Darwin noted in The Origin, “Breeders habitually speak of an animal’s organization as something quite plastic, which they can model almost as they please.” Cows, sheep, pigs, flowers, vegetables, and so on—all came from humans choosing variants present in wild ancestors, or variants that arose by mutation during domestication. Through selection, the svelte wild turkey has become our docile, meaty, and virtually tasteless Thanksgiving monster, with breasts so large that male domestic turkeys can no longer mount females, who must instead be artificially inseminated. Darwin himself bred pigeons, and described the huge variety of behaviors and appearance of different breeds, all selected from the ancestral rock dove. You wouldn’t recognize the ancestor of our ear of corn, which was an inconspicuous grass. The ancestral tomato weighed only a few grams, but has now been bred into a 2-pound behemoth (also tasteless) with a long shelf life. The wild cabbage has given rise to five different vegetables: broccoli, domestic cabbage, kohlrabi, Brussels sprouts, and cauliflower, each selected to modify a different part of the plant (broccoli, for example, is simply a tight, enlarged cluster of flowers). And the domestication of all wild crop plants occurred within the last 12,000 years.
But “laboratory” adaptations can also be more complex, involving the evolution of whole new biochemical systems. Perhaps the ultimate challenge is simply to take away a gene that a microbe needs to survive in a particular environment, and see how it responds. Can it evolve a way around this problem? The answer is usually yes. In a dramatic experiment, Barry Hall and his colleagues at the University of Rochester began a study by deleting a gene from E. coli. This gene produces an enzyme that allows the bacteria to break down the sugar lactose into subunits that can be used as food. The geneless bacteria were then put in an environment containing lactose as the only food source. Initially, of course, they lacked the enzyme and couldn’t grow. But after only a short time, the function of the missing gene was taken over by another enzyme that, while previously unable to break down lactose, could now do so weakly because of a new mutation. Eventually, yet another adaptive mutation occurred: one that increased the amount of the new enzyme so that even more lactose could be used. Finally, a third mutation at a different gene allowed the bacteria to take up lactose from the environment more easily. Altogether, this experiment showed the evolution of a complex biochemical pathway that enabled bacteria to grow on a previously unusable food. Beyond demonstrating evolution, this experiment has two important lessons. First, natural selection can promote the evolution of complex, interconnected biochemical systems in which all the parts are codependent, despite the claims of creationists that this is impossible. Second, as we’ve seen repeatedly, selection does not create new traits out of thin air: it produces “new” adaptations by modifying preexisting features.
We can even see the origin of new, ecologically diverse bacterial species, all within a single laboratory flask. Paul Rainey and his colleagues at Oxford University placed a strain of the bacteria Pseudomonas fluorescens in a small vessel containing nutrient broth, and simply watched it. (It’s surprising but true that such a vessel actually contains diverse environments. Oxygen concentration, for example, is highest on the top and lowest on the bottom.) Within ten days—no more than a few hundred generations—the ancestral free-floating “smooth” bacterium had evolved into two additional forms occupying different parts of the beaker. One, called “wrinkly spreader,” formed a mat on top of the broth. The other, called “fuzzy spreader,” formed a carpet on the bottom. The smooth ancestral type persisted in the liquid environment in the middle. Each of the two new forms was genetically different from the ancestor, having evolved through mutation and natural selection to reproduce best in their respective environments. Here, then, is not only evolution but speciation occurring in the lab: the ancestral form produced, and coexisted with, two ecologically different descendants, and in bacteria such forms are considered distinct species. Over a very short time, natural selection on Pseudomonas yielded a small-scale “adaptive radiation,” the equivalent of how animals or plants form species when they encounter new environments on an oceanic island.
Another prime example of selection is resistance to penicillin. When it was introduced in the early 1940s, penicillin was a miracle drug, especially effective at curing infections caused by the bacterium Staphylococcus aureus (“staph”). In 1941, the drug could wipe out every strain of staph in the world. Now, seventy years later, more than 95 percent of staph strains are resistant to penicillin. What happened was that mutations occurred in individual bacteria that gave them the ability to destroy the drug, and of course these mutations spread worldwide. In response, the drug industry came up with a new antibiotic, methicillin, but even that is now becoming useless due to newer mutations. In both cases, scientists have identified the precise changes in the bacterial DNA that conferred drug resistance.
Still other species have adapted via selection to human-caused changes in their environment. Insects have become resistant to DDT and other pesticides, plants have adapted to herbicides, and fungi, worms, and algae have evolved resistance to heavy metals that have polluted their environment. There almost always seem to be a few individuals with lucky mutations that allow them to survive and reproduce, quickly evolving a sensitive population into a resistant one. We can then make a reasonable inference: when a population encounters a stress that doesn’t come from humans, such as a change in salinity, temperature, or rainfall, natural selection will often produce an adaptive response.
Here’s another prediction: under prolonged drought, natural selection will lead to the evolution of plants that flower earlier than their ancestors. This is because, during a drought, soils dry out quickly after the rains. If you’re a plant that doesn’t flower and produce seeds quickly in a drought, you leave no descendants. Under normal weather conditions, on the other hand, it pays to delay flowering so that you can grow larger and produce even more seeds.
This prediction was tested in a natural experiment involving the wild mustard plant (Brassica rapa), introduced to California about 300 years ago. Beginning in 2000, Southern California suffered a severe five-year drought. Arthur Weis and his colleagues at the University of California measured the flowering time of mustards at the beginning and end of this period. Sure enough, natural selection had changed flowering time in precisely the predicted way: after the drought, plants began to flower a week earlier than their ancestors did.
One approach is to compare the rates of evolution in the fossil record with those seen in laboratory experiments that used artificial selection, or with historical data on evolutionary change that occurred when species colonized new habitats in historical times. If evolution in the fossil record were much faster than in laboratory experiments or colonization events—both of which involve very strong selection—we might need to rethink whether selection could explain changes in fossils. But in fact the results are just the opposite. Philip Gingerich at the University of Michigan showed that rates of change in animal size and shape during laboratory and colonization studies are actually much faster than rates of fossil change: from 500 times faster (selection during colonizations) to nearly a million times faster (laboratory selection experiments). And even the fastest rates of evolution in the fossil record are nowhere near as fast as the slowest rates seen when humans practice selection in the laboratory. Further, the average rates of evolution seen in colonization studies are large enough to turn a mouse into an animal the size of an elephant in just 10,000 years!
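The mouse-to-elephant claim can be made concrete with a back-of-the-envelope calculation. The body masses below are round numbers assumed here for illustration, not figures from the text; the point is how tiny the required yearly change turns out to be:

```python
# Illustrative sketch: how small a constant yearly change in body mass
# would turn a mouse-sized animal into an elephant-sized one in 10,000
# years. Masses are assumed round numbers, not data from the text.
mouse_mass_g = 30.0             # assumed typical mouse mass
elephant_mass_g = 6_000_000.0   # assumed typical elephant mass (6 tonnes)
years = 10_000

# Constant multiplicative growth factor per year needed:
factor = (elephant_mass_g / mouse_mass_g) ** (1 / years)
percent_per_year = (factor - 1) * 100

print(round(percent_per_year, 3))  # about 0.12 percent per year
```

An increase of roughly a tenth of one percent per year would be invisible to any observer, yet compounds into a 200,000-fold change: the same "weak forces over long periods" lesson the Grand Canyon teaches below.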
A possible sequence of such changes begins with simple eyespots made of light-sensitive pigment, as seen in flatworms. The skin then folds in, forming a cup that protects the eyespot and allows it to better localize the light source. Limpets have eyes like this. In the chambered nautilus, we see a further narrowing of the cup’s opening to produce an improved image, and in ragworms the cup is capped by a transparent cover that protects the opening. In abalones, part of the fluid in the eye has coagulated to form a lens, which helps focus light, and in many species, such as mammals, nearby muscles have been co-opted to move the lens and vary its focus. The evolution of a retina, an optic nerve, and so on, follows by natural selection. Each step of this hypothetical transitional “series” confers increased adaptation on its possessor, because it enables the eye to gather more light or form better images, both of which aid survival and reproduction. And each step of this process is feasible because it is seen in the eye of a different living species. At the end of the sequence we have the camera eye, whose adaptive evolution seems impossibly complex. But the complexity of the final eye can be broken down into a series of small, adaptive steps.
True, breeders haven’t turned a cat into a dog, and laboratory studies haven’t turned a bacterium into an amoeba (although, as we’ve seen, new bacterial species have arisen in the lab). But it is foolish to think that these are serious objections to natural selection. Big transformations take time—huge spans of it. To really see the power of selection, we must extrapolate the small changes that selection creates in our lifetime over the millions of years that it has really had to work in nature. We can’t see the Grand Canyon getting deeper, either, but gazing into that great abyss, with the Colorado River carving away insensibly below, you learn the most important lesson of Darwinism: weak forces operating over long periods of time create large and dramatic change.
Sexual selection doesn’t end with the sex act itself: males can continue to compete even after mating. In many species, females mate with more than one male over a short period of time. After a male inseminates a female, how can he prevent other males from fertilizing her and stealing his paternity? This postmating competition has produced some of the most intriguing features built by sexual selection. Sometimes a male hangs around after mating, guarding his female against other suitors. When you see a pair of dragonflies attached to each other, it’s likely that the male is simply guarding the female after having fertilized her, physically blocking access by other males. A Central American millipede has taken mate guarding to the extreme: after fertilizing a female, the male simply rides her for several days, preventing any competitor from claiming her eggs. Chemicals can also do this job. The ejaculate of some snakes and rodents contains substances that temporarily plug up a female’s reproductive tract after mating, barricading out other probing males. In the group of fruit flies on which I work, the male injects the female with an anti-aphrodisiac, a chemical in his semen that makes her unwilling to remate for several days.
A vivid demonstration of this difference can be seen by looking up the record number of children sired by a human female versus a male. If you were to guess the maximum number of children that a woman could produce in a lifetime, you’d probably say around fifteen. Guess again. The Guinness Book of World Records gives the “official” record number of children for a woman as sixty-nine, produced by an eighteenth-century Russian peasant. In twenty-seven pregnancies between 1725 and 1745, she had sixteen pairs of twins, seven sets of triplets, and four sets of quadruplets. (She presumably had some physiological or genetic predisposition to multiple births.) One weeps for this belabored woman, but her record is far surpassed by that of a male, one Mulai Ismail (1646–1727), an emperor of Morocco. Ismail was reported by Guinness as having fathered “at least 342 daughters and 525 sons, and by 1721 he was reputed to have 700 male descendants.” Even at these extremes, then, males outstrip females more than tenfold.
The evolutionary difference between males and females is a matter of differential investment—investment in expensive eggs versus cheap sperm, investment in pregnancy (when females retain and nourish the fertilized eggs), and investment in parental care in the many species in which females alone raise the young. For males, mating is cheap; for females it’s expensive. For males, a mating costs only a small dose of sperm; for females it costs much more: the production of large, nutrient-rich eggs and often a huge expenditure of energy and time. In more than 90 percent of mammal species, a male’s only investment in offspring is his sperm, for females provide all the parental care.
This asymmetry between males and females in potential numbers of mates and offspring leads to conflicting interests when it comes time to choose a mate. Males have little to lose by mating with a “substandard” female (say, one who is weak or sickly), because they can easily mate again, and repeatedly. Selection then favors genes that make a male promiscuous, relentlessly trying to mate with nearly any female. (Or anything bearing the slightest resemblance to a female—male sage grouse, for instance, sometimes try to mate with piles of cow manure, and, as we learned earlier, some orchids get pollinated by luring randy male bees to copulate with their petals.)
Females are different. Because of their higher investment in eggs and offspring, their best tactic is to be picky rather than promiscuous. Females must make each opportunity count by choosing the best possible father to fertilize their limited number of eggs. They should therefore inspect potential mates very closely.
What this adds up to is that, in general, males must compete for females. Males should be promiscuous, females coy. The life of a male should be one of internecine conflict, constantly vying with his fellows for mates. The good males, either more attractive or more vigorous, will often secure a large number of mates (they will presumably be preferred by more females, too), while substandard males go unmated. Almost all females, on the other hand, will eventually find mates. Since every male is competing for them, their distribution of mating success will be more even.
Mayr lived exactly 100 years, producing a stream of books and papers up to the day of his death. Among these was his 1963 classic, Animal Species and Evolution, the very book that made me want to study evolution. In it Mayr recounted a striking fact. When he totaled up the names that the natives of New Guinea’s Arfak Mountains applied to local birds, he found that they recognized 136 different types. Western zoologists, using traditional methods of taxonomy, recognized 137 species. In other words, both locals and scientists had distinguished the very same species of birds living in the wild. This concordance between two cultural groups with very different backgrounds convinced Mayr, as it should convince us, that the discontinuities of nature are not arbitrary, but an objective fact.
And when we think of why we feel that brown-eyed and blue-eyed humans, or Inuit and !Kung, are members of the same species, we realize that it’s because they can mate with each other and produce offspring that contain combinations of their genes. In other words, they belong to the same gene pool. When you ponder cryptic species, and variation within humans, you arrive at the notion that species are distinct not merely because they look different, but because there are barriers between them that prevent interbreeding.
Ernst Mayr and the Russian geneticist Theodosius Dobzhansky were the first to realize this, and in 1942 Mayr proposed a definition of species that has become the gold standard for evolutionary biology. Using the reproductive criterion for species status, Mayr defined a species as a group of interbreeding natural populations that are reproductively isolated from other such groups. This definition is known as the biological species concept, or BSC. “Reproductively isolated” simply means that members of different species have traits—differences in appearance, behavior, or physiology—that prevent them from successfully interbreeding, while members of the same species can interbreed readily.
What keeps members of two related species from mating with each other? There are many different reproductive barriers. Species might not interbreed simply because their mating or flowering seasons don’t overlap. Some corals, for example, reproduce only one night a year, spewing out masses of eggs and sperm into the sea over a several-hour period. Closely related species living in the same area remain distinct because their peak spawning periods are several hours apart, preventing eggs of one species from meeting sperm from another. Animal species often have different mating displays or pheromones, and don’t find each other sexually attractive. Females in my Drosophila species have chemicals on their abdomens that males of other species find unappealing. Species can also be isolated by preferring different habitats, so they simply don’t encounter each other. Many insects can feed and reproduce on only a single species of plant, and different species of insects are restricted to different species of plants. This keeps them from meeting each other at mating time. Closely related species of plants can be kept apart because they use different pollinators. Two species of the monkeyflower Mimulus, for example, live in the same area of the Sierra Nevada, but rarely interbreed because one species is pollinated by bumblebees and the other by hummingbirds.
Isolating barriers can also act after mating. Pollen from one plant species might fail to germinate on the pistil of another. If fetuses are formed, they might die before birth; this is what happens when you cross a sheep with a goat. Or even if hybrids survive, they may be sterile: the classic example is the vigorous but sterile mule, the offspring of a female horse and a male donkey. Species that produce sterile hybrids certainly can’t exchange genes.
The way we discovered how species arise resembles the way astronomers discovered how stars “evolve” over time. Both processes occur too slowly for us to see them happening over our lifetime. But we can still understand how they work by finding snapshots of the process at different evolutionary stages and putting these snapshots together into a conceptual movie. For stars, astronomers saw dispersed clouds of matter (“star nurseries”) in galaxies. Elsewhere they saw those clouds condensing into protostars. And in other places they saw protostars becoming full stars, condensing further and then generating light as their core temperature became high enough to fuse hydrogen atoms into helium. Other stars were large “red giants” like Betelgeuse; some showed signs of throwing off their outer layers into space; and others still were small, dense white dwarfs. By assembling all these stages into a logical sequence, based on what we know of their physical and chemical structure and behavior, we’ve been able to piece together how stars form, persist, and die. From this picture of stellar evolution, we can make predictions. We know, for example, that stars about the size of our Sun shine steadily for about ten billion years before bulging out to form red giants. Since the Sun is about 4.6 billion years old, we know that we’re roughly halfway through our tenure as a planet before we’ll finally be swallowed up by the Sun’s expansion.
And so it is with speciation. We see geographically isolated populations running the gamut from those showing no reproductive isolation, through those having increasing degrees of reproductive isolation (as the populations become isolated for longer periods), and, finally, complete speciation. We see young species, descended from a common ancestor, on either side of geographic barriers like rivers or the Isthmus of Panama, and on different islands of an archipelago. Putting all this together, we conclude that isolated populations diverge, and that when that divergence has gone on for a sufficiently long time, reproductive barriers develop as a by-product of evolution.
We’ve always perceived ourselves as somehow standing apart from the rest of nature. Encouraged by the religious belief that humans were the special object of creation, as well as by a natural solipsism that accompanies a self-conscious brain, we resist the evolutionary lesson that, like other animals, we are contingent products of the blind and mindless process of natural selection.
The idea that humans are part of nature has been anathema over most of the history of biology. In 1735, the Swedish botanist Carl Linnaeus, who established biological classification, lumped humans, whom he named Homo sapiens (“man the wise”), with monkeys and apes based on anatomical similarity. Linnaeus didn’t suggest an evolutionary relationship between these species—his intention was explicitly to reveal the order behind God’s creation—but his decision was still controversial, and he incurred the wrath of his archbishop.
When Lucy’s hundreds of fragments were assembled, she turned out to be a female of a new species, Australopithecus afarensis, dating back 3.2 million years. She was between 20 and 30 years old, 3.5 feet tall, weighing a scant 60 pounds, and possibly afflicted with arthritis. But most important, she walked on two legs.
How can we tell? From the way that the femur (thighbone) connects to the pelvis at one end and to the knee at its other. In a bipedally walking primate like ourselves, the femurs angle in toward each other from the hips so that the center of gravity stays in one place while walking, allowing an efficient fore-and-aft bipedal stride. In knuckle-walking apes, the femurs are slightly splayed out, so when walking upright they have a bowlegged waddle, like Charlie Chaplin’s little tramp. If you take a primate fossil, then, and look at how the femur fits together with the pelvis, you can tell whether the creature walked on two legs or four. If the femurs angle toward the middle, it’s bipedal. And Lucy’s femurs angle in—at almost the same angle as those of modern humans. She walked upright. Her pelvis, too, resembles that of modern humans far more than that of modern chimps.
A team of paleoanthropologists led by Mary Leakey confirmed the bipedality of A. afarensis with another remarkable find in Tanzania: the famous “Laetoli footprints.” In 1976, Andrew Hill and another member of the team were taking a break by indulging in a favorite field pastime: pelting each other with chunks of dried elephant dung. Looking for ammunition in a dry stream bed, Hill stumbled upon a line of fossilized footprints. After careful excavation, the footprints turned out to be a trail made by two hominins who had clearly been walking on two legs (there were no impressions of knuckles) during an ash storm from an erupting volcano. That storm was followed by a rain, which turned the ash into a cement-like layer that was later sealed in by another layer of dry ash, preserving the footprints.
The Laetoli footprints are virtually identical to those made by modern humans walking on soft ground. And the feet were almost certainly from Lucy’s kin: the tracks are the right size, and the trail dates from around 3.6 million years ago, a time when A. afarensis was the only hominin of record. What we have here is that rarest of finds—fossilized human behavior. One of the tracks is larger than the other, so they were probably made by a male and female (other afarensis fossils have shown sexual dimorphism in size). The female’s footprints seem a bit deeper on one side than on the other, so she may have been carrying an infant on her hip. The trail evokes visions of a small, hairy couple making their way across the plain during a volcanic eruption. Were they frightened, and holding hands?
But recent work shows that our genetic resemblance to our evolutionary cousins is not quite as close as we thought. Consider this. A 1.5 percent difference in protein sequence means that when we line up the same protein (say, hemoglobin) of humans and chimps, on average we’ll see a difference at just one out of every 100 amino acids. But proteins are typically composed of several hundred amino acids. So a 1.5 percent difference in a protein 300 amino acids long translates into about four differences in the total protein sequence. (To use an analogy, if you change only 1 percent of the letters on this page, you will alter far more than 1 percent of the sentences.) That oft-quoted 1.5 percent difference between ourselves and chimps, then, is really larger than it looks: a lot more than 1.5 percent of our proteins will differ by at least one amino acid from the sequence in chimps. And since proteins are essential for building and maintaining our bodies, a single difference can have substantial effects.
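The arithmetic in this passage can be sketched in a few lines. The independence assumption below is an illustration added here, not the author's model; real proteins have conserved sites, so the true probability is somewhat lower:

```python
# Sketch of the amino-acid arithmetic above. Illustrative assumption:
# each position in a protein differs between human and chimp
# independently, with probability 1.5 percent.
per_site_diff = 0.015    # the 1.5 percent per-site difference
protein_length = 300     # a typical protein length from the text

# Expected number of differing amino acids in one such protein:
expected_diffs = per_site_diff * protein_length

# Chance the protein differs at one or more positions:
p_at_least_one = 1 - (1 - per_site_diff) ** protein_length

print(expected_diffs)            # 4.5, i.e. "about four differences"
print(round(p_at_least_one, 2))  # 0.99
```

Under this simplified assumption, nearly every 300-amino-acid protein would differ somewhere, which is consistent with the observed figure of more than 80 percent of shared proteins differing in at least one amino acid.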
Now that we’ve finally sequenced the genomes of both chimp and human, we can see directly that more than 80 percent of all the proteins shared by the two species differ in at least one amino acid. Since our genomes have about 25,000 protein-making genes, that translates to a difference in the sequence of more than 20,000 of them. That’s not a trivial divergence. Obviously, more than a few genes distinguish us. And molecular evolutionists have recently found that humans and chimps differ not only in the sequence of genes, but also in the presence of genes. More than 6 percent of genes found in humans simply aren’t found in any form in chimpanzees. There are over 1,400 novel genes expressed in humans but not in chimps. We also differ from chimps in the number of copies of many genes that we do share. The salivary enzyme amylase, for example, acts in the mouth to break down starch into digestible sugar. Chimps have but a single copy of the gene, while individual humans have between two and sixteen, with an average of six copies. This difference probably resulted from natural selection to help us digest our food, as the ancestral human diet was probably much richer in starch than that of fruit-eating apes.
Putting this together, we see that the genetic divergence between ourselves and chimpanzees comes in several forms—changes not only in the proteins produced by genes, but also in the presence or absence of genes, the number of gene copies, and when and where genes are expressed during development. We can no longer claim that “humanness” rests on only one type of mutation, or changes in only a few key genes. But this is not really surprising if you think about the many traits that distinguish us from our closest relatives. There are differences not only in anatomy, but also in physiology (we are the sweatiest of apes, and the only ape whose females have concealed ovulation), behavior (humans pair-bond and other apes do not), language, and brain size and configuration (surely there must also be many differences in how the neurons in our brains are hooked up). Despite our general resemblance to our primate cousins, then, evolving a human from an ape-like ancestor probably required substantial genetic change.
In response to these distasteful episodes of racism, some scientists have overreacted, arguing that human races have no biological reality and are merely sociopolitical “constructs” that don’t merit scientific study. But to biologists, race—so long as it doesn’t apply to humans!—has always been a perfectly respectable term. Races (also called “subspecies” or “ecotypes”) are simply populations of a species that are both geographically separated and differ genetically in one or more traits. There are plenty of animal and plant races, including those mouse populations that differ only in coat color, sparrow populations that differ in size and song, and plant races that differ in the shape of their leaves. Following this definition, Homo sapiens clearly does have races. And the fact that we do is just another indication that humans don’t differ from other evolved species.
As we would expect from evolution, human physical variation occurs in nested groups, and in spite of valiant efforts by some to create formal divisions of races, exactly where one draws the line to demarcate a particular race is completely arbitrary. There are no sharp boundaries: the number of races recognized by anthropologists has ranged from three to more than thirty. Looking at genes shows even more clearly the lack of sharp differences between races: virtually all the genetic variation uncovered by modern molecular techniques correlates only weakly with the classical combinations of physical traits such as skin color and hair type commonly used to determine race.
Direct genetic evidence, accumulated over the last three decades, shows that only about 10 to 15 percent of all genetic variation in humans is represented by differences between “races” that are recognized by difference in physical appearance. The remainder of the genetic variation, 85 to 90 percent, occurs among individuals within races.
Most of the genetic differences between races are trivial. And yet others, such as the physical differences among a Japanese individual, a Finn, a Masai, and an Inuit, are striking. We have the interesting situation, then, that the overall differences in gene sequences between peoples are minor, yet those same groups show dramatic differences in a range of visually apparent traits, such as skin color, hair color, body form, and nose shape. These obvious physical differences are not characteristic of the genome as a whole. So why has the small amount of divergence that has occurred between human populations become focused on such visually striking traits?
Some of these differences make sense as adaptations to the different environments in which early humans found themselves. The darker skin of tropical groups probably provides protection from intense ultraviolet light that produces lethal melanomas, while the pale skin of higher-latitude groups allows penetration of light necessary for the synthesis of essential vitamin D, which helps prevent rickets and tuberculosis. But what about the eye folds of Asians, or the longer noses of Caucasians? These don’t have any obvious connection to the environment. For some biologists, the existence of greater variation between races in genes that affect physical appearance, something easily assessed by potential mates, points to one thing: sexual selection.
Apart from the characteristic pattern of genetic variation, there are other grounds for considering sexual selection as a strong driving force for the evolution of races. We are unique among species for having developed complex cultures. Language has given us a remarkable ability to disseminate ideas and opinions. A group of humans can change their culture much faster than they can evolve genetically. But the cultural change can also produce genetic change. Imagine that a spreading idea or fad involves the preferred appearance of one’s mate. An empress in Asia, for example, might have a penchant for men with straight black hair and almond-shaped eyes. By creating a fashion, her preference spreads culturally to all her female subjects, and, lo and behold, over time the curly-haired and round-eyed individuals will be largely replaced by individuals with straight black hair and almond-shaped eyes. It is this “gene-culture coevolution”—the idea that a change in cultural environment leads to new types of selection on genes—that makes the idea of sexual selection for physical differences especially appealing.
One case involves our ability to digest lactose, a sugar found in milk. An enzyme called lactase breaks down this sugar into the more easily absorbed sugars glucose and galactose. We are born with the ability to digest milk, of course, for that’s always been the main food of infants. But after we’re weaned, we gradually stop producing lactase. Eventually, many of us entirely lose our ability to digest lactose, becoming “lactose intolerant” and prone to diarrhea, bloating, and cramps after eating dairy products. The disappearance of lactase after weaning is probably the result of natural selection: our ancient ancestors had no source of milk after weaning, so why produce a costly enzyme when it’s not needed?
But in some human populations, individuals continue to produce lactase throughout adulthood, giving them a rich source of nutrition unavailable to others. It turns out that lactase persistence is found mainly in populations that were, or still are, “pastoralists”—that is, populations who raise cows. These include some European and Middle Eastern populations, as well as Africans such as Masai and Tutsi. Genetic analyses show that the persistence of lactase in these populations depends on a simple change in the DNA that regulates the enzyme, keeping it turned on beyond infancy. There are two alleles of the gene—the “tolerant” (on) and “intolerant” (off) form—and they differ in only a single letter of their DNA code. The frequency of the tolerant allele correlates well with whether populations use cows: it’s high (50 to 90 percent) in pastoralist populations of Europe, the Middle East, and Africa, and very low (1 to 20 percent) in Asian and African populations that depend on agriculture rather than milk.
Archaeological evidence shows that humans began domesticating cows between 7,000 and 9,000 years ago in Sudan, and the practice spread into sub-Saharan Africa and Europe a few thousand years later. The nice part of this story is that we can, from DNA sequencing, determine when the “tolerant” allele arose by mutation. That time, between 3,000 and 8,000 years ago, fits remarkably well with the rise of pastoralism. What’s even nicer is that DNA extracted from 7,000-year-old European skeletons showed that they were lactose-intolerant, as we expect if they weren’t yet pastoral.
The evolution of lactose tolerance is another splendid example of gene-culture coevolution. A purely cultural change (the raising of cows, perhaps for meat) produced a new evolutionary opportunity: the ability to use those cows for milk. Given the sudden availability of a rich new source of food, ancestors possessing the tolerance gene must have had a substantial reproductive advantage over those carrying the intolerant gene. In fact, we can calculate this advantage by observing how fast the tolerance gene increased to the frequencies seen in modern populations. It turns out that tolerant individuals must have produced, on average, 4 to 10 percent more offspring than those who were intolerant. That is pretty strong selection.
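A calculation of this kind can be sketched with textbook population genetics. The numbers below are assumptions for illustration, not the published analysis: a new mutation starting at a frequency of 1 in 10,000, a target frequency of 0.7 (typical of pastoralist populations), and 25 years per generation. The recursion is the standard one-locus model of genic selection.

```python
# Sketch: how fast does an allele with a 4-10 percent reproductive
# advantage spread? Assumed numbers (for illustration only): starting
# frequency 1/10,000, target frequency 0.7, 25 years per generation.

def generations_to_reach(p0, p_target, s):
    """Generations for an allele with per-generation advantage s to rise
    from frequency p0 to p_target under simple genic selection."""
    p, gens = p0, 0
    while p < p_target:
        p = p * (1 + s) / (1 + s * p)  # standard one-locus recursion
        gens += 1
    return gens

for s in (0.04, 0.10):  # the 4 to 10 percent range quoted in the text
    g = generations_to_reach(1e-4, 0.7, s)
    print(f"s = {s:.2f}: {g} generations, roughly {g * 25} years")
```

With these assumptions, the tolerant allele reaches high frequency within a few thousand years: fast enough to match the archaeological dates, which is exactly the consistency check described in the text.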
Anybody who teaches human evolution is inevitably asked: Are we still evolving? The examples of lactose tolerance and duplication of the amylase gene show that selection has certainly acted within the last few thousand years. But what about right now? It’s hard to give a good answer. Certainly many types of selection that challenged our ancestors no longer apply: improvements in nutrition, sanitation, and medical care have done away with many diseases and conditions that killed our ancestors, removing potent sources of natural selection. As the British geneticist Steve Jones notes, 500 years ago a British infant had only a 50 percent chance of surviving to reproductive age, a figure that has now risen to 99 percent. And for those who do survive, medical intervention has allowed many to lead normal lives who would have been ruthlessly culled by selection over most of our evolutionary history. How many people with bad eyes, or bad teeth, unable to hunt or chew, would have perished on the African savanna? (I would certainly have been among the unfit.) How many of us have had infections that, without antibiotics, would have killed us? It’s likely that, due to cultural change, we are going downhill genetically in many ways. That is, genes that once were detrimental are no longer so bad (we can compensate for “bad” genes with a simple pair of eyeglasses or a good dentist), and these genes can persist in populations.
Conversely, genes that were once useful may, due to cultural change, now have destructive effects. Our love of sweets and fats, for example, may well have been adaptive in our ancestors, for whom such treats were a valuable but rare source of energy. But these once rare foods are now readily available, and so our genetic heritage brings us tooth decay, obesity, and heart problems. Too, our tendency to lay on fat from rich food may also have been adaptive during times when variation in local food abundance produced a feast-or-famine situation, giving a selective advantage to those who were able to store up calories for lean times.
Does this mean that we’re really de-evolving? To some degree, yes, but we’re probably also becoming more adapted to modern environments that create new types of selection. We should remember that so long as people die before they’ve stopped reproducing, and so long as some people leave more offspring than others, there is an opportunity for natural selection to improve us. And if there’s genetic variation that affects our ability to survive and leave children, it will promote evolutionary change. That is certainly happening now. Although pre-reproductive mortality is low in some Western populations, it’s high in many other places, especially Africa, where child mortality can exceed 25 percent. And that mortality is often caused by infectious diseases such as cholera, typhoid fever, and tuberculosis. Other diseases, like malaria and AIDS, continue to kill many children and adults of reproductive age.
At this point I could simply say, “I’ve given the evidence, and it shows that evolution is true. Q.E.D.” But I’d be remiss if I did that, because, like the businessman I encountered after my lecture, many people require more than just evidence before they’ll accept evolution. To these folks, evolution raises such profound questions of purpose, morality, and meaning that they just can’t accept it no matter how much evidence they see. It’s not that we evolved from apes that bothers them so much; it’s the emotional consequences of facing that fact. And unless we address those concerns, we won’t progress in making evolution a universally acknowledged truth. As the American philosopher Michael Ruse noted, “Nobody lies awake worrying about gaps in the fossil record. Many people lie awake worrying about abortion and drugs and the decline of the family and gay marriage and all of the other things that are opposed to so-called ‘moral values.’ ”
Nancy Pearcey, a conservative American philosopher and advocate of intelligent design, expressed this common fear:
Why does the public care so passionately about a theory of biology? Because people sense intuitively that there’s much more at stake than a scientific theory. They know that when naturalistic evolution is taught in the science classroom, then a naturalistic view of ethics will be taught down the hallway in the history classroom, the sociology classroom, the family life classroom, and in all areas of the curriculum.
Pearcey argues (and many American creationists agree) that all the perceived evils of evolution come from two worldviews that are part of science: naturalism and materialism. Naturalism is the view that the only way to understand our universe is through the scientific method. Materialism is the idea that the only reality is the physical matter of the universe, and that everything else, including thoughts, will, and emotions, comes from physical laws acting on that matter. The message of evolution, and all of science, is one of naturalistic materialism. Darwinism tells us that, like all species, human beings arose from the working of blind, purposeless forces over eons of time. As far as we can determine, the same forces that gave rise to ferns, mushrooms, lizards, and squirrels also produced us. Now, science cannot completely exclude the possibility of supernatural explanation. It is possible—though very unlikely—that our whole world is controlled by elves. But supernatural explanations like these are simply never needed: we manage to understand the natural world just fine using reason and materialism. Furthermore, supernatural explanations always mean the end of inquiry: that’s the way God wants it, end of story. Science, on the other hand, is never satisfied: our studies of the universe will continue until we go extinct.
Evolution is neither moral nor immoral. It just is, and we make of it what we will. I have tried to show that two things we can make of it are that it’s simple and it’s marvelous. And far from constricting our actions, the study of evolution can liberate our minds. Human beings may be only one small twig on the vast branching tree of evolution, but we’re a very special animal. As natural selection forged our brains, it opened up for us whole new worlds. We have learned how to improve our lives immeasurably over those of our ancestors, who were plagued with disease, discomfort, and a constant search for food. We can fly above the tallest mountains, dive deep below the sea, and even travel to other planets. We make symphonies, poems, and books to fulfill our aesthetic passions and emotional needs. No other species has accomplished anything remotely similar.
But there is something even more wondrous. We are the one creature to whom natural selection has bequeathed a brain complex enough to comprehend the laws that govern the universe. And we should be proud that we are the only species that has figured out how we came to be.
[Table fragment: animal groups native to oceanic islands (insects and other arthropods, e.g., spiders) versus groups typically missing from them (amphibians, freshwater fish).]
Further, when you look at the type of insects and plants native to oceanic islands, they are from groups that are the best colonizers. Most of the insects are small, precisely those that would be easily picked up by wind. Compared to weedy plants, trees are relatively rare on oceanic islands, almost certainly because many trees have heavy seeds that neither float nor are eaten by birds. (The coconut palm, with its large buoyant seeds, is a notable exception, occurring on almost all Pacific and Indian Ocean islands). The relative rarity of trees, in fact, explains why
Actually, the nested arrangement of life was recognized long before Darwin. Starting with the Swedish botanist Carl Linnaeus in the eighteenth century, biologists began classifying animals and plants, discovering that they consistently fell into what was called a “natural” classification. Strikingly, different biologists came up with nearly identical groupings. This means that these groupings are not subjective artifacts of a human need to classify, but that they tell us something real and fundamental about nature. But nobody knew what that something was until Darwin came along, and showed that the nested arrangement of life is precisely what evolution predicts. Creatures with recent common ancestors share many traits, while those whose common ancestors lay in the distant past are more dissimilar. The “natural” classification is itself strong evidence for evolution.
...evolutionary change, even of a major sort, nearly always involves remodeling the old into the new. The legs of land animals are variations on the stout limbs of ancestral fish. The tiny middle ear bones of mammals are remodeled jawbones of their reptilian ancestors. The wings of birds were fashioned from the legs of dinosaurs. And whales are stretched-out land animals whose forelimbs have become paddles and whose nostrils have moved atop their heads.
Hard problems often yield before science, and though we still don’t understand how every complex biochemical system evolved, we are learning more every day. After all, biochemical evolution is a field still in its infancy. If the history of science teaches us anything, it is that what conquers our ignorance is research, not giving up and attributing our ignorance to the miraculous work of a creator. When you hear someone claim otherwise, just remember these words of Darwin: “Ignorance more frequently begets confidence than does knowledge: it is those who know little, and not those who know much, who so positively assert that this or that problem will never be solved by science.”
Vestigial genes can go hand in hand with vestigial structures. We mammals evolved from reptilian ancestors that laid eggs. With the exception of the “monotremes” (the order of mammals that includes the Australian spiny anteater and duck-billed platypus), mammals have dispensed with egg-laying, and mothers nourish their young directly through the placenta instead of by providing a storehouse of yolk. And mammals carry three genes that, in reptiles and birds, produce the nutritious protein vitellogenin, which fills the yolk sac. But in virtually all mammals these genes are dead, totally inactivated by mutations. Only the egg-laying monotremes still produce vitellogenin, having one active and two dead genes. What’s more, mammals like ourselves still produce a yolk sac—but one that is vestigial and yolkless, a large, fluid-filled balloon attached to the fetal gut. In the second month of human pregnancy, it detaches from the embryo.