99.9956% of the Earth’s History Took Place Before Our Species Existed



Homo sapiens have been around for a long time.  Our species dates back some 200,000 years, but here’s the reality check….  The earth is 4.56 billion years old, and quite simply, 99.9956% of the earth’s history took place without our species in it.  I’ll say that again: 99.9956% (yes, that’s the third place to the right of the decimal that finally isn’t a 9!) of the history of the planet took place before our species even existed.  Single-celled microbes ruled the earth 17,500 times longer than we’ve been around.  Ants have been doing their thing 600 times longer than our species has been stepping on them.  Even 93% of human history predates our species of Homo sapiens.  (The genus Homo emerged by 2.8 million years ago; by the time we arose 200,000 years ago Homo erectus, for example, had already been around for 2 million years!)  In the end our species is not unlike a butterfly, fluttering around for a few weeks and believing — innocently and naively — that the earth was made just for it… but we’ve existed for 1/22,835th of the earth’s existence!  That amounts to four one-thousandths of a single percent of the planet’s history (0.004%).

To give this number some context…

  • If the earth were a house built 100 years ago, then we moved in 38 hours ago.
  • If the earth’s history were laid out over the distance of a mile, our existence would measure 2.77 inches.
  • You’d be 63 years old before Homo sapiens history would represent a whole day in your life.
  • If the history of the world were bundled into a sum of $10,000, our contribution to the total would be… 44 cents.
  • If you changed out that ten thousand bucks into its constituent one million pennies and stacked them all atop one another, your “One Million Penny Tower” would reach nearly one mile in altitude, and our place would be just the top two and-a-half inches. 
  • For a man six feet tall, 0.004% would be about 80 micrometers, or roughly the thickness of a human hair.
  • If the earth’s history were converted to the word count of the 1424-page War and Peace, we would first appear on the last page, in its last paragraph, making up the final 25 words.  How about if the history of the earth were the Protestant Bible?  Instead of appearing in Genesis we would only appear in its last 33 words.  Harry Potter and the Deathly Hallows?  The last 8 words of the book.  As Ron would say, bloody hell.
  • If the span of the planet’s history were laid out across the width of the continental USA, beginning at the Pacific, you would not encounter our species until the beach of the Atlantic.  Across some 2,680 miles, Homo sapiens history would barely extend beyond the far coast’s low-tide line: 620 feet.
  • The largest-capacity football stadium in the United States is Michigan Stadium, with 107,601 seats.  If those seats were to represent the earth’s history, the history of our species would occupy about five of them.
  • If you were to pilot a time machine into the past at a speed of one million years an hour, you’d reach the beginning of our species in a brisk 12 minutes; it would take another 190 days to reach the beginning of the earth.  (And for that matter, an additional 381 days on top of that to reach the beginning of the universe!)
  • If the planet’s history were the height of the Eiffel Tower, we’d be half an inch.  The Statue of Liberty, ground-to-torch?  Four millimeters, only slightly more than its (2.5mm) copper coating.  The Empire State Building?  Two-thirds of an inch, the size of a separation between the granite slabs.
  • If the history of the earth were converted to the weight of one metric ton, the total weight of our species would be 43 grams — the weight of three empty 12 ounce aluminum soda cans, one empty two-liter plastic soda bottle, or 17 sugar cubes.
  • If we apply the same equations to speed… If the history of the earth were converted into the speed of a car cruising along at 75 mph, our history would represent about 17 feet per hour… slower than a patch of sunlight traverses the floor.
  • In a two hour movie, it’s seven frames of footage.
  • If the history of the earth were equated to the 24,901 mile circumference of the earth, Homo sapiens history would be one single mile (1.09 miles, or 1.75 km).
  • Or how about this — look at your watch right now.  If the earth began exactly twenty-four hours ago, count off the most recent four seconds (3.8, to be precise), and that’s it… that comprises the span of our species from beginning to the current day.  Not quite four seconds in twenty-four hours.  And it’s only in the last second that Homo sapiens left Africa and invented Wal-Mart.  (I’m sure we’ve done other things too, but those were the first two to come to mind.)

For the record, you can do this at home.  4.567 billion divided by 200,000 is 22,835; so just divide anything by 22,835 and that will give you the same 0.004%.  Example: there are 525,600 minutes in a year; divide that by 22,835 and you come up with a species (us) that has existed for 23 minutes of a 365-day year.
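If you’d rather let a computer do the dividing, here is a minimal Python sketch of the same divide-by-22,835 trick; the constants are the ones used in the text, and the helper name is just for illustration:

```python
# Reproduce the "divide by 22,835" comparisons from the text.
EARTH_AGE = 4.567e9      # years (figure used in the text)
SPECIES_AGE = 200_000    # years of Homo sapiens

ratio = EARTH_AGE / SPECIES_AGE          # 22,835

def our_share(total):
    """Scale any quantity down to our species' share of earth history."""
    return total / ratio

print(round(ratio))                      # 22835
print(round(our_share(525_600)))         # minutes in a year -> 23 minutes
print(round(our_share(63_360), 2))       # inches in a mile  -> 2.77 inches
```

Feed it seconds in a day, dollars, seats, frames — any total — and it spits out our species’ sliver of it.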

The Drop in the Ocean



In an earlier post I discussed genealogy, and the fact that every person of Western descent is related to every successful child-bearing line of ninth-century Europe, on average, some 36,650 times over.  Really, by the 10th generation back you find 1,024 ancestors; by the 20th generation you have more than a million.  The takeaway: in the past 600 years it has taken more than a million ancestors… to make YOU.  That’s just straightforward math.  By the 50th generation you break the one quadrillion mark… and there haven’t been one quadrillion humans in the entire span of our species, so that means there is some serious genealogical overlap!  Sometimes I get lost playing on the calculator; I remember doing most of this genealogy math back in 2000, and when I later saw such movies as Underworld, the 2003 vampire/werewolf flick in which there is just one last descendant of the Corvinus clan, I stifled a laugh, knowing what they didn’t.  The idea that there could be ONE last descendant of any person who lived 1500 years ago displays an inherent misunderstanding of how genealogy and descent work.  It’s like presuming the atoms from one teardrop shed into the ocean 1500 years ago could be drawn out today intact with a beaker.  Simple physics (and a previous post) shows that every human cell is made up of some 100 trillion atoms, and a single teardrop contains billions of trillions of atoms.  The reality is that the one teardrop’s atoms have dispersed into every depth of the ocean… and not only into the ocean, but into the atmosphere through evaporation… back onto land through rainfall… into growing food in the ground… even into people who’ve eaten that food and breathed that air.  That tear shed 1500 years ago has literally seeped into the fabric of the entire world today.
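The ancestor-doubling math above can be sketched in a few lines of Python.  Note that the “no overlap” assumption built into the doubling is exactly the assumption the paragraph goes on to demolish:

```python
# Sketch of the ancestor-doubling math: each generation back
# doubles the number of ancestor "slots" (assuming no overlap,
# which is precisely what breaks down in reality).
def ancestors(generation):
    """Number of ancestor slots at a given generation back."""
    return 2 ** generation

print(ancestors(10))   # 1,024 ancestors ten generations back
print(ancestors(20))   # 1,048,576 -- over a million in ~600 years

# Find the generation where the slots pass one quadrillion (10**15)
gen = 1
while ancestors(gen) < 10**15:
    gen += 1
print(gen)             # 50
```

Since one quadrillion slots vastly exceeds every human who has ever lived, the same real ancestors must fill those slots millions of times over.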

Physicist Lawrence Krauss commonly makes reference to the fact that every breath we take contains a few stray atoms from Julius Caesar’s last breath… a similar argument, this.  Now the same claim would not necessarily be true of someone’s dying breath across the globe ten or fifty years ago, but 2000 years is enough time for those atoms to disperse far and wide.  The atoms in our solar system are between five billion and 13.7 billion years old; there is no atom on earth or in your body right now that could be under five billion years old; they are simply and endlessly recycled.  Whatever an atom is doing at this moment is assuredly not what it has always been doing, nor what it will continue doing.  Make no mistake: heavier atoms might become locked up in rocks for millennia, others in the bedrock of the earth for billions of years, but hydrogen, carbon, oxygen and nitrogen are part of a more interactive recycling chain in the atmosphere and biosphere.  These lighter elements are commonly recycled; they bond into molecules, disband, rejoin, become part of water, become part of something organic, travel the earth, and will in fact outlive the earth.  With these simple dynamics, given enough time, the atoms that at one particular moment came together to form a single teardrop will have dispersed over the entire globe and into billions of other objects.  The same principle applies in genetics.  To return to the concept of genealogy: Brutus does not seem to have had any surviving issue, but any of Caesar’s other assassins on that fateful Ides of March whose lineages survived more than a few hundred years passed along DNA that 2000 years later has seeped into every person who possesses even a genetic toehold of Western heritage today… and not just once but literally millions of times over.
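The Caesar’s-breath claim is a classic Fermi estimate, and it can be sketched in Python.  Every input below is a rough order-of-magnitude assumption (breath volume, atmospheric mass, perfect mixing), not a measured value:

```python
# Back-of-the-envelope version of the "Caesar's last breath" claim.
AVOGADRO = 6.022e23

breath_liters = 0.5                      # one tidal breath (assumption)
molar_volume = 22.4                      # liters per mole of gas at STP
molecules_per_breath = breath_liters / molar_volume * AVOGADRO   # ~1.3e22

atmosphere_kg = 5.1e18                   # total mass of the atmosphere
molar_mass_air = 0.029                   # kg per mole of air
molecules_in_atmosphere = atmosphere_kg / molar_mass_air * AVOGADRO  # ~1e44

# If Caesar's last breath is now fully mixed into the atmosphere, the
# expected number of its molecules in any breath you take is
# (molecules per breath / molecules in atmosphere) * molecules per breath:
expected_overlap = molecules_per_breath ** 2 / molecules_in_atmosphere
print(round(expected_overlap, 1))        # ~1.7 -- "a few stray atoms"
```

The answer lands around one or two molecules per breath, which is all the claim needs: not zero, and not zero for anyone.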

By the way, just to be clear, the same thing is true of pee.  Not to undermine the poetic lyricism of all that I just wrote above, but if someone peed into the ocean 1500 years ago…

The Fault in (Counting) Our Stars



Occasionally the immensity of the universe is laid bare in a single statistic.  The number of stars in our galaxy is estimated between 100 and 250 billion.  Why such a wide discrepancy?  Two reasons… (1) not all can be seen from our position in the galaxy, and (2) frankly, if you were to dedicate your entire life — night and day without break —  to counting the stars in our galaxy, you’d never even cross the one percent threshold.

This is simple math:  The average human lifespan takes place over a span of 80 years, give or take.  Eighty years breaks down to roughly two and a half billion seconds.  If you were to count all the stars in our galaxy — one second at a time — you would be counting for nearly 8,000 years, nonstop.  One second = one star; 250 billion seconds = nearly 8,000 years.  Counting the stars of our galaxy would be a commitment of nearly 100 lifetimes.  Just to be clear, we’re not talking about visiting any of these stars… if indeed the number of stars is closer to the higher end of the estimate, it would be the year 10,015 before you finished simply COUNTING them.
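The star-counting arithmetic, spelled out as a short Python sketch:

```python
# One star per second: how long to count 250 billion stars?
SECONDS_PER_YEAR = 365.25 * 24 * 3600    # ~31.6 million seconds

lifespan_seconds = 80 * SECONDS_PER_YEAR
print(f"{lifespan_seconds:.2e}")         # ~2.5e9 seconds in 80 years

stars_high_estimate = 250e9
years_to_count = stars_high_estimate / SECONDS_PER_YEAR
print(round(years_to_count))             # 7922 years of nonstop counting

lifetimes_needed = years_to_count / 80
print(round(lifetimes_needed))           # 99 lifetimes
```

The exact figure comes out just shy of 8,000 years, which is where the year-10,015 punchline comes from.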

Not so Simple… the Surprisingly Convoluted History of Occam’s Razor



 Occam’s Razor.  It is a basic scientific principle. And it says, all things being equal, the simplest explanation tends to be the right one.

— Ellie Arroway, in the 1997 film Contact

The line of dialogue above is actually uttered twice by Jodie Foster in Contact, a film that popularized the concept of Occam’s Razor, a simple but elegant axiom which forms the bedrock on which all logic, investigation and scientific principle rests.  As a consummate fact-checker I have long held this fundamental parsimony in high regard.  Carl Sagan (who authored Contact) referenced the axiom again in The Demon-Haunted World:  “Occam’s Razor.  This convenient rule-of-thumb urges us when faced with two hypotheses that explain the data equally well to choose the simpler.”

But these references above are really not so much a quote from Ockham as they are a very faint suggestion of what he actually wrote.  Simply put, you will not find Occam’s Razor anywhere in Ockham’s writings.  William of Ockham, c. 1287–1347, was an English Franciscan friar and theologian, and considering him a father of modern scientific thought doesn’t quite match up with reality.  The closest he ever came to the above quotes in any extant writing is “Numquam ponenda est pluralitas sine necessitate,” or “Plurality must never be posited without necessity.”  His words, arising out of a theological work, spent the next few centuries morphing and falling out of context.

The axiom reemerged in 1510 thanks to Ockham enthusiast Alessandro Achillini, as “quia multiplicantur entia sine necessitate,” and again a century later from John Punch as “Non sunt multiplicanda entia sine necessitate,” or “Entities [things] must not be multiplied beyond necessity.”  A generation after Punch, in 1672, Isaac Newton seemed familiar with this revised phrase as he wrote, “I see no reason why they, that adhere to any of those hypotheses, should seek for other Causes of these Effects, unless (to use the Objectors argument) they will multiply entities without necessity.”

And so the axiom has evolved over the past seven centuries; translated, twisted and reinterpreted so many times that now the quote attributed to Ockham (even, really, the original meaning) bears only a slight resemblance to the words written by him.  In fact, Aristotle and Ptolemy wrote lines that are far more similar, more than a thousand years earlier.

  • Aristotle: “We may assume the superiority ceteris paribus [other things being equal] of the demonstration which derives from fewer postulates or hypotheses.”
  • Ptolemy:  “We consider it a good principle to explain the phenomena by the simplest hypothesis possible.”

Reality check: the axiom wasn’t even referred to as “Ockham’s Razor” until 1852!  Occam’s Razor may be a timeless truism, but the truth is, he never said it.

 ‘Occam’s Razor’ is a modern myth. There is nothing mediaeval in it, except the general sense of the post-mediaeval formula: Entia non sunt multiplicanda praeter necessitatem. This myth has come to full maturity and secured general assent, within the lifetime of many philosophers of the present day…

— William Thorburn, 1918

 The widespread layperson’s formulation that ‘the simplest explanation is usually the correct one’ appears to have been derived from Occam’s razor.

— Wikipedia (emphasis added)

Why We Say It: The Origin of the Names of the Days and Months



As Neil deGrasse Tyson posted on Twitter on January 1, 2017:

 To all on the Gregorian Calendar, Happy New Year! A day that’s not astronomically significant…in any way…at all…whatsoever.

Our calendar is a fascinating monument to civilizations past.  The names of the months and the days of the week are quickly learned by any child, but behind those names are stories and meanings we don’t even consider today.  Old gods and ceremonies of sacrifice otherwise forgotten for millennia still live on today in the same place where one might inscribe appointment reminders.  The days and months of our 21st century calendar are populated by hobgoblins of ancient deities and bizarre customs long gone, today simply pronounced differently, with different emphasis on different syllables or using different vowels, while the week itself stands as an archaic tribute to the solar system.

Carpe Septimana…

Any examination of the week we observe today in Western society must begin with its Roman origins some 20 centuries ago.  Essentially, it was the solar system that gave us the days of the week.  The seven days of the week on the Roman calendar honored gods of the Roman pantheon; specifically, the seven most active celestial bodies that could be observed in the sky, the seven “wandering stars” visible to the naked eye that darted around the otherwise static skies.  Hence, the days of the week reflected the astronomy of the skies, much as the zodiac played a role in mapping divisions of the year.

  • dies Solis (our Sunday) — named for the sun
  • dies Lunae (our Monday) — named for the moon
  • dies Martis (our Tuesday) — named for Mars
  • dies Mercurii (our Wednesday) — named for Mercury
  • dies Iovis (our Thursday) — named for Jove (colloquial form of Jupiter)
  • dies Veneris (our Friday) — named for Venus
  • dies Saturni (our Saturday) — named for Saturn

It is important to note that these names remain largely intact in the Romance languages of today.  Spanish, Italian, French, Portuguese and Romanian still retain the roots of these ancient names, altered only by time and geography — those two titanic forces that turn one word into many similar words… and by the fact that the name of Solis was gradually retired in favor of the Judeo-Christian Dominicus, or “Day of the Lord” (in French, Dimanche); the English version of the calendar, though, retains the Solis-Sun connection.

The English version of the week that we use today exposes deep and far-reaching Scandinavian/Germanic roots, not surprising in that English is ultimately a Germanic language.  Borrowed from the Roman template, it evidently emerged out of an era before the Latin version exchanged Solis for Dominicus.  This Germanic week replaces many of the distinctly Roman names with their corresponding deities from Norse and Northern European mythology.  For example, our “Friday” is a derivation of “Frigg’s day,” Frigg being the Norse counterpart to the Roman Venus; likewise, in the German/English version of the week, the thunder-tossing Jove is dropped in favor of the thunder-throwing Thor.  One result of this conversion from Roman mythology to Norse mythology is that the former association with the visible bodies of the solar system became more oblique.  The German/English week today is akin to a half-fallen monument, its original configuration visible only by visiting its predecessor.  Today our week is a hodgepodge of four Norse gods, one Roman god/planet, and the sun and the moon.

  • Sunday — Sun’s day
  • Monday — Moon’s day
  • Tuesday — Tyr’s day.  Tyr was the Norse god of war (Norse equivalent of Mars)
  • Wednesday — Woden’s day, named for Odin (or Woden, chief Norse god)
  • Thursday — Thor’s day
  • Friday — Frigg’s day, for the Norse goddess of love.  The only day of the week in either Norse or Roman mythology overtly named for a female
  • Saturday — Named for Saturn, this is the only day in English to retain its Roman origin

Being Wrong Four Months of the Year…

A “month” derives from the cycle of the “moon”; even today the similarity between the two words shows that the moon is the “sine qua non” (“without which there is none”) of the month.  The calendar we use today is a truly bizarre collection of archaic holdovers and, frankly… incorrect math.  Historically speaking, January’s position as the first month of the year is actually a pretty new concept!  Even as late as the Julian calendar, March was viewed as the official beginning of the year.  Our current Gregorian calendar was adopted by Catholic Europe upon its creation in 1582 but was subject to a much more gradual acceptance throughout Protestant Europe; it was not adopted by the English-speaking world until 1752.  It was this Gregorian calendar that established January as the uncontested start of the year, but in so doing it left a peculiar “glitch in the machinery”: September (literally, “seventh month”) is now our ninth, October (“eighth month”) is now the tenth, and so on.  The last four months of the year are simply factually inaccurate.  One third of the year is just bad math.

During the Middle Ages, Charlemagne reinvented the twelve-month cycle with a host of entirely new names.  While his agriculture-based theme lasted almost a thousand years, no one finds Witumanoth (“wood month”) heading the September page today.  Even reduced to month names that are nothing more than abstractions and inconsequential numbers, the leftover Roman calendar has proven to have more staying power than Charlemagne’s.

January — Ianuarius, named abstractly after the Latin term for a door, or more specifically, Ianus (Janus), the Roman god of doorways and gateways.  It must be understood that January and February were late additions to the Roman calendar, and as such their position was shuffled around in early calendars.  Sometimes they ended the year, sometimes they began the year, and in some instances February actually preceded January, so that their order in relation to one another was reversed.  The Julian calendar essentially treated the two months as prologue, ushering in the year that would officially begin with March.  Look at any old newspaper printed in England or America before 1752 and it will identify the year at the top as “previous year/current year” (for example: February 1, 1732/33) until March 25, at which point it finally leaves behind any reference to the previous year.  Charlemagne’s calendar renamed January, appropriately, Wintarmanoth (“winter month”).

February — Februarius, named for the ceremonies of purification and rituals of sacrifice which occupied the second half of the month.  That’s right, we actually have a month named after sacrifice… but it’s the shortest month, if that makes any difference.  Really, this is not as odd as it might seem at first blush, as a slightly different custom of purification and sacrifice known as Lent would later fill the vacuum left behind by those earlier pagan customs.  Charlemagne declared the month Hornung, and while this word does not exist today it may be some variation on “mud month” or “antler-shedding month,” or simply “bastardized” month, perhaps because it is a shorter cycle.

March — Martius, named for the Roman god of war.  In case you’re keeping track, he does have a day in the Latin week, too.  A Tuesday in March is just a double-dose of Mars.  For Charlemagne this was “Lent month,” Lentzinmanoth.

April — Aprilis, named for…?  April is interesting, because its etymology and any association are unclear.  Aphrodite?  The Latin verb (aperire, apertus), “to open”?  The origin of April seems to have been a mystery to scholars and writers even 20 centuries ago, so any chance of uncovering it today is unlikely.  The Roman senate briefly renamed it Neronius in honor of Nero; Charlemagne decided it was Ostarmanoth (“Easter month”), but as we all know, Easter isn’t always in April; it likes to hide… like one of its little eggs.

May — Maius, named for the Greek goddess Maia, mother of Hermes.  Ovid suggests a secondary association, named for maiores, or Month of the Elders.  Charlemagne’s calendar regarded May as Wonnemanoth (“joy month”).  Trivia:  May is the only month that, in any year, never begins on the same day of the week as any other month.

June — Iunius, named for Juno, wife of Jupiter.  As with May, Ovid suggests a secondary association, named for iuniores, or Month of the Young, to contrast with the previous month.  Whether named for Juno or for youth, it’s still more exciting than Charlemagne’s choice of Brachmanoth (“plowing month”).

July — Originally known as Quintilis (“fifth”), it was renamed by the Roman Senate for Julius Caesar, in honor of his birth month.  Charlemagne’s calendar celebrated it as Heuvimanoth (“hay month”).

August — Originally known as Sextilis (“sixth”), it was renamed in honor of Augustus.  One may be grateful that the following emperor Tiberius opted not to continue this tradition, or the remainder of the year would have been renamed by the end of the Julio-Claudian dynasty.  And there were still another 140 emperors to come after that…  For Charlemagne, August was Aranmanoth (“reaping month”).

September — September (“septem” = “seven”).  After Tiberius rejected the Senate’s attempt to rename it for him, Caligula and Domitian each renamed the month Germanicus; despite the efforts of two different emperors the change didn’t stick.  And so the seventh month today remains the ninth.

October — October (“octo” = “eight”).  Charlemagne regarded it as Windumemanoth (“vintage month,” or to explain better, “month of wine”).  For the record, a month of wine would make Halloween a lot scarier.

November — November (“novem” = “nine”).  Charlemagne considered it Herbistmanoth (“harvest month”).

December — December (“decem” = “ten”).  Like a strange accounting error, our current twelve-month year ends with something called the Tenth.  Charlemagne deemed it Heilagmanoth (“holy month”).

And that makes twelve months.  So why am I still writing?  Well, truth be told, the year still had one more surprise to offer….  There was another month, a lost month, and it remains today, even if most of it is hiding just out of our sight.

Mercedonius — The thirteenth month, also known as the “mensis Intercalaris.” This was a month that was only occasionally implemented on the Roman calendar following February, to even out the days of the year when and if necessary.  A phantom month, it was imposed arbitrarily and was finally discontinued altogether with the introduction of the Julian Calendar.  But here’s the thing… the thirteenth month is not entirely gone.  It’s still out there.  Today, one vestige of Mercedonius remains… in our elusive February 29.



The Dynamic Earth and the Engine of Plate Tectonics



 In Lower Pomerania is the Diamond Mountain, which is two miles and a half high, two miles and a half wide, and two miles and a half in depth; every hundred years a little bird comes and sharpens its beak on it, and when the whole mountain is worn away by this, then the first second of eternity will be over.

— The Brothers Grimm, The Shepherd Boy

I’ve lived in beautiful old Savannah, Georgia since the 1980s, and being just across the river from South Carolina it was hard not to notice the kerfuffle caused just a few years ago when former SC governor Mark Sanford turned the phrase “hiking the Appalachian Trail” into a euphemism for “having an affair with your Argentine mistress…” (though curiously, “having an affair with your Argentine mistress” has surprisingly not caught on as the euphemism for hiking the Appalachians!).

The Appalachians are a curious range of rolling mountains.  Lacking the soaring verticality of other worldwide mountain ranges, the Appalachians look to us today like baby mountains.  Unlike the Alps or the Himalayas, there is no current tectonic activity there; the Appalachians are inactive.  And to be clear, they aren’t just dormant; these mountains are dead.  The Appalachians are ancient remnants, and they’ve been decaying for 200 million years.  Think of them, if you like, as a parting gift from Africa.  Yes, we got the Appalachians thanks to Africa.  Mountain ranges are the result of colliding tectonic plates; some today are active and dynamic, others merely the remnants of ranges that formerly were.  The Appalachians are an example of the latter, a visible leftover from the construction of the supercontinent Pangaea some 300 million years ago, as the African plate collided with the North American plate.

 Today the Appalachians, stretching from Maine to Georgia, seem a relatively benign, well-rounded sort of mountain range.  Such gentle rolling topography speaks to the power of erosion, for three hundred million years ago their jagged still-rising peaks soared six or seven miles high, rivaling today’s Himalayas as some of the mightiest mountains in the history of the world.

— Robert M. Hazen, The Story of Earth, p. 234

A similar fate befell the MacDonnell Ranges in Australia, which emerged out of the same geological epoch.  Today Australia is the flattest continent on earth; so flat, in fact, it’s concave.  Richard Smith:  “All other continents boast mighty mountain ranges—the Rockies, the Alps, the Andes, the Himalayas—but they’re all relative newcomers and mostly still growing.”

 In their heyday, central Australia’s MacDonnell Ranges would have been a mountaineer’s dream, as high, it’s thought, as any on Earth today. But… after a near-eternity of erosion, the diminished remnants of the MacDonnell Ranges still run in long, jagged wrinkles across the heart of Australia. From the air, they protrude into this ancient landscape like the bony skeleton over which the dry skin of a tired continent is draped.

— Richard Smith (http://www.pbs.org/wgbh/nova/earth/australia-first-years.html#australia-explodes)

And the Rockies…?

 The Rocky Mountains.  Miles high, stretching all the way from New Mexico to Canada, you’d think they’ve been here forever.  But you’d be wrong.  These majestic mountains have come and gone several times.

— Kirk Johnson (http://www.pbs.org/wgbh/nova/earth/making-north-america.html#north-america-origins)

Long before the mountains we see today there were previous mountains that raindrops washed away over hundreds of millions of years; this realization alone illustrates the deep age of the earth.  The Rockies of today are only the most recent iteration; the “Ancestral Rockies” eroded into sand grains eons before the peaks of today rose.

If the history of the earth were compressed into the span of a single year even these longest-lived mountain ranges of 300 million years would be here and gone in three-and-a-half weeks’ time, roughly the time it would take for a nasty cut on your arm to scab over and disappear.  An individual mountain would rise and erode in the span of a few days, a legacy not unlike an acne breakout.

Mountains, once poetically revered by man as eternal, are in fact ever-changing and mortal, even if not exactly ephemeral or fleeting.  The Alps and the Himalayas today are majestic, massive and soaring, but the mechanics and the physics behind them have much in common with your ordinary car crash.  It’s said that it’s impossible to look away from a car crash; maybe on some level that’s why mountain ranges are equally mesmerizing.  Mountain ranges are simply a slow-motion ripple effect as one plate or series of plates collides with another, whether it be the Indian subcontinent colliding with the Asian plate (Himalayas), or the African plates colliding with the European plates (Alps).  Both are simply the “flash point” of the crash.  As observed by the late, great narrator Edward Herrmann in the original 2007 History Channel documentary How the Earth was Made:  “The continental crust along the collision point experiences extreme pressure, and the solid rock itself is warped and buckled.”

 Plate tectonics are responsible for all of the earth’s mountain ranges.  And over millions of years of growth the only thing that has stopped them grinding inexorably skywards is erosion, erosion by snow, wind and rain….  The height of mountains around the world are determined by these two opposing forces: uplift and erosion, changing them by fractions of an inch, up or down, each year.

— How the Earth was Made, History Channel, 2007

And make no mistake, “inches” truly means inches… as imperceptible to time as to touch.  Take a moment to consider the store-bought globe one might find at any local store or on a teacher’s desk: its geography undulates by fractions of an inch, and its mountain ranges are easily perceptible by running one’s fingers over the surface.  But in reality these features are grossly exaggerated.  As astrophysicist Neil deGrasse Tyson has observed, if the earth were accurately scaled down to the size of any store-bought globe, all those undulations would effectively disappear within the dermal ridges of a fingerprint.  In short, the 29,000 ft. elevation of Mt. Everest and the 36,000 ft. depth of the Mariana Trench would amount to less than a single ridge of your fingerprint.  The 250+ people who have died ascending Everest died within a space smaller than we could even notice with a single touch.  This is the environment in which we fragile creatures live and die.
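Tyson’s globe observation is easy to check with a few lines of Python; the 12-inch globe diameter below is an assumption for illustration:

```python
# Scale Everest and the Mariana Trench down to a 12-inch desk globe.
EARTH_DIAMETER_FT = 7_918 * 5_280        # ~7,918 miles, in feet
GLOBE_DIAMETER_IN = 12.0                 # assumed desk-globe size

scale = GLOBE_DIAMETER_IN / EARTH_DIAMETER_FT   # globe inches per earth foot

everest_in = 29_000 * scale              # Everest's elevation, scaled
trench_in = 36_000 * scale               # Mariana Trench depth, scaled
print(round(everest_in, 4))              # 0.0083 inches (~0.2 mm)
print(round(trench_in, 4))               # 0.0103 inches (~0.26 mm)
```

Both features scale to roughly two-tenths of a millimeter, comfortably smaller than the half-millimeter spacing of fingerprint ridges.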

The world around us is in constant motion, but at a scale we cannot appreciate and at a rate that makes the growth of even the largest tree seem downright manic.  A slight spoiler alert (though you won’t be around to see it):  Over the next tens of millions of years the African plate will continue north, closing up the Mediterranean Sea, and mountains will undulate and ripple over the European continent in a slow-moving impact not unlike ripples in a pond.  And as the Pacific plate continues to rub against the North American plate, Los Angeles (ever creeping northward) will one day emerge across the bay from San Francisco.  The entire US West Coast is a dynamic construction zone.  In the words of Kirk Johnson, Director of the Smithsonian’s Museum of Natural History, “the West Coast is the most recent addition in the great continental construction project that built North America.”

 Strings of islands [moved] up from the Pacific, smacking into North America over millions of years.  These traveling land masses radically reshaped our Pacific coastline…. It was a titanic geological logjam that grafted thousands of miles of new coastline onto our continent, and still had enough power to push up the spectacular coastal mountain ranges of Alaska and British Columbia.

— Kirk Johnson (http://www.pbs.org/wgbh/nova/earth/making-north-america.html#north-america-origins)

The earth is dynamic and restless, ceaseless and ever-shifting, though humans may be excused for not seeing it… in terms of deep time the human lifespan is the length of a camera click; as such, we never see more than one frame of an otherwise fluid process.  We see a screen grab and can’t comprehend that it’s a movie; it is difficult to appreciate the dynamic quality of something fluid from a single picture.  The ball is in mid-air and the skater is mid-triple-lutz; we simply lack the lifespan to comprehend the motion.  For example, the continents we call “North America” and “South America” didn’t even touch until three million years ago… in geological time, about a second ago.

A sad consequence of these discoveries is that there was no Atlantis, nor could there have been, outside of Plato’s imagination.  Plate tectonics provides the irrefutable proof.  Even I believed in the existence of Atlantis in my youth, but the simple fact is that the past twelve thousand years (and that is the timetable that would be in question) represent only about a thousand feet of plate movement: not nearly enough space for even a peninsula (much less an island or continent) to rise or sink between Europe, North America OR the Mediterranean.  And at this point, pretty much all of the continental drift over the past 300 MILLION YEARS can be reconstructed and accounted for.

Today the closest distance from North America to Europe is 2,100 miles… our entire existence as a species is so brief that it accounts for only the most recent THREE miles of that distance.  Three miles of twenty-one hundred:  that’s 0.15%.  Sure, there have been ice ages in the past several tens of thousands of years, the most recent of which created land bridges and accommodated the spread of our species, but PLATE TECTONICS plays out on a scale not of tens of thousands of years but of tens of millions, a scale so gradual that it has never once had a chance to impact our species, and in fact went completely unnoticed by humans until our technology developed to observe it in the 1940s.
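A quick sanity check on those three miles, as a back-of-envelope sketch (the one-inch-per-year spreading rate is the essay's own rough figure, not a precise geophysical value):

```python
# Back-of-envelope: how far has the Atlantic widened during our species' existence?
# Assumes the essay's rough 1-inch-per-year spreading rate.
INCHES_PER_MILE = 63_360

species_years = 200_000
drift_miles = species_years * 1 / INCHES_PER_MILE   # inches travelled, expressed in miles
share = drift_miles / 2_100 * 100                   # share of the 2,100-mile gap

print(f"{drift_miles:.2f} miles, or {share:.2f}% of the gap")
```

Two hundred thousand inches works out to just over three miles, and three miles of twenty-one hundred is indeed about 0.15%.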

Alfred Wegener was a meteorologist who first proposed the idea of “continental drift” in 1912.  Intrigued by the matching coastlines of the continents, Wegener suggested a correlation between the gradual movement of icebergs and that of continents, only to find an “icy” reception among geologists.  But much like Darwin proposing evolution a century before the discovery of DNA, Wegener’s idea was generally correct even if he could not fathom the mechanism behind the process.  In the mid-20th century the world’s sea floors were mapped, uncovering volcanic ridges and rifts that formed stitched outlines spanning the globe.  The entire surface of the earth was revealed to be a patchwork of tectonic plates: eight major plates and several smaller ones.  These plates are forever on the move, not unlike conveyor belts or escalators, one end continually subducting at its boundary while new crust emerges from the other, all averaging roughly the speed at which fingernails grow.  Europe and North America continue to pull apart at the rate of about an inch a year; over an average human lifespan the two continents drift some six feet further apart.

Today, if I were to drive a few miles to the east I’d find the beach there, as I would have yesterday, and the day before that, and even last year, or the day I moved here in 1989.  But imperceptible as it was, that beach was a foot or two closer to the African continent in 1989 than it is today.  Multiply the inches by millennia:  250 million years ago, instead of the familiar beach I would have found Senegal there, along with the entire expanse of Africa.  The Atlantic Ocean was born 200 million years ago; before then I could have walked from my house to West Africa in a single afternoon.  Or, turning north, I could have hiked the Appalachians, then at their peak stunningly tall and volatile… truly, not unlike an Argentine mistress.

Nor was Pangaea the starting point.  An inch a year may be a slow process, but the earth is truly ancient; at 4.56 billion years, that rate of movement equates to some 72,000 miles, a distance that would circle the globe nearly three times, or cross the United States 25 times over.  Driving a car at 60 miles per hour without stopping, it would take 50 days to cover this distance.  It’s just under one third of the distance to the moon, but here’s the rub: the moon itself is receding from the earth at an average of about an inch and a half a year, so at the rate of continental drift you’d never catch it!
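Those figures are easy to verify with a few lines of arithmetic (again assuming the essay's steady inch-a-year rate, plus standard round numbers for the distances involved):

```python
# Back-of-envelope check of the cumulative-drift figures (1 inch/year assumed).
INCHES_PER_MILE = 63_360
EARTH_AGE_YEARS = 4.56e9
EARTH_CIRCUMFERENCE_MI = 24_901      # equatorial circumference
US_CROSSING_MI = 2_900               # rough coast-to-coast drive
MOON_DISTANCE_MI = 238_855           # average earth-moon distance

total_miles = EARTH_AGE_YEARS / INCHES_PER_MILE
print(f"{total_miles:,.0f} miles of cumulative drift")
print(f"{total_miles / EARTH_CIRCUMFERENCE_MI:.1f} trips around the globe")
print(f"{total_miles / US_CROSSING_MI:.0f} crossings of the United States")
print(f"{total_miles / 60 / 24:.0f} days of driving at 60 mph")
print(f"{total_miles / MOON_DISTANCE_MI:.2f} of the way to the moon")
```

The arithmetic lands almost exactly on the essay's numbers: roughly 72,000 miles, nearly three laps of the globe, 25 US crossings, 50 days at highway speed, and about 30% of the way to the moon.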

There were probably no fewer than five super-continent arrangements over the course of the earth’s history, Pangaea being just the most recent.  Rodinia is the name given to the super-continent predating Pangaea (circa 1.2 billion to 750 million years ago), and Columbia (also referred to as Nuna) the super-continent before Rodinia (circa 1.8 to 1.5 billion years ago), with each preceding arrangement less clear to us and more speculative than the last, until the very earliest fade into complete speculation.

Importantly, Pangaea was the only supercontinent recent enough to have seen life on land.  Pangaea was green; it saw dinosaurs, it saw life before dinosaurs, it saw almost the entirety of vertebrate history out of the water.  Rodinia and all previous arrangements predated complex life and an ozone layer; as such these earlier supercontinents were featureless cratons, lifeless landscapes baking in unfiltered ultraviolet radiation.

 Sadly, Wegener didn’t live long enough to see his theory vindicated.  In 1930 he went on an expedition to Greenland.  There, in temperatures of minus sixty he died of cold and exhaustion.  He was buried on the ice.  Because of continental drift, his body is now two meters further away from home.

— Michael Mosley, The Story of Science Episode 3: How Did We Get Here, BBC, 2010

History Channel

The Oldest Star in the Sky



     Twinkle, twinkle, little star,
     How I wonder what you are!
     Up above the world so high,
     Like a diamond in the sky.

The lyrics above are from an 1806 poem by Jane Taylor; she could have saved herself some wondering by simply analyzing the star’s spectrum.  How can we know the content of stars that are thousands of light years away?  The answer is deceptively simple:  you can see what they are made of by, quite literally, looking at them.

Stellar spectroscopy is a science developed a few decades after Taylor’s poem.  Quite simply, every star’s light reveals its elemental content when broken down spectrographically.  Our sun is a typical “Population I” star, about 4.6 billion years old, and here’s what its spectrum looks like:


The black lines are called absorption lines, and they reveal the elemental content of the star.  The spectrum is, in essence, a star’s “bar code,” and it is as distinctive as a human fingerprint.  The stars that preceded our sun would have similar, but not identical, signatures, as will the stars that eventually arise out of the death of our sun.  I’ve used the term “omnis cellula e cellula” before; it means every cell comes from a previous cell.  Here’s a variation on that same theme: “omnis stella e stella,” every star comes from a previous star.

The old marketing refrain suggests that a diamond is forever.  Well, it might be less romantic, but you know what is genuinely forever?  Hydrogen atoms.  In the first stellar generation after the Big Bang the only elements in the universe were the gases hydrogen and helium.  The first stars to light up the early universe were deep blue, essentially pure hydrogen stars.  If you ever want to start your own universe, all you’ll really need is hydrogen and gravity; given enough time the rest will follow.  There was no metallicity in the early universe; in other words, no hard materials, and no element beyond number three (lithium) on the periodic table.  The pure stars of that first generation seem to have lived only briefly, burned brightly, and would have been devoid of iron.  It was this first generation that created and expelled the first heavier-than-helium elements into the universe, contributing these materials to the next generation of stars.

In the words of Dr. Phil Plait:  “A star is basically a machine for turning lighter elements into heavier elements.”

 There are cycles to the universe.  Stars form, they live out their lives, they die, they blow off winds and they explode, seeding their material into gas clouds which then form new stars with heavier elements in them, which will repeat the cycle again.  So if you want to think about it that way, the universe is the ultimate recycler.

— Phil Plait, How the Universe Works, s04e01, Discovery Communications, 2015

Thirteen billion years of fusion, nucleosynthesis and occasional beta decay have altered the composition of the material universe by only two percent.  An overwhelming ninety-eight percent of the universe is still hydrogen and helium today.  Let’s say this again: there are 92 naturally occurring elements on the periodic table; two of them comprise an overwhelming ninety-eight percent of the whole, while the remaining 90 make up only the last two percent.  Oxygen, carbon, nitrogen, everything you can see and feel and experience on the earth has emerged out of this fractional two percent of atomic mutation.  Stars are hydrogen furnaces, and every atom from helium to uranium is a product of that creative kitchen.  Every supernova seeds the next stellar nursery, and as each generation of stars is enriched with more carbon, nitrogen, oxygen, silicon and iron than the generation that preceded it, each generation becomes “dirtier.”  Hence, the less metallicity in a star, the earlier it is likely to have formed.

Enter SMSS J031300.36-670839.3.  Not the name I would have chosen, but I guess there’s already a Ralph.  In 2013 this innocuous star within our galaxy, some 6,000 light years away and visible only in the Southern Hemisphere, caught the attention of Stefan C. Keller and the SkyMapper Southern Sky Survey.  It registered the smallest amount of iron ever detected in a star; there appears to be 10,000 times more iron in the earth’s core than in this star, even though the star is a million times the size of the earth (source: Anna Frebel, http://afrebel.scripts.mit.edu/www/1726-2).  Spectrographic analysis revealed what is probably the cleanest and oldest star yet found in our sky, its age estimated at perhaps 13.6 billion years.

[image: SMSS J031300.36-670839.3]

As Keller himself remarked:  “What we’re able to do with this star is, for the first time, say that there was only one star that preceded it.”  SMSS0313 seems to have formed from the debris from one of the very first stars in the universe.  In case you’ve ever wondered, the image above is the most ancient star we’ve found in the sky.

 Theoretical models predict that the truly first … stars [also known as ‘Population III’] were all massive, such that all of them would have died a long time ago.  [This SMSS0313] star is just a whisker away from the elusive Population III, but preserves the conditions in the early universe. Thus, we are getting here as close as one can hope to the moment of first light. 

— Professor Volker Bromm (http://newsoffice.mit.edu/2014/researchers-identify-one-of-the-earliest-stars-in-the-universe-0209)

Images:  Science Channel, Anna L. Frebel

Considering the Odds of Another Technological Species in the Galaxy



Flight has evolved separately several times in the history of life on earth: insects, pterosaurs, birds and bats (mammals) have all taken to the skies, and each evolved the ability independently of the others.  Eyes, too, have evolved several different times over the history of life on earth.  The box jellyfish developed camera-type eyes not unlike the human eye, despite the fact that our two lineages have been separated genetically for some 600 million years.

But exactly how common is… intelligence?

It is a trait that seems to have evolved only once in the four-billion-year history of life on earth.  Like photosynthesis or the evolution of the eukaryotic cell, intelligence seems to be an outlier, a wonderful spelling error in the recipe that changed something less tasty into the cookie.  And certainly, “intelligent life” is open to interpretation… apes, dogs, dolphins, elephants and crows all display intelligence, some even down to the ability to manipulate tools.  And if an extraterrestrial were to come to earth grading on a curve, even our degree of intelligence might not impress.

But let us for a moment reconsider the question to focus on a species capable of a technological society.  In four billion years of life (and four and-a-half billion years of the earth’s existence) a species with an intelligence capable of producing a technological society has arisen only once.  Homo sapiens emerged out of the tangled tapestry of earlier Homo lines some 200,000 years ago: .005% of the history of life on this planet, or .0044% of the planet’s existence.  By any interpretation this is a fluke.  It’s like drawing a royal flush; it won’t happen the first time you’re dealt a hand, but deal often enough and the odds demand that eventually a perfect hand will come up… even if it does take four billion-plus years.

If a technologically-capable species has existed on this planet for only a fraction of a fraction of its existence, here’s the big question as we send out signals and listen carefully for a reply with our giant radio telescopes… exactly how likely is it that we might find intelligence elsewhere?  Make no mistake, the building blocks for water-based life such as we find on earth seem to be everywhere in the universe.  Simple amino acids and water are found in abundance in asteroids and comets, and what are all rocky planets but accretions of these same asteroids and comets?  Comets and asteroids whiz around planetary systems like prepackaged, freeze-dried life-delivery kits!  But if it took three billion years for the eukaryote to evolve on earth; if it took four billion years for complex life to find its footing; if it took four and-a-half billion years for intelligent life to evolve and ponder life elsewhere… then finding intelligent life in the universe is a different measure entirely.  As NOVA’s 2014 episode “Alien Planets Revealed” put it:  “The probability of intelligent life evolving on another planet is perhaps the greatest unanswered question.  But there are hints on earth that intelligence might be the exception, not the rule.”

 Sharks… have ruled the seas for 400 million years, and yet their brains are no larger than peas.  How could it be that after 400 million years sharks have not developed a higher intelligence?…  That tells us something frightening about life elsewhere in the universe:  perhaps the intelligence of which we humans are so proud is not an attribute that is strongly favored in Darwinian evolution.

– Geoff Marcy (http://www.pbs.org/wgbh/nova/space/alien-planets-revealed.html)

And considering an intelligence capable of a technological civilization lengthens those odds considerably.  Just as no human is born an adult, we did not emerge as a species technologically aware.  We used tools from the beginning, but so did our ancestors millions of years ago, so at what point did we actually become a “technological species”?  The term is certainly elastic and open to interpretation, but let’s say for the sake of generosity that we’ve been technological since the invention of writing, some 4,500 – 5,000 years.  This also coincides with many of our early megalithic and monolithic structures, so this seems fair, if not overly kind.  By this metric, the planet earth has possessed an intelligent species capable of technological understanding for one one-millionth of the planet’s existence.  Let me stress that again.  One one-millionth: that’s equivalent to one second in 11 and-a-half days.  It’s like a lifelong shut-in across the street opening his blinds and peering outside for 42 minutes of his 80-year life.

Now apply the same metric to another technological society in the galaxy.  What is the likelihood of two 80-year old hermits (whose lives don’t even necessarily coincide!) peering out the window and spotting one another in the same 42 minute overlap?
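The hermit analogy holds up to back-of-envelope arithmetic; here is a quick sketch, using the 4,500-year writing benchmark and the 4.56-billion-year age of the earth from above:

```python
# Back-of-envelope check of the "one one-millionth" metric.
earth_age_years = 4.56e9
tech_years = 4_500                     # since the invention of writing

fraction = tech_years / earth_age_years
print(f"writing spans 1 part in {1 / fraction:,.0f} of earth's history")

# "one second in 11 and-a-half days":
seconds = 11.5 * 86_400                # roughly a million seconds
print(f"11.5 days = {seconds:,.0f} seconds")

# "42 minutes in an 80-year life":
life_minutes = 80 * 365.25 * 24 * 60
print(f"one-millionth of 80 years = {life_minutes / 1e6:.0f} minutes")
```

One part in roughly a million, a million seconds in 11.5 days, and 42 minutes in 80 years: all three framings are the same ratio.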

But the likelihood would increase if there were more than two.  Since the Kepler telescope began finding exoplanets in 2009, estimates of habitable planets in the galaxy have reached from the stratosphere into the airless region of hyperbole, but let’s take a rather conservative estimate and build a template from it: say there are 10 billion habitable planets in our galaxy.  To be clear, “habitable” should not be confused with “inhabited”; the former merely means capable of hosting life.  So imagine 10 billion habitable planets, and let’s say for argument’s sake that those planets aren’t quite as habitable as they seemed at first flush, and only one in ten actually harbors life.  Even at just 1 in 10, that would still be one billion inhabited planets in the galaxy.

But as we’ve seen, it takes time to evolve intelligence; on earth it took four and-a-half billion years.  Apply the same metric for a technological society as we’ve seen on earth, and only one one-millionth of a planet’s existence might see a species capable of technological advancement and communication.  So essentially, there’s a one-in-a-million chance of an inhabited (similarly-aged) world possessing a technological society right now.  That’s right, we have to divide our number by one million, winnowing the field from one billion to 1,000.

But not all planets are the same age; some in the Milky Way are older, but many are younger, and this is likely to further reduce the number.  One must also consider the idea that some intelligent species might exist on “water worlds” or never have left the water as life on earth eventually did, which means its society would be unable to take advantage of any “technology” as we would recognize it.

Other considerations:

  • Exactly how common is an over-sized moon like ours that was so crucial in stabilizing the earth’s tilt, allowing for long-term climate stability?
  • How common is photosynthesis, a necessary precursor for the production of free oxygen, itself a necessary precursor for complex, multi-cellular life?
  • How common is a churning iron core fueling a magnetic field and plate tectonics?
  • And here’s perhaps the biggest one…  How common is it that water-based life actually leaves water?

All of the above are crucial thresholds, without any of which you would not be reading this.  What are we down to now… 100?  200?  Generously!

But even with such a low estimate of 200 technological civilizations in the Milky Way right now, the same math applied across the larger observable universe still yields some 40 trillion civilizations at this moment.  Forty trillion, right now.
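The whole winnowing exercise fits in a few lines of arithmetic.  Every input below is one of the essay's rough, illustrative guesses, and the 200-billion-galaxy figure is an older common estimate for the observable universe:

```python
# The winnowing argument as plain arithmetic (all inputs are rough guesses).
habitable = 10_000_000_000                # habitable planets in the Milky Way
inhabited = habitable // 10               # suppose only 1 in 10 actually hosts life
concurrent_tech = inhabited // 1_000_000  # technological for ~1/1,000,000 of a planet's life

survivors = 200                           # generous estimate after the further thresholds
GALAXIES = 200_000_000_000                # galaxies in the observable universe (older estimate)

print(f"{inhabited:,} inhabited -> {concurrent_tech:,} concurrently technological")
print(f"{survivors:,} per galaxy x {GALAXIES:,} galaxies = {survivors * GALAXIES:,}")
```

Ten billion becomes one billion, one billion becomes a thousand, a thousand is winnowed to two hundred, and two hundred per galaxy across two hundred billion galaxies gives the forty trillion.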

Still, even if we could bridge the boundaries of physics and talk to them… well, could we really?

 We don’t have conversations with any other species on earth, with whom we have DNA in common!  To believe that some intelligent other species is going to be interested in us, enough to have a conversation…?  ‘Isn’t that quaint…!’

– Neil deGrasse Tyson (https://www.youtube.com/watch?v=K2wp5Qe3JV0)

I know, kind of a bummer to end with, but listen… chimps are between 96% and 98% genetically identical to us, and we can’t even achieve meaningful conversations with our closest existing relatives.  If we are unable to have a significant conversation with any other species on earth — and we’re related to them — then what would suggest a conversation is even possible with some species that is completely unrelated to us?


So You, Your Dog, Your Cat and Your Fish Share Much of Your DNA With an Oak Tree (A Life Primer, part 1)



 Omnis cellula e cellula.  (Every cell comes from a previous cell)

— Rudolf Virchow, 1855

As the HBO series Westworld phrased it so succinctly:  “Evolution forged the entirety of sentient life on this planet using only one tool… the Mistake.”  A simple error in the code, one failure in one transcription, otherwise known as a mutation, is the first turn of a Rubik’s Cube.  These simple errors in copying, given a template vast enough and a timetable long enough, passed on through countless generations of genetic code, can ultimately result in all the diversity of life we see around us today.

“Over the course of four billion years, life has unfolded from minute specks indistinguishable from contemporary bacteria to great and integrated creatures, mushrooms and pterodactyls, whales and oak trees.”  — Franklin M. Harold, In Search of Cell History, p. 223

All mammals on the earth today can trace their lineage back to one small, reptile-like creature living some 200 million years ago.  Before that, all reptiles derived from one slightly divergent line of amphibians, and before that, all amphibians derived from one slightly divergent line of fish.  And in this manner every class of life merges with another further back on the family tree.

The chordate lineage in a nutshell: Worm-like ancestor > Fish > Amphibian > Reptile > Mammal.

How closely related are we to the other branches on our family tree?  Well, the evidence is in the DNA.  On the whole, you share roughly…

  • 99.95% of your DNA with your parents (you are effectively a clone)
  • 99.9% of your DNA with any other person on earth
  • 96-98% of your DNA with chimps
  • 90% with cats
  • 82-84% with dogs (sorry, dog lovers; cats win by a whisker)
  • 80% with cows

As a general rule, you may imagine any mammal to be 75% or more genetically identical to us; reptiles fall under the 75% threshold, then amphibians in the 60s, with fish, insects and invertebrates all lumping together by the low 50s.  Interesting aside:  we humans share about 50% of our genes with a banana.  Yes, you are half banana.  Or a banana is half you… I’m not sure which is more disturbing.  But this is why our bodies more easily digest a hamburger than a salad… frankly, the hamburger is more closely related to us than the salad.  It is easier for our bodies to turn cow into human than to turn plant into human, as the plant is further removed from us genetically.

Parallels Between Language and DNA, and the “Mutt Lineage” of Humans

Evolution: descent with modification.  Every child looks like its parents, but not entirely; the devil is in that .05% of difference!  Multiply this tiny width of a difference by thousands of generations and billions of individuals and the variations stack up.  But evolution isn’t just about slight modifications in the genome.  It also benefits from, and even requires, isolation.  As we’ve learned in the past several decades, the tagline for evolution is less “survival of the fittest” than it is… “location, location, location.”  Evolution and the emergence of new species (or “speciation”) is much more often determined by simple genetic drift in isolation.  Put simply, things left alone will go their own way.

The evolution of DNA may be compared to the way in which human language tends to evolve.  Words appear or disappear in isolated communities, meanings change, colloquial terms and different slang might arise.  The Romance languages that dominate Europe today — Spanish, French, Italian, Portuguese and Romanian — all descended from one language: the common Latin of 1,500 years ago.  All five of these languages emerged out of one in nothing more than the slow-cooker of time and geography.  In this way language families are not unlike genetic haplogroups:  cut off any small population from another and its DNA will gradually drift away from the other population, just as any once-shared language will drift apart, until eventually — much as different accents evolve into different dialects and finally into different languages — the two populations will head their way into entirely separate species if isolated long enough.  Human and chimp lines diverged some 6 million years ago, probably due to the geographic boundary presented by the Great Rift Valley.  The more recent split between the chimp line and the bonobo line 2 million years ago probably came about by a similar geographic split.  If one divergent group is not “brought back into the fold” (or cannot be, given geographic distances), that group simply continues to accrue genetic differences from any other, much as a fork in the road leads to paths that continue to diverge.

Our human species, too, seems to have experienced a split, as Homo heidelbergensis spread out of Africa some 800,000 years ago.  Those heidelbergensis populations in Europe gradually became what we classify as Neanderthals, those in Central Asia became Denisovans, while those remaining in Africa became… well, us.  Given enough time and geographic isolation, one human species just gradually became three.

Our species, Homo sapiens — anatomically modern humans — emerged in Africa about 200,000 years ago, and we began trickling out of Africa some 65,000 years ago.  The story of our species’ journey across the globe has taken place on a far shorter timetable (less than a tenth of that heidelbergensis timeline), not enough to produce different species, but, as anyone may attest, certainly different regional and geographic features.  Some sixty-five thousand years ago ALL Homo sapiens were dark-skinned Africans.  And in case you’re wondering, we do know which African population birthed the rest of the world… as geneticist Spencer Wells and the Genographic Project have demonstrated, all populations on this planet not living in Africa today can trace their genetic markers to the distant ancestors of the Khoisan people (the San Bushmen and the Khoikhoi).  The genetic buck stops there.  This was the face of humanity some six hundred centuries ago, and these were the features that evolution would shape with great elasticity across the vast corridors of the earth.


Okay, so we left Africa, but perhaps the bigger surprise is that it took us so long to do it.  Essentially, three-quarters of our history as a species was spent in Africa.  In the words of paleoanthropologist Chris Stringer: “African populations have the greatest [genetic] diversity, and people outside of Africa are essentially a subset of that variation.” (Lone Survivors, p. 182)  All populations living outside of Africa show a much lower genetic diversity, suggesting one or more “bottleneck effects,” as populations left Africa in very small bands.

A friend recently asked me why the rest of the world would show lower diversity, and I grant it might seem counter-intuitive given the vastly different faces we see today… but the reason Africa features far more genetic diversity is that the entire population of the world living outside of Africa descends from very small emigrations of isolated bloodlines.  Some 99% of Africa’s population remained in Africa; the rest of the world today is made up of the 1% that left.  It’s like pouring a few jellybeans out of a jellybean jar; the beans you’ve poured out are but a small sampling and would not include all the colors still residing in the jar.  Today a European and an Asian person may reproduce, but the resulting child still wouldn’t possess the diversity that an African child would from parents of neighboring villages.  For those of us whose ancestors left Africa, the sands of the DNA hourglass invariably narrow at that 65,000-year-old bottleneck, while the African continent has continued a freer exchange of the older genes going back 200,000 years or more.
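The jellybean analogy is easy to simulate; here is a minimal founder-effect sketch (the counts are arbitrary, chosen only to make the sampling effect visible):

```python
import random

# Founder-effect sketch: a small handful poured from a diverse jar
# can never carry all of the jar's colors with it.
random.seed(42)
colors = [f"color_{i}" for i in range(40)]       # 40 distinct "lineages" in the jar
jar = [random.choice(colors) for _ in range(10_000)]
handful = random.sample(jar, 12)                 # a small emigrating band

print(f"colors in the jar:     {len(set(jar))}")
print(f"colors in the handful: {len(set(handful))}")
```

With only twelve beans drawn, the handful can carry at most twelve of the jar's forty colors, no matter which twelve come out; that is the bottleneck in miniature.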

 The majority of the genetic polymorphisms found in our species are found uniquely in Africans.  Europeans, Asian and Native Americans carry only a small sample of the extraordinary diversity that can be found in any African village.

— Spencer Wells, The Journey of Man, p. 39

 Given its larger human populations and its greater continuity of occupation, Africa has probably always had more genetic and morphological variation than other parts of the inhabited world, giving greater opportunities for biological and behavioral innovations to both develop and be conserved.

— Chris Stringer, Lone Survivors, p. 219

To revisit the parallels between the evolution of language and the evolution of DNA, the ancient “Click” languages spoken by the Khoisan people show a very high complexity, a complexity that mirrors the higher genetic diversity found in their DNA.  “English, for example, has thirty-one distinguishable sounds used in everyday speech (two-thirds of the world’s languages have between twenty and forty), while the San !Xu language… has 141.  While it is uncertain exactly which forces govern the acquisition of linguistic diversity, this figure is certainly suggestive of an ancient pedigree — in exactly the same way that genetic diversity accumulates to a greater extent over longer time periods.” (The Journey of Man, p. 56)  Dr. Chris Stringer makes a similar point in his Lone Survivors.  “Africa has the largest number and diversity of phonemes, and that number decreases as we move away from Africa.” (p. 218)  The language essentially mirrors the DNA.

But importantly, as various populations of our species migrated out of Africa they seem to have encountered their long-geographically separated cousins.  Neanderthals and Denisovans were still to be found in the Caucasus, Europe and Siberia, and as it turned out they had a role to play in our narrative.  As the DNA shows us… where populations met, they exchanged genes.

Currently, three human species have had their DNA sequenced and their genomes decoded: the Homo sapiens genome was finalized in 2003, the Neanderthal genome was published in 2010 and the Denisovan in 2012.  There may have been as many as 20 human species over the past two and a half million years, but we have not yet retrieved DNA from other, older species.  What we have learned in the past half decade, though, is that today’s world population shows an eclectic mix of archaic admixtures, added to the genetic mix within the past 50,000 years.  Anyone of European or Asian descent today averages 2-3% Neanderthal DNA, while Tibetan and Melanesian populations may show Neanderthal DNA… and an additional 2-5% Denisovan DNA.

It is interesting to note that, in the case of Neanderthal DNA, the roughly 3% in my genome is not necessarily the same as your 3%, so we actually carry different puzzle pieces buried in our DNA.  If we were to compile the DNA of everyone alive today, it is estimated that we could reassemble roughly 30% of the entire Neanderthal genome.  Almost one-third of the Neanderthal genome is still active and being passed down through human populations today.  The former blanket statement that Neanderthals went extinct must be revisited; it is about as accurate as saying the dinosaurs went extinct while ignoring the bird singing at your window.

 Outnumbered 10 to one by modern humans, Neanderthals weren’t hunted to extinction by a supposedly superior species.  They were bred out, genetically swamped.

— PBS, Nova: Decoding Neanderthals, 2013

Broad swaths of the world’s population today carry a legacy of Neanderthal and Denisovan DNA across the globe, to continents neither species ever saw or imagined, not unlike the lost luggage of a dead passenger traveling the globe from terminal to terminal long after his burial!

Even the African continent retained pockets of archaic humans until relatively recently.  And wherever populations met, population-mixing seems to have occurred.  In the words of Chris Stringer, “sex happens.”  From his 2012 book Lone Survivors: “[Current] African populations also contain about 2 per cent of ancient genetic material, and this was input some 35,000 years ago, not from Neanderthals or Denisovans but from an unknown archaic population within Africa itself, which might have been separate from the modern human lineage for some 700,000 years.” (p. 250, emphasis added)  Whether heidelbergensis, erectus, or something more obscure or exotic within the pockets of isolation in Africa thousands of years ago, this finding just underscores the point that between Europe, Asia and Africa, gene pools separated by hundreds of thousands of years found a way to reconnect, even if only to re-establish a toehold.

The truth is, though we are the last human species on earth today, there was never any one lineage that resulted in the “us” of today.  The lines that lead to us form a porous, fluid and tangled weave; we are a mosaic of countless Homo and Australopith contributions.

 Some of these evolutionary experiments died out, others came together and interbred.  The ebb and flow of these genes through these groups was probably so complex that we may have to give up hope of discovering a simple linear evolution.

— PBS, Nova: Dawn of Humanity, 2015

There may have been as many as 20 Homo species in Africa over the past two million years, and it seems likely that every one of them contributed at least a little to the bloodline we see today.  There are more than seven billion humans in the world now, but over the course of our ancestral history Australopithecines and early Hominins existed in perilously low densities.  We spent most of our history as an endangered species, not unlike gorilla populations today.  Think of these early human species as living in small prides, very much like chimp and gorilla communities today: small, isolated collectives with little interest in outsiders other than an occasional sexual encounter and exchange of genes.  The smaller these isolated groups, the more any outsider DNA would impact the gene pool.

The descent of Homo sapiens is not unlike someone’s genealogy of today, where despite your single surname there was no single line that made YOU; if you go back far enough, genealogical lines criss-cross, separate and actually merge again.  There are multiple rivers that flow into the gene-pool of you, and the further back you trace the streams, the more their courses branch, split, reconnect, narrow and broaden in unexpected ways.  Much like an all-female line resulting in the extinction of a surname, the “extinction of a species” is an arbitrary designation, because the DNA simply carries on, contributing to an individual or species no matter what “name” they hold.  In this case, “extinction” simply juggles the proportions of the genetic mix.  Minute genetic contributions of archaic humans live on within our species today.  Homo sapiens may be “mostly derived” from Homo heidelbergensis in the same way that someone might be “mostly Irish” in heritage… the reality is more complicated.

But geography and climate have played a role too, reshaping the human form outside of Africa over the past 65,000 years as pockets of Homo sapiens spread across the globe.  If the gene pool is the hardware of DNA, then think of the genetic responses to geography and climate as the software patches.  As populations of our species moved north their skin gradually lightened to admit more of the ultraviolet light needed to produce vitamin D, and conversely those who had been north for thousands of years and then headed south again (Polynesian populations, and Indigenous Americans — both North and South) began darkening once more.  Different features arose in different regions and formed the physical and cultural distinctions we still see today.

 Genetic variation is geographically structured, as expected from the partial isolation of human populations during much of their history.


The worldwide human species that we see today is merely a snapshot, but it’s a perfect test case of 65,000 years of genetic drift and varying degrees of geographic isolation.  Epicanthic eye folds, lactose intolerance, skin tone and body stature: these are all just different evolutionary responses to different regional and cultural stimuli.  Humanity has spread from one continent to six, and in the process the continents have reshaped humanity.

My mom asked me once if evolution is over and I replied no; evolution never stops.  As long as reproduction through sexual means continues, “descent with modification” is an inevitable result.  After all, if each generation represents a single random twist of the Rubik’s Cube, it’s only a matter of time before any two cubes cease to look alike.  Consider for a moment blue eyes, a cosmetic detail well under a thousand generations old.  Blue eyes are a very recent adaptation, tied to a distinct mutation in the OCA2 gene that first appeared between 6,000 and 10,000 years ago, and we can actually trace this mutation to the geography of the Black Sea region.  “Blue-eyed Humans Have a Single, Common Ancestor” was a headline in January, 2008.  Blue eyes are an example of a genetic variation recent enough to be caught in the act, as it were, between being a regional actor and a global one, having played a prominent role in the theater of Europe and Scandinavia, but largely not yet introduced into indigenous populations of Africa, Asia or the Americas.  It might be odd to think of the spread of blue eyes in the same way one might think of the spread of a sexually transmitted disease, but it’s not an entirely inaccurate analogy.  Mutations are spread by sex.  The reality is that any two unrelated people of Western heritage today will likely share a common ancestor within the past 600 years, but a person of Western heritage and a person of East Asian heritage might not share a common ancestor before 40,000 years ago.  So mutations like the one in gene OCA2 have lacked the opportunity or timetable to enter that larger worldwide population before the advent of air travel.
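A quick back-of-the-envelope calculation shows why recent common ancestors are almost guaranteed within a population; the 25-year generation length below is an assumed average:

```python
YEARS_PER_GENERATION = 25  # assumed average; real figures vary
generations = 600 // YEARS_PER_GENERATION  # 600 years back -> 24 generations

# Your family tree has 2**n "ancestor slots" at generation n...
ancestor_slots = 2 ** generations
print(f"{generations} generations back: {ancestor_slots:,} ancestor slots")
# ...far more slots than the population of most medieval regions, so the
# same individuals must fill many slots (pedigree collapse), which is why
# two unrelated Westerners so quickly converge on a shared ancestor.
```

Roughly 16.8 million slots just 600 years back: the genealogical lines have no choice but to criss-cross and merge.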

Eye color, earlobes, toe lengths and wisdom teeth: these are all genetic features that are variable today.  These are transitional mutations.  The more varied a species, the more successful it is, and the larger and more successful a species, the more variations it produces.  Evolution is a self-accelerating cycle.  Repetition leads to variation, and the larger the pool of repetition (i.e., the population) the more variations inevitably appear.  In short, not only are we still evolving, but that evolution is accelerating.

The Milestones that Made Us Human…

  • 8 million years ago — Genus Gorilla (the ancestors of today’s gorillas) diverges from the line that would later produce chimps and humans.
  • 6 million years ago — The genetic lines that will comprise chimpanzees (genus Pan) and humans (genus Australopithecus) diverge from one another.
  • 3.2 million years ago — Today’s most famous Australopithecus specimen, Lucy, lives and dies without fanfare in East Africa.
  • By 3 million years ago — Studies examining the divergence between gorilla lice and human pubic lice suggest that the Australopithecines have lost most of their body hair by this point in time.
  • 2.8 million years ago — With a larger brain and a clear ability to fashion tools, genus Homo emerges out of the Australopithecus line.
  • Between 2.8 – 1.8 million years ago — A growing variety of Homo species, living in isolation much like communities of gorillas today, gradually absorb or simply replace Australopithecines throughout the African continent.
  • 2 million years ago — Homo erectus emerges, beginning a reign of two million years… representing the longest-lived and most successful human species (to date!).
  • 1.8 million years ago — Erectus begins migrating out of Africa, marking what is probably the first human exodus.
  • Between 900,000 and 700,000 years ago — Homo heidelbergensis emerges, probably largely out of the African erectus/ergaster line.  Heidelbergensis populations spill out of Africa in another wave of human expansion.  Over the next several hundred thousand years these populations outside of Africa diverge into Neanderthals and Denisovans.
  • 200,000 years ago — Anatomically modern humans — Homo sapiens (yes, our species) — emerge in East Africa, probably largely out of the heidelbergensis populations in Africa.  If this seems recent, it is… Homo sapiens arise out of the most recent 7.1% of the history of the Homo lineage.  So, 92.9% of human history predates our species, and for that matter, writing (recorded history) has only existed for 0.178% of human history.  Oh, somebody drag me away from this calculator….
  • Between 100,000 – 90,000 years ago — There is spotty evidence to suggest Homo sapiens are trickling out of Africa in early migrations.
  • Between 70,000 – 65,000 years ago — The ancestors of today’s world-wide genetically traceable population leave Africa, but this exodus probably accounts for no more than 1% of the total Homo sapiens population at the time; the bulk of our species (some 99%) remains in Africa, absorbing archaic human populations.
  • 40,000 years ago — There are no fewer than 5 human species living outside of Africa at this point in time:  Homo erectus populations still surviving in eastern Asia, Denisovans in central Asia, Neanderthals in Europe, and Homo sapiens spreading among them all.  Genetic evidence suggests interbreeding occurs sporadically (if not regularly) between Homo sapiens and the other three species, effectively and eventually absorbing them.  The fifth human species, Homo floresiensis in Indonesia, is a confusing outlier that we in the 21st century don’t yet understand (Is this a Homo erectus variant?  Does it represent a heretofore unknown Australopithecine emigration to Asia?  Did Homo sapiens ever encounter it?  We simply don’t know).
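The percentages scattered through the timeline above are simple ratios and easy to verify; the ~5,000-year span for writing is an assumed round figure:

```python
HOMO_YEARS    = 2_800_000  # genus Homo emerges, per the timeline above
SAPIENS_YEARS = 200_000    # Homo sapiens emerge
WRITING_YEARS = 5_000      # recorded history (assumed round figure)

sapiens_share = SAPIENS_YEARS / HOMO_YEARS * 100
print(f"Homo sapiens occupy the most recent {sapiens_share:.1f}% of Homo history")
print(f"{100 - sapiens_share:.1f}% of human history predates our species")
print(f"writing covers just {WRITING_YEARS / HOMO_YEARS * 100:.2f}% of human history")
```

Which reproduces the 7.1%, 92.9% and ~0.18% figures quoted above.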

The Bacteria That Built the World (A Life Primer, part 2)



 If all animals vanished, most bacteria would still live on, but if all bacteria disappeared, we would die quickly.


 Cells are the atoms of life, and life is what cells do.

— Franklin M. Harold, In Search of Cell History, p. 2

Above we were considering the evolution of our species, but now let’s go further back into the dressing rooms of life’s history.  You have parents, you have grandparents.  You have great-grandparents.  Like a repeating template, every cell in your body has a similar heritage.  Like a person, no cell is created out of the blue; it’s created from the cell that came before it.  “Omnis cellula e cellula.”  Every cell in your body represents an unbroken chain of cellular evolution going back four billion years.  Each and every cell that makes up your body claims an unbroken heritage dating back one-third of the age of the universe.  And so does every cell in the grass of your lawn.  And your favorite pet.  Plants and animals share a common ancestry; only unlike chimps — from whom we humans diverged about 6 million years ago — plants and animals (or what would much later become plants and animals) diverged about 1.6 BILLION years ago, while both existed only at the microbial stage.  Really, any understanding of life on the scale of deep time starts with two basic facts, and let’s spend a moment or two to consider the first:

  • The mind-blowing majority of the history of life on earth was microbial.  Life on earth began some 4 billion years ago, and for 85% of those 4 billion years the most advanced life on the planet was single-celled: first the Prokaryotes (bacteria and archaea), and later the earliest Eukaryotes — our ancient and primitive single-celled ancestors.

 Life appeared on Earth within a few hundred million years, but for billions of years it was restricted to single celled organisms.


In the 1970s the groundwork was laid for our current scientific classification, the Three Domains of Life:  Bacteria and Archaea — the two Prokaryotic (and primary) domains, which have evolved very little in four billion years — and Eukarya, a minor offshoot whose cells seem more closely related to Archaea than Bacteria, though conversely the mitochondria within each cell seem more closely related to Bacteria than Archaea.

Eukarya is the domain that encompasses all complex life — all people, all plants, all animals, any and all life that is big enough for you to see (and some you can’t) comes from this small, modest and much-mutated branch.  We’re an extended footnote in a thousand-page book.  Eukarya represents a fraction of the life on earth, but it represents all the life that can be seen without a microscope.  Eyes, ears, limbs, backbones, hair, scales and toes — BRAINS, for that matter — the bacteria and single-celled life that dominate the biosphere have none of these things; these are tools and inventions only made possible by the joining of cells into complex structures.  Eukaryotes have the freedom to link cells together like a kid tossing together Legos.  With Eukarya, complexity built upon complexity… in contrast to bacteria, which have spent four billion years essentially at a flatline.

The consensus today is that Eukarya represents an unlikely merging of the two Prokaryotic types, a merger that occurred only once.  Anyone of a certain age will remember those old Reese’s Peanut Butter Cups commercials, speculating about the origins of the candy as two people walking on the street collide and exclaim, “Hey, you got chocolate in my peanut butter!”  “Hey, you got peanut butter in my chocolate!”  Pause.  Taste.  Both cry out, “YUM!”  The analog here is that the mitochondrion is the peanut butter that gave an unexpected zing to the chocolate.

According to Wikipedia, “the most accredited theory at present is endosymbiosis… The endosymbiotic hypothesis suggests that mitochondria were originally prokaryotic cells, capable of implementing oxidative mechanisms that were not possible for eukaryotic cells; they became endosymbionts living inside the eukaryote.” (https://en.wikipedia.org/wiki/Mitochondrion)  In this symbiotic relationship the mitochondria ended up jettisoning most of their own genes, in the process becoming the power source for the eukaryotic cell.  In the words of Nick Lane, with mitochondria — the “batteries” that power each eukaryotic cell — “Eukaryotes have 100,000 times more energy per gene than bacteria, which allows them to support far larger and more genomes and make far more proteins from each gene.” (https://www.youtube.com/watch?v=PhPrirmk8F4)

Given how early life on earth began, there is a strong argument that simple life is an easy threshold to cross… but the fact that Eukaryotic life seems to have emerged only ONCE in FOUR BILLION YEARS suggests that complexity may be the truly “narrow needle to thread.”  Achieving complexity, then, is akin to a lottery that, played endlessly and relentlessly over two and-a-half billion years, eventually produced winning numbers.  The sheer immensity of time eventually gives rise to the unlikely.  Once in four billion years is a very lucky stroke, but on an immense enough scale of time all odds eventually fall away.
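The lottery intuition can be made concrete with the standard "at least once" formula, P = 1 - (1 - p)^n; the per-trial probability below is purely illustrative:

```python
# The odds of a rare event happening at least once in n independent trials:
# P = 1 - (1 - p)**n.  The per-trial probability p here is purely illustrative.
p = 1e-9  # a one-in-a-billion chance per "trial"
for n in (10**6, 10**9, 10**10):
    at_least_once = 1 - (1 - p) ** n
    print(f"{n:.0e} trials -> P(at least once) = {at_least_once:.3f}")
```

However vanishingly small p is, pushing n high enough drives the probability toward certainty; that is the whole argument of deep time in one loop.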

So to conclude this first point, it is only in this last 15% of life’s history on earth (600 million years) that significant multicellular life has arisen.  And if it seems sobering that 85% of life’s history was simple, unchanging and unicellular, here’s the second fact to consider…

  • 90% (NINETY percent!) of the history of life on earth was confined to the oceans alone.  For nine-tenths of the earth’s history, life was — by definition — a water event.  Even today, there are life forms that can exist in the absence of oxygen… there are life forms that can exist without sunlight… but importantly, there are NO LIFE FORMS ON EARTH that can exist without water.  WATER is the common solvent, water is the necessary cocktail-mixer through which all life on earth flows.  Life began, thrived and perpetuated for billions of years in the protective alchemy of the oceans.  The continents went unused for four billion years; barren and lifeless granite wastelands, untouched real estate for nine-tenths of earth’s existence.

And it’s not just that the land was simply without advantage; land was an impediment, a barrier… the environment outside of the water was actively hostile to life.  Unlike today, the early earth (the earth of the Hadean and Archean eons) did not possess an ozone layer to permit life outside of the water.  Ultraviolet radiation would have cooked anything outside the protective haven of the water.  What created the ozone layer?  An abundance of oxygen.  It was oxygen that allowed life to flourish on land, just as it was oxygen that allowed for the development of complexity.  Oxygen was the catalyst that allowed life to overcome both of the hurdles listed above.  And where did this abundance of oxygen come from?  Well, in a carefully choreographed dance, it was created by… life.  Life — and the conditions for life — are basically a feedback loop.

We live in an oxygen-rich atmosphere today (the earth’s cocktail is a 78/21 mix of nitrogen and oxygen), but the earth of the Archean Eon — which began about 4 billion years ago, and lasted until 2.5 billion years ago — was not an earth we would recognize, even if we could survive our first breath of methane and carbon dioxide.  There was no free oxygen in the atmosphere to begin with.  Oxygen may be the third most abundant element in the universe, but it is rarely found by itself — oxygen is a sticky atom and is usually found bonded to other types of atoms.  The free oxygen in our atmosphere is a byproduct of life, and our ozone layer is a construction of biology.  Those countless sci-fi movies that feature people breathing the air on uninhabited planets are overlooking simple chemistry:  an oxygen-rich atmosphere would not be likely without life.  Free oxygen is a bio-signature, and as we look out into the universe today, any atmosphere containing it would be a strong indication of an inhabited world.

 The presence of oxygen in a planetary atmosphere is the litmus test of life:  water signals the potential for life, but oxygen is the sign of its fulfillment — only life can produce free oxygen in the air in any abundance.

— Nick Lane, Oxygen: the Molecule That Made the World, p. 2

The initial rise of oxygen in the atmosphere is referred to today as the Great Oxygen Catastrophe, and anyone who would challenge the notion that any one species can have an impact on the earth’s climate need only look back to what simple bacteria did two-and-a-half billion years ago.  The history of the earth is written in its geology; rocks tell the story of life on earth and can tell us a lot about changes in the atmosphere.  There have been several Extinction Events in the history of life on earth, but this first one was the only one caused by LIFE itself…

When blue-green algae ruled the world…

 The main source of oxygen in the atmosphere is oxygenic photosynthesis, and its first appearance is sometimes referred to as the oxygen catastrophe.


“Oxygenic photosynthesis seems to have been invented but once in all of earth’s long history, by precursors of the bacterial clade that we call the cyanobacteria.”  (Franklin M. Harold, In Search of Cell History, p. 158)  When, exactly, single-celled cyanobacteria first emerged is a little unclear in the fossil record, but what is clear is that by three billion years ago their colonies were prospering in shallow oceans.  Sure, bacteria, big deal… but these were bacteria that photosynthesized.  Photosynthesis.  In the words of Professor Brian Cox, “the task is so complex, that unlike flight or vision, which have evolved separately many times during our history, oxygenic photosynthesis has only evolved once.” (Wonders of Life, Ep. 5, BBC, 2013)  What made cyanobacteria such a game changer was that in finding a way to utilize photosynthesis as a metabolic process, they unleashed as a byproduct… oxygen.  In school one infers (somewhat erroneously) that plants create oxygen, but really, the only place oxygen atoms are created is within the fusion processes of stars — what plants do is release oxygen.  Plants take in carbon dioxide and water.  What is carbon dioxide?  It’s CO2 — one carbon atom bonded with two oxygen atoms.  Plants keep the carbon to build sugars and set oxygen loose (strictly speaking, the oxygen gas released comes from the splitting of water molecules, not from the CO2 itself).  Plants free oxygen.  Multiply this effect by the trillions over millennia, and you have the makings of a world-changing atmospheric event.


Simply, cyanobacteria seem to have overwhelmed the biosphere.  Over a span of hundreds of millions of years, countless colonies and generations of this blue-green algae released free oxygen into the atmosphere in amounts that were then toxic to other life forms.  A 2007 NASA study suggests the tipping point came about 2.5 billion years ago; other sources place it closer to 2.3 billion years ago.  Regardless, competing anaerobic organisms (methane-producing Archaea, to whom oxygen was toxic) went extinct in massive numbers.  It was basically a “metabolic format war,” as unfathomable numbers of cyanobacteria polluted the biosphere of the earth over trillions of generations.  What resulted was an earth atmosphere rich in oxygen… ultimately rich enough that an ozone layer formed, making it possible, for the first time, for life to leave the water.

“The rise in atmospheric oxygen was far from uniform [over time],” as Franklin Harold observed (In Search of Cell History, p. 158).  “There seem to have been at least two discrete steps, the first around 2.45 billion years ago and a second right at the beginning of the Phanerozoic, 540 million years ago.”  In short:  “Atmospheric oxygen attained its present level just in time to support animal evolution.”  So, while complex life may have taken a long time in the record to appear, there’s a good argument to be made that it may have occurred at the earliest possible opportunity.

 Multiple lines of evidence from evolutionary biology, geochemistry, and systems biology build a compelling case for a central role of O2 in the evolution of complex multicellular life on earth.

— Victor J. Thannickal, “Oxygen in the Evolution of Complex Life and the Price We Pay” (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2720141/)


It is somewhat startling (and humbling) to realize that it is only in the past 10% of the history of the earth that life colonized land.  And in essence, we never fully left the water; we are water-based life, and as such, we are still tied to the water… we simply evolved to carry it with us.  That’s why your body is made mostly of water, that’s why you’d die faster from dehydration than from hunger.  We are essentially walking, talking water balloons, and water balloons that require refilling every day… that’s what you do every time you go to the refrigerator and grab a Zima.  Hey, do they still make Zimas?  What about Snapple?  Well, anytime you feel the need to grab a soda or a juice, you’re just paying the price for the evolutionary bargain that our distant ancestors struck 400 million years ago as they left life’s native habitat.

As to our friends and relatives, the cyanobacteria, they still exist today, in smaller numbers.  Stromatolites, the rock-like physical structures their colonies leave behind, dot shorelines in remote locations of South Africa and Australia.  Cyanobacteria — single-celled blue-green algae — once dominated the biosphere and changed the world with their waste.  They unintentionally exterminated much of their competition and fathered the dynasty that still stands today.  In the words of Neil deGrasse Tyson, “you can argue that they were the most disruptive creature ever to live on the face of the earth.” (StarTalk, ep. 2.10, National Geographic, 2015)  Simply, our entire domain of life today — every plant and animal on earth right now — owes its existence to cyanobacteria and the Oxygenation Event.

 All of this, [our biosphere today exists] because of a waste product pumped out by microscopic bacterial slime, operating on an industrial scale in those ancient seas.  The empire of the stromatolites was, without doubt, the greatest in the history of the earth. Forget the Romans, the Persians, even the dinosaurs. These humble bacterial mounds dominated the planet for over 2,000-million years, and they engineered its greatest transformation.

— Richard Smith, http://www.pbs.org/wgbh/nova/earth/australia-first-years.html#australia-awakening