The History and Future of Technology: Can Technology Save Humanity from Extinction?
About this ebook

Eminent physicist and economist Robert Ayres examines the history of technology as a change agent in society, focusing on its societal roots rather than treating technology as an autonomous, self-perpetuating phenomenon. With rare exceptions, technology is developed in response to societal needs that have evolutionary roots and causes. In our genus Homo, language evolved in response to a need for our ancestors to communicate, both in the moment and to posterity. Without this kind of organization, a band of hunters had no chance in competition with predators that were larger and faster; such communication eventually gave birth to writing and music. The steam engine did not leap fully formed from the brain of James Watt. It evolved from a need to pump water out of coal mines, driven by a need to burn coal instead of firewood, in turn due to deforestation. Later, the steam engine made machines and mechanization possible. Even quite simple machines increased human productivity by a factor of hundreds, if not thousands. That was the Industrial Revolution.
If we count electricity and the automobile as a second industrial revolution, and the digital computer as the beginning of a third, the world is now on the cusp of a fourth revolution led by microbiology. These industrial revolutions have benefited many in the short term, but devastated the Earth’s ecosystems. Can technology save the human race from the catastrophic consequences of its past success? That is the question this book will try to answer.
Language: English
Publisher: Springer
Release date: Jul 27, 2021
ISBN: 9783030713935


    Book preview

    The History and Future of Technology - Robert U. Ayres

    © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

    R. U. Ayres, The History and Future of Technology, https://doi.org/10.1007/978-3-030-71393-5_1

    1. Introduction

    Robert U. Ayres, Emeritus Professor of Economics, Political Science, Technology Management; Novartis Chair Emeritus, INSEAD, Fontainebleau, France

    In 1979, Isaac Asimov published his last major work: A Choice of Catastrophes: The Disasters that Threaten Our World (Asimov, 1979). It was on my library shelf and, given the title of this book, I thought I should reread what he wrote 40 years ago. Asimov started with a reminder that the word catastrophe, from the Greek, meant to turn upside down: the reversal at the end of a Greek play, whether tragedy or comedy. In his book, he considered five possible ways in which human life might end. The first class in his typology was about how the universe itself might end, a trillion years from now. The second was about how the solar system might end, e.g., due to the death of the sun from old age or being too close to a supernova explosion. The third was about how the Earth might become uninhabitable, e.g., by glaciation, crustal shift, asteroid collision, or demagnetization.

    This book is mostly about Asimov’s fourth and fifth classes of catastrophes. The fourth class was about how human life on the Earth might become impossible, leaving other forms of life, but without us. The fifth class was about possible endings for what we call civilization, viz. condemning our descendants to a primitive life—solitary, poor, nasty, brutish, and short—for an indefinite period. Nuclear war or new diseases worse than COVID-19 could be catastrophes of the fourth kind. Overpopulation, famine, the next ice age, peak oil, the ozone layer, cloning, malignant artificial intelligence, and toxic industrial pollution would be in the fifth class.

    Curiously, climate warming and rising sea levels were not on Asimov’s list, published in 1979. Things he worried about, like running out of oil, no longer worry us, while things he did not think about, like the accumulation of greenhouse gases in the atmosphere, are almost upon us. My point is that—as many have pointed out—forecasting is difficult, especially about the future.

    Should I start this book by defining technology? Some readers may feel the need but I do not feel a strong compulsion to do it. Usually, a change in technology follows a discovery or an invention, and we know it when we see it. However, a case came up in the late stages of writing this book that gave me pause. I had to decide whether the use of money is a technology. It is a social system, for sure. There are technological components to the money system: coins, checks, credit cards, bar codes, banks, traveler’s checks, cash registers, cash machines, even cryptocurrencies. But money is more than that. It is also a measure of wealth, a unit of value, a medium of exchange, and an incentive to work. Is it technology? I finally decided (admittedly arbitrarily) that it is not.

    Another puzzle: Near the end of the eighteenth century, Thomas Malthus wrote an essay about the misfit between the power of population and the power of the Earth to produce subsistence for man (Malthus, 1798 [1946]). Why did Malthus, and most of his critics (including Karl Marx), fail to see what seems so obvious in retrospect: that substituting machines (powered by steam, later by electricity) for human muscles could increase productivity by factors of hundreds, even thousands? Why did they not see that this increase in productivity, by increasing the size of the pie, could make everybody better off, not only the capitalists?

    It is commonplace now to say that technology drives the growth of material well-being—the global economic system—not to mention being a carrier of culture. Optimists think that the Industrial Revolution, especially the power of steam (and electricity), has proved that Malthus was wrong. An influential group of anti-Malthusian economists, exemplified by Julian Simon, has argued forcefully that the contest is over and that technology will be our savior (Simon, 1977; Simon, 1980). I question that conclusion.

    The real societal problem back in Malthus’ time was essentially the same as the problem today, viz. inequality and maldistribution, not lack of production per se. A different problem confronts us now: The size of the pie may be approaching limits. Julian Simon et al. may be wrong. For several decades, in the most industrialized countries, the rich have been increasing their share of the global pie at the expense of the rest of us. And the solution to that problem—if there is any solution—must be political, not technological. At the beginning of the Industrial Revolution (c. 1750), India and China were as industrialized as Britain. By 1900, India’s level of industrialization was only 1% of the level in Britain (Ashton, 1949, p. 129). What are we missing today that future historians will think was obvious?

    For decades, I have worked on the interfaces between science, technology, material flows, and—finally—economic theory again. A question that arises again and again is this: How much of our current malaise is caused by technological change? And how much does our future depend on developing new technologies to undo or counteract the old ones?

    In one sense, this book is simply a recapitulation of human history, where technology plays an important, but largely invisible role in determining what happened (or did not happen) at every stage. It provides both capability and constraint. We have started modifying nature itself and doing so in ways that threaten our long-term survival as a species. So, the book is an attempt to clarify, point by point, the factors motivating invention, the limits of invention and the new problems created by invention.

    This book is probably my last (and hopefully best) effort to explain the past and forecast the future of our civilization from the standpoint of technology. The reality is complex, but I think there are some major themes worthy of elaboration. One is the cross-fertilization—call it feedback—from technology to technology. One constant is the changing source of useful energy (technically, exergy): from firewood to charcoal, then coal, whale oil, rock oil and natural gas, kerosene, town gas, to hydroelectricity, nuclear electricity, photovoltaics, and renewables. The progression from candlelight to LEDs illustrates this theme. The history of medicine, from miasma to germs, from antiseptics to antibiotics to vaccines and monoclonal antibodies, is another illustration. The so-called analog-to-digital revolution leading to artificial intelligence may be a third. Are there more?

    References

    Malthus, T. R. (1946). An essay on the principle of population as it affects the future improvement of society. In L. D. Abbott (Ed.), Masterworks of Economics: Digest of Ten Great Classics (pp. 191–270). New York: Doubleday. Original edition, 1798.

    Simon, J. L. (1977). The economics of population growth. Princeton, NJ: Princeton University Press.

    © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

    R. U. Ayres, The History and Future of Technology, https://doi.org/10.1007/978-3-030-71393-5_2

    2. Fire and Water: Technologies Extending Nature

    Robert U. Ayres, Emeritus Professor of Economics, Political Science, Technology Management; Novartis Chair Emeritus, INSEAD, Fontainebleau, France

    Abstract

    This chapter is about how our ancestors came down from the trees, learned to walk (and run) on two limbs, leaving the other two free for grasping tools and (later) for actions requiring dexterity. Living in groups, on the ground, created a need for verbal communication, which required more brainpower and bigger brains. The preservation of fire from natural sources led to a technology for the making of fire and then to more uses of fire, not only for warmth in winter, but also to make pottery and, with that, cook foods that were otherwise inedible. The use of seeds for food was the key to agriculture and (later) animal husbandry and thence to water management and irrigation. The need for mobility led to the wheel and the ship.

    2.1 Bipedalism: Down from the Trees

    When Charles Darwin proposed his theory of evolution by natural selection (Darwin 1859), the differences between humans and other animals appeared to be so great that most people, especially religious leaders, could not imagine any gradual evolutionary path from our hominin ancestors to us. For that reason, among others, Darwin’s theory was rejected by many religious authorities in favor of a theory of special creation, symbolized (in the Judeo-Christian part of the world) by the Biblical story of Adam and Eve. However, now that we know a lot more about human history, the evolutionary path is no longer so hard to imagine.

    Most of the research on the missing link prior to the 1960s focused on anatomical differences among living apes and two-legged species (hominins), attempting to identify the ape species nearest to humans and to infer the most likely appearance of the last common ancestor (LCA) of humans and the other apes. Research covered anatomical features related to knuckle-walking on all fours, wrists suited to swinging under tree branches, opposable thumbs suited for grasping, finger dexterity, the thickness of tooth enamel, the flexibility of knees and feet, and the presence or absence of oestrus (absent in humans and orangutans, present in all other mammals).

    Today, there is much more information about our genetic history. Textbooks in anthropology now usually start with the emergence of the genus Homo and its predecessor, the genus Australopithecus, six or seven million years ago. They lived mainly in Africa; they walked on two legs but could probably climb trees. They had small brains, 380–430 cc in volume. Two or three million years ago, Homo habilis, a physically smaller species with a larger brain (600 cc), emerged. Then came Homo erectus, about two million years ago. These apes spread over much of South Asia, including China. Their brains were around 1000 cc in volume, a major increase over Australopithecus or habilis. A scenario to explain the evolution of these anatomical and physiological differences has also emerged: climate change in Africa forced forest-dwelling hominins well equipped for climbing and living in or near trees, like the Sumatran orangutans of today, to come down from their trees and learn to live part-time—later full-time—on the ground.

    To survive, they had to organize themselves into communities (tribes) to hunt and for defense against large predators with much better physical equipment. They lived in organized communities and used their hands for tasks other than locomotion. They learned how to capture and control fire of natural origin (lightning strikes). The larger brains facilitated the development of proto-language.

    One of the most interesting, and controversial, theories of human evolution is that proto-humans learned to run long distances for purposes of persistence hunting (Heinrich, 2002; Bramble and Lieberman, 2004; Liebenberg, 2008). Several of the anatomical changes of our bodies are consistent with this theory and difficult to explain otherwise. They include hairlessness, cardio-vascular efficiency, and ankle and pelvis modifications. As persistence hunters, they learned to out-run faster four-legged animals by exhausting them. This mode of hunting is still practiced today in two locations, the Kalahari Desert and the Copper Canyon in Mexico. Improved cardio-vascular efficiency seems to have led to longer lifetimes. Longer lifetimes also enabled longer gestation periods for children.

    Anthropologists postulate that, around 350,000 years ago, Homo sapiens, H. neanderthalensis, H. denisova, H. floresiensis, and possibly others split off from the Homo erectus mainstream. These species—our competitors—had larger brains, up to 1450 cc. Leaping forward to 70,000 years ago, our species and the others were omnivorous hunter-gatherers. They lived in kinship bands, in or near forests, consuming fruits, nuts, edible fungi, eggs, rodents, rabbits, and other small animals that could be caught in traps. They also ate meat from occasional hunts of large animals such as wild pigs, deer, wild sheep, cattle, and even mastodons. Finally, and most important, they ate seed grains, beans, and roots that had to be cooked to be edible.

    Around 12,000 years ago, the glaciers were melting, and forests were growing on the newly exposed hills. Game was plentiful, and hunter-gatherers, including the ancestors of H. sapiens, were well-fed. But as the glaciers retreated, the climate became drier, and the forests were replaced by scrub and grasslands. This forced the hominins to choose between two options: to become peripatetic nomads moving from oasis to oasis (with cattle or sheep) or to build defensible settlements near a permanent source of water, plant crops, and domesticate other animals. Water was crucial. We cannot live more than a few days without it. The hominins who chose permanent settlements created agriculture. I discuss that option later in this chapter.

    Hunting in groups, as well as defense, necessitated improved communication, using breath control and tongue movements as well as voice box control. These changes facilitated teaching, as well as communication and forward planning. Thinking ahead for immediate survival purposes also eventually forced our ancestors to expect, and fear, bodily death while believing in some sort of life after death. This led to the social practices of burial of the dead and respect for (or worship of) ancestors. The cerebral demands of verbal communication favored larger brains, turning the cerebral cortex into the primary competitive advantage for our species. The brains of Homo sapiens now average 1.3 kg in weight, which is three times larger than the brains of our nearest surviving relatives (chimpanzees), with which we share 98% of our genes.¹ Yet the 2% of our genes we do not share with chimps separate us in important ways from all other animals.

    2.2 Pottery, Cooking, and Mobility

    Before cooking there had to be pottery. Scientists used to think pottery was invented after people started farming and began living in permanent villages. Over the last decade, however, scientists have unearthed pots and other containers in East Asia that are older than farming. The most ancient pieces of clay pots, found in Xianrendong Cave (China), are 19,000–20,000 years old, from the ice age. People in the Middle East were making simple clay pots 14,500 years ago. Fat was relatively rare in the foods available to them, so cooking would have been important, since heat releases more energy from meat and starchy plants like potatoes. What the cave dwellers cooked is unknown, but ancient clam and snail shells littered the Chinese cave where the oldest pottery was found. Those cave dwellers might have boiled animal bones to extract grease and marrow. It is thought that the cave dwellers might also have used the pots to brew alcohol.

    The use of yeast for making bread has an ancient history. Yeast microbes are probably among the earliest domesticated organisms. Yeast is a single-celled living organism in the fungus kingdom, although it was not known to be alive until the nineteenth century. There are more than 1500 known species of yeast. Of all the varieties, Saccharomyces cerevisiae has been used since ancient times in baking bread, making wine, and brewing beer. This fungus is known as baker’s yeast or brewer’s yeast.

    Yeast feeds on sugar and releases carbon dioxide and ethanol as by-products. The carbon dioxide helps bread rise, makes it softer and fluffier, contributes to the flavor, and provides texture.

    Archeologists digging in Egyptian ruins found early grinding stones and baking chambers for yeast-raised bread, as well as drawings of 4000-year-old bakeries and breweries. In 1680, the Dutch naturalist Anton van Leeuwenhoek (1632–1723) first observed yeast under the microscope, but at the time he did not consider it to be a living organism (Fig. 2.1).


    Fig. 2.1

    A pottery fragment from a Chinese cave. (Phys.org)

    Richard Wrangham has postulated that cooking came much earlier. In fact, he theorized that cooking made possible the increase in human brain size since H. erectus, during the last 1.5 million years. He claims that the evidence of cooking is unambiguous. (I am skeptical, because there is no evidence of pottery—needed for cooking—anywhere near that old.) His argument is inferential: the H. sapiens brain uses up to 25% of the food energy consumed by the 75 kg human body. A larger brain requires more food. Therefore, H. erectus must have found more food to support the bigger brain.

    It is true that a given primary food source enables a given body to support a larger brain with cooking than without cooking. Cooking makes some indigestible foods digestible and increases the efficiency of the digestion process. That makes food gathering more efficient and makes more energy available for other activities. Gareth Wyn Jones argues that cooking was a major energy revolution (Jones, 2019). However, he does not explain how cooking was accomplished without pottery.
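
    A rough back-of-the-envelope check gives a feel for that 25% figure. Assuming, purely for illustration, a daily intake of about 2000 kcal (my number, not one given in the book), the brain’s share works out to roughly 500 kcal per day, on the order of 25 W of continuous power:

    $$ 0.25\times 2000\,\mathrm{kcal/day} = 500\,\mathrm{kcal/day} \approx \frac{500\times 4184\,\mathrm{J}}{86{,}400\,\mathrm{s}} \approx 24\,\mathrm{W} $$

    On that accounting, any substantial increase in brain size had to be paid for with a corresponding increase in digestible food energy, which is exactly the gap that cooking is argued to have closed.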

    Quite possibly the most important single accomplishment of Homo sapiens was that they not only learned how to capture, preserve, and utilize fires of natural origin; they also learned how to make fire. Animals never did that. There is evidence of controlled use of fire in China by H. erectus, up to a million years ago, before the official appearance of Homo sapiens. The dates are highly uncertain, as is the picture below (Fig. 2.2).


    Fig. 2.2

    Homo erectus; diorama in the National Museum of Mongolia. (Copyright-free; Ulaanbaatar, sculptor anonymous)

    Our proto-human ancestors probably started by conserving fire from natural sources, such as brush fires started by lightning. The defensive value of a fire for a group of bipedal animals with small children to defend was obvious. Predatory animals from wolves to bears to wildcats of all kinds feared fire. The fire’s other benefits, such as warmth in cold weather, the ability to harden spear tips, to make clay pots, and to cook otherwise inedible foods, multiplied its value. Most of those innovations belong to prehistory. Much later (only during the last 5000 years) was heat from fire used (in primitive furnaces) to make bricks or ceramics or to smelt metals.

    Sometime in prehistory, primitive hominins learned to make fire from frictional heat. This probably happened during the last million years. It must have corresponded to the period when proto-humans migrated out of Africa into subtropical and temperate regions where seasonal temperatures vary significantly. The first invention needed for humans to be able to use fire consistently and safely is a fire starter. The simplest method used by Homo erectus is shown in Fig. 2.2, using the motion of the hands to create the necessary friction in the indentation.

    The bow drill (Fig. 2.3) is quite a bit more sophisticated. It was probably invented about the time of the invention of the projectile weapon we know as the bow and arrow. Instead of projecting an arrow, the cord is wrapped around the arrow’s shaft and used to twirl it, with its point resting in an indentation in a piece of dry wood. The friction from the twirling creates enough heat to start a fire in dry grass.


    Fig. 2.3

    Bow drill (modern version) as a fire starter. (Copyright free Model by Reddi, annotations by John Richfield)

    2.3 Keeping the Dark at Bay

    God said "Let there be light!" and there was light. For God’s people living on Earth, it was not so easy. To the people of Sumer, 10,000 years bce, light was everything good. Light was the gift of God to mankind. Night is for predators and prowlers. Daytime is safe; night is dangerous. Even now, light versus dark carries enormous symbolic weight.

    The history of light touches virtually every technology today. But when the sun was not shining, the only practical source of light for human purposes was flame from a fire . Early lighting fuels consisted of olive oil, beeswax, fish oil, whale oil, sesame oil, nut oil, and similar substances. These were the most commonly used fuels until the late eighteenth century.

    The ancient Chinese of the Spring and Autumn period (771–476 bce) made the first practical use of natural gas for lighting around 500 bce, using bamboo pipelines to transport both brine and natural gas for many miles. Chinese records dating back to 300 ad note the use of natural gas in the home for light and heat, delivered via bamboo pipes to the dwellings. The mausoleum of Qin Shi Huang (259–210 bc) contained candles made from whale fat. The word zhú was used to mean candle during the Warring States period (403–221 bc); some excavated bronze wares from that era feature a pricket (spike) thought to hold a candle.

    The Han Dynasty (202 bc–220 ad) Jizhupian dictionary of about 40 bc suggests that candles were being made of beeswax, while the Book of Jin (compiled in 648 ad) covering the Jin Dynasty (265–420) makes a solid reference to the beeswax candle in regards to its use by the statesman Zhou Yi (d. 322). An excavated earthenware bowl from the fourth century ad, located at the Luoyang Museum, has a hollowed socket where traces of wax were found. Generally, these Chinese candles were molded in paper tubes, using rolled rice paper for the wick, and wax from an indigenous insect that was combined with seeds.

    Wax from boiling cinnamon was used for temple candles in India. Yak butter was used for candles in Tibet. The early Greeks used candles to honor the goddess Artemis’s birth on the 6th day of every lunar month.

    Romans began making true dipped candles from tallow, beginning around 500 bce. While oil lamps were the most widely used source of illumination in Roman Italy, candles were common and regularly given as gifts during Saturnalia. After the collapse of the Roman Empire, trading disruptions made olive oil, the most common fuel for oil lamps, unavailable throughout much of Europe. As a consequence, candles became more widely used. By contrast, in North Africa and the Middle East, candlemaking remained relatively unknown due to the availability of olive oil.

    There is a fish called the eulachon or candlefish, a type of smelt which is found in the Pacific Ocean from Oregon to Alaska. During the first century ad, indigenous people from this region used oil from this fish for illumination. A simple candle could be made by putting the dried fish on a forked stick and then lighting it.

    The use of oil lamps and electric light is a major part of the Industrial Revolution. I return to this later, in Chap. 15.

    2.4 Pain, Anesthesia, and Surgery

    Humans have been practicing various forms of pain management for thousands of years. Stone Age peoples, believing that pain and disease were punishments handed down by the gods, tried various techniques to banish the pain, such as presenting religious offerings and sacrificing animals. They also used rattles, gongs, and other noise-making devices to frighten malevolent spirits out of a person’s body. Some Native American cultures sucked on pain pipes held against a person’s skin to extract the pain or illness, while South Americans practiced trepanation—the cutting of holes in the head to alleviate pain. (This required both special skill and very sharp instruments for making the holes.)

    Medicines derived from willow trees and other salicylate-rich plants have been part of the pharmacopeia dating back at least to ancient Sumer. The Ebers Papyrus, an Egyptian medical text from ca. 1543 bce, mentions the use of willow and myrtle (another salicylate-rich plant) to treat fever and pain.

    Willow bark preparations became a standard part of the materia medica of Western medicine beginning at least with the Greek physician Hippocrates in the fifth century bce; he recommended chewing on willow bark to relieve pain or fever and drinking tea made from it to relieve pain during childbirth. The Roman encyclopedist Celsus, in his De Medicina of ca. 30 ad, suggested willow leaf extract to treat the four signs of inflammation: redness, heat, swelling, and pain. Willow treatments also appeared in Dioscorides’ De Materia Medica and Pliny the Elder’s Natural History. By the time of Galen, around the year 200 ce, willow bark was commonly used throughout the Roman world and later, after the Arab–Byzantine wars, passed on to the Arab world as a small part of a large, growing botanical pharmacopeia.

    In Ancient Egypt, electric fish were taken from the Nile and laid on the wounds of patients—not too ridiculous when you consider that a similar technique, called transcutaneous electrical nerve stimulation (TENS), is sometimes used today for lower back pain and arthritis aches. The Ancient Greeks, on the advice of Hippocrates, used willow bark and the chewing of willow leaves to help women during childbirth—and they were not too far off the mark. Willows contain a form of salicylic acid, the active ingredient of aspirin.

    The oldest operation for which evidence exists is trepanation in which a hole is drilled or scraped into the skull to expose the brain. Out of 120 prehistoric skulls found at one burial site in France dated to 6500 bce, 40 had trepanation holes. There is evidence from Russia of trepanation dated to approximately 12,000 bce. There is significant evidence of healing of the bones of the skull in prehistoric skeletons, suggesting that up to 50% survived the operation. Examples of healed fractures in prehistoric human bones suggest that setting and splinting were practiced by the Aztecs.

    Bloodletting is one of the oldest medical practices, having been practiced among diverse ancient peoples, including the Mesopotamians, the Egyptians, the Greeks, the Mayans, and the Aztecs. In Greece, bloodletting was in use around the time of Hippocrates. This practice has been abandoned in modern times, as it was based on a misunderstanding of the causes of disease.

    The Sumerians developed several important medical techniques: in Nineveh, archeologists have discovered bronze instruments with sharpened obsidian resembling modern-day scalpels, knives, trephines, etc. The Code of Hammurabi, one of the earliest Babylonian codes of law, contains specific legislation regulating surgeons and medical compensation, as well as malpractice and victim’s compensation: for example, "If a physician makes a large incision with an operating knife and cure it, or if he opens a tumor (over the eye) with an operating knife, and saves the eye, he shall receive ten shekels in money."

    In the first monarchic age of Egypt (2700 bce), a treatise on surgery was written by Imhotep, the vizier of Pharaoh Djoser. So famed was he for his medical skill that he became the Egyptian god of medicine. Other famous physicians from the Ancient Empire (from 2500 to 2100 bce) were Sachmet, the physician of Pharaoh Sahure, and Nesmenau, whose office resembled that of a medical director. One of the doorjambs of the entrance to the Temple of Memphis bears the oldest recorded engraving of a medical procedure: circumcision. Engravings in Kom Ombo, Egypt, depict surgical tools.

    The most important discovery relating to ancient Egyptian knowledge of medicine is the Ebers Papyrus (1550 bce), named after its discoverer Georg Ebers. It includes recipes, a pharmacopeia, and descriptions of numerous diseases as well as cosmetic treatments. It mentions how to surgically treat crocodile bites and serious burns, recommending the drainage of pus-filled inflammation but warning against certain diseased skin. The Edwin Smith Papyrus (1600 bce) is a manual for performing traumatic surgery and gives 48 case histories. The Smith Papyrus describes a treatment for repairing a broken nose and the use of sutures to close wounds. Evidence has been found that people of the Indus Valley Civilization knew how to drill holes in teeth as far back as 7000 bce.

    Sushruta (c. 600 bce) is considered the founding father of surgery. He authored a series of volumes in Sanskrit known as the Sushruta Samhita. It is one of the oldest known surgical texts, and it describes in detail the examination, diagnosis, treatment, and prognosis of numerous ailments, as well as procedures for performing various forms of cosmetic surgery, plastic surgery, and rhinoplasty.

    In The Iliad, Homer names two doctors, the two sons of Asklepios, the admirable physicians Podaleirius and Machaon, and one acting doctor, Patroclus. Because Machaon is wounded and Podaleirius is in combat, Eurypylus asks Patroclus to "cut out this arrow from my thigh, wash off the blood with warm water and spread soothing ointment on the wound."

    The Hippocratic Oath, written in the fifth century bc, provides the earliest protocol for the professional conduct and ethical behavior of a young physician. Works from the Hippocratic corpus include On the Articulations or On Joints, On Fractures, On the Instruments of Reduction, The Physician’s Establishment or Surgery, On Injuries of the Head, On Ulcers, On Fistulae, and On Haemorrhoids.

    Alexandrian surgeons were responsible for developments in ligature (haemostasis), lithotomy, hernia operations, ophthalmic surgery, plastic surgery, methods of reduction of dislocations and fractures, tracheotomy, and the use of mandrake as anesthesia. Most of what we know of them comes from Celsus and Galen of Pergamum. Galen’s On the Natural Faculties, Books I, II, and III, described very complex surgical operations. He was one of the first to use ligatures in his experiments on animals. Galen is also known as "the king of the catgut suture."

    The Sumerians are said to have cultivated and harvested the opium poppy (Papaver somniferum) in lower Mesopotamia as early as 3400 bce, though this has been disputed. The Sumerian goddess Nidaba is often depicted with poppies growing out of her shoulders.

    About 2225 bce, the Sumerian territory became a part of the Babylonian empire. Knowledge and use of the opium poppy and its euphoric effects thus passed to the Babylonians, who expanded their empire eastwards to Persia and westwards to Egypt, thereby extending its range to these civilizations. Apparently, opium was known to the Assyrians in the seventh century bc. The term Arat Pa occurs in the Assyrian Herbal, a collection of inscribed Assyrian tablets dated to c. 650 bce. It may be the etymological origin of the Latin "papaver."

    The ancient Egyptians had crude analgesics and sedatives, including possibly an extract prepared from the mandrake fruit. The use of preparations similar to opium in surgery is recorded in the Ebers Papyrus, an Egyptian medical papyrus written in the Eighteenth dynasty. However, it is questionable whether opium itself was known in ancient Egypt. The Greek gods Hypnos (Sleep), Nyx (Night), and Thanatos (Death) were often depicted holding poppies.

    Prior to the introduction of opium to ancient India and China, these civilizations pioneered the use of cannabis incense and aconitum. Around 400 bce, the Sushruta Samhita (a text from the Indian subcontinent on ayurvedic medicine and surgery) advocated the use of wine with incense of cannabis for anesthesia.

    In China, instruments resembling surgical tools have also been found in Bronze Age archeological sites dating from the Shang Dynasty, along with seeds likely used for herbalism. Hua Tuo (140–208 ad) was a famous Chinese physician during the Eastern Han and Three Kingdoms era. He was the first person to perform surgery with the aid of anesthesia, some 1600 years before the practice was adopted by Europeans. Bian Que (Pien Ch’iao) was a miracle doctor described by the Chinese historian Sima Qian in his Shiji. Another book, the Liezi (Lieh Tzu), describes Bian Que conducting a two-way exchange of hearts between two people. This account also credited Bian Que with using general anesthesia before Hua Tuo, but the author may have been compiling stories from other works. Nonetheless, it traces the concept of heart transplantation back to around 300 ad (Fig. 2.4).


    Fig. 2.4

    Hua Tuo (c. ad 145–220). Woodblock print by Utagawa Kuniyoshi. (Wikipedia)

    The first attempts at general anesthesia were probably herbal remedies administered in prehistory. Alcohol is the oldest known sedative; it was used in ancient Mesopotamia thousands of years ago.

    Bian Que (c. 300 bc) was a legendary Chinese internist and surgeon who reportedly used general anesthesia for surgical procedures. It is recorded in the Book of Master Han Fei (c. 250 bc), the Records of the Grand Historian (c. 100 bc), and the Book of Master Lie (c. ad 300) that Bian Que gave two men, named Lu and Chao, a toxic drink which rendered them unconscious for 3 days, during which time he performed a gastrostomy upon them.

    Hua Tuo (c. ad 145–220) was a Chinese surgeon of the second century ad. According to the Records of Three Kingdoms (c. ad 270) and the Book of the Later Han (c. ad 430), Hua Tuo performed surgery under general anesthesia using a formula he had developed by mixing wine with a mixture of herbal extracts he called mafeisan. Hua Tuo reportedly used mafeisan to perform even major operations such as resection of gangrenous intestines. Before the surgery, he administered an oral anesthetic potion, probably dissolved in wine, in order to induce a state of unconsciousness and partial neuromuscular blockade.

    The exact composition of mafeisan, like all of Hua Tuo’s clinical knowledge, was lost when he burned his manuscripts just before his death. The composition of the anesthetic powder was not mentioned in either the Records of Three Kingdoms or the Book of the Later Han. Because Confucian teachings regarded the body as sacred and surgery was considered a form of body mutilation, surgery was strongly discouraged in ancient China. Because of this, despite Hua Tuo’s reported success with general anesthesia, the practice of surgery in ancient China ended with his death.

    The name mafeisan combines ma (meaning "cannabis" or "hemp"), fei (meaning "boiling" or "bubbling"), and san (meaning "to break up or scatter," or "medicine in powder form"). Therefore, the word mafeisan probably means something like "cannabis boil powder." Many sinologists and scholars of traditional Chinese medicine have guessed at the composition of Hua Tuo’s mafeisan powder, but the exact components still remain unclear. His formula is believed to have contained some combination of:

    bai zhi (Angelica dahurica),

    cao wu (Aconitum kusnezoffii, Kusnezoff’s monkshood, or wolfsbane root),

    chuān xiōng (Ligusticum wallichii, or Szechuan lovage),

    dong quai (Angelica sinensis, or female ginseng),

    wu tou (Aconitum carmichaelii, rhizome of Aconitum, or Chinese monkshood),

    yang jin hua (Flos Daturae metelis, or Datura stramonium, jimson weed, devil’s trumpet, thorn apple, locoweed, moonflower),

    ya pu lu (Mandragora officinarum)

    rhododendron flower, and

    jasmine root.

    Others have suggested the potion may have also contained hashish, bhang, shang-luh, or opium.

    It has been suggested that Hua Tuo may have discovered surgical analgesia by acupuncture, and that mafeisan either had nothing to do with anesthesia or was simply an adjunct to his strategy. Many physicians have attempted to recreate the same formulation based on historical records, but none have achieved the same clinical efficacy attributed to Hua Tuo. In any event, Hua Tuo’s formula did not appear to be effective for major operations. Other substances used from antiquity for anesthetic purposes include extracts of juniper and coca.

    By the eighth century ad, Arab traders had brought opium to India and China. Arabic and Persian physicians may have been among the first to utilize oral as well as inhaled anesthetics. Ferdowsi (940–1020) was a Persian poet who lived in the Abbasid Caliphate. In Shahnameh, his national epic poem, Ferdowsi described a caesarean section performed on Rudaba. A special wine prepared by a Zoroastrian priest was used as an anesthetic for this operation. Although Shahnameh is fictional, the passage nevertheless supports the idea that general anesthesia had at least been described in ancient Persia, even if not successfully implemented.

    Around 1000 ad, Abu al-Qasim al-Zahrawi (936–1013), an Arab physician living in Al-Andalus who has been described as the father of surgery, published the 30-volume Kitab al-Tasrif, the first illustrated work on surgery. In this book, he wrote about the use of general anesthesia for surgery. Around 1020, Ibn Sīnā (980–1037) described the use of inhaled anesthesia in The Canon of Medicine. The Canon described the "soporific sponge," a sponge imbued with aromatics and narcotics, which was to be placed under a patient’s nose during surgical operations.

    Ibn Zuhr (1091–1161) was another Arab physician from Al-Andalus. In his twelfth-century medical textbook Al-Taisir, Ibn Zuhr describes the use of general anesthesia. These three physicians were among many who performed operations under inhaled anesthesia with the use of narcotic-soaked sponges. Opium made its way from Asia Minor to all parts of Europe between the tenth and thirteenth centuries.

    By the Middle Ages, a variety of herbs were used, including theriac, a concoction prepared in a honey base with about 64 different compounds in it. Opiates, analgesics originally derived from the opium poppy, have also been used for thousands of years. Morphine, the active substance, was named after Morpheus, the Greek god of dreams. Opiates represented an entirely new class of pain relief. They were extremely powerful, but also very addictive.

    Throughout 1200–1500 ad in England, a potion called dwale was used as an anesthetic. This alcohol-based mixture contained bile, opium, lettuce, bryony, henbane, hemlock, and vinegar. Surgeons roused their patients by rubbing vinegar and salt on their cheekbones. One can find records of dwale in numerous literary sources, including Shakespeare's Hamlet and the John Keats poem Ode to a Nightingale. In the thirteenth century, we have the first prescription of the spongia soporifica—a sponge soaked in the juices of unripe mulberry, flax, mandragora leaves, ivy, lettuce seeds, lapathum, and hemlock with Hyoscyamus. After treatment and/or storage, the sponge could be heated and the vapors inhaled with anesthetic effect.

    Alchemist Ramon Llull has been credited with discovering diethyl ether in 1275. Aureolus Theophrastus Bombastus von Hohenheim (1493–1541), better known as Paracelsus, discovered the analgesic properties of diethyl ether around 1525. It was first synthesized in 1540 by Valerius Cordus, who noted some of its medicinal properties. He called it oleum dulce vitrioli, a name that reflects the fact that it is synthesized by distilling a mixture of ethanol and sulfuric acid (known at that time as oil of vitriol). August Sigmund Frobenius gave the name Spiritus Vini Æthereus to the substance in 1730.

    2.5 Water Management and Farming

    It is important to reiterate a key fact: All metabolic functions, including digestion, excretion, sensing, processing sensory information—and brain work (thinking)—require metabolic work. More to the point, all metabolisms depend on chemical reactions that depend upon water. Also, most metabolisms take place in water under tightly defined circumstances.

    All plants need water for photosynthesis. Photosynthesis is a process powered by energetic photons in sunlight. In photosynthesis, as most schoolchildren know by now, carbon dioxide and water are joined in the presence of chlorophyll. Chlorophyll is a pigment, held in proteins in algae and the leaves of plants, that traps the photons from sunlight. The photons split water molecules (H2O) into protons (H+), electrons (e−), and oxygen (O2) molecules. The latter are released to become the oxygen that we breathe and on which more advanced aerobic life depends.

    There are three different carbon fixation pathways, known as C3, C4, and CAM. In C3, the energy released by splitting the water is used to react carbon dioxide with ribulose bisphosphate (RuBP, a 5-carbon sugar), yielding two molecules of 3-phosphoglycerate through the following reaction:

    $$ \mathrm{CO}_2+\mathrm{H}_2\mathrm{O}+\mathrm{RuBP}\to 2\;(3\text{-phosphoglycerate}) $$

    C3 is the most common of the three pathways, being responsible for 95% of all plant biomass on Earth, including most of the important food crops, such as rice, wheat, soybeans, and barley. Plants that survive solely on C3 fixation (C3 plants) tend to thrive in areas where sunlight intensity is moderate, temperatures are moderate, carbon dioxide concentrations are around 200 ppm or higher, and groundwater is plentiful. C3 plants lose up to 97% of the water taken up through their roots by transpiration. C4 and CAM plants have adaptations that allow them to survive in hot and dry areas, and they can therefore out-compete C3 plants in these areas.

    In short, both water and carbon dioxide are essential for all living organisms. In other words, our food is also dependent on the availability of water. Only a very small percentage of the water used by plants is actually required for photosynthesis. A far greater fraction is needed for transpiration (heat removal). In C3 plants, about 600 g of water is evaporated (transpired) per gram of carbon fixation.
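
    To get a sense of scale for that ratio, here is a rough illustrative calculation (the 600 g figure is from the text; the 1 kg of fixed carbon is simply a round number of my own):

    $$ 1000\,\mathrm{g\ of\ carbon}\times 600\,\frac{\mathrm{g\ water}}{\mathrm{g\ carbon}} = 6\times 10^{5}\,\mathrm{g\ of\ water}\approx 0.6\,\mathrm{m}^3 $$

    In other words, every kilogram of carbon fixed by a C3 crop is paid for with something like 600 L of transpired water, which hints at why the irrigation demands discussed later in this chapter are so large.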

    Animals, like us, need water for digestion and for temperature control. More specifically, we humans and animals need water to drink, to digest food, and to carry metabolic wastes away as urine or through the skin as sweat or perspiration, depending on your preference. Our bodies are 70% water. All of the food we eat, whether animal or vegetable, is derived directly or indirectly from photosynthesis.

    By 10,000 bce, the H. sapiens population had, in several regions, outgrown the food supply locally available for hunting and gathering. People in North Africa, the Middle East, and South Asia learned to farm out of necessity, as game became scarcer and the margins of the forest kept retreating. What did this mean for our Homo sapiens ancestors? Roughly speaking, we divided ourselves into two branches.

    One branch chose the itinerant strategy, moving from oasis to oasis, but always guided by the need for water. They would stay long enough to consume most of the bananas and coconuts, or for their herds of cattle, sheep, or goats to consume the grass, then move on. (Some of their descendants still live that way.) Eventually, population pressure in the lowlands forced the hungrier or more adventurous of our ancestors to look for greener pastures, metaphorically speaking.

    Grasslands, like the Pampas, offered another solution. This required plowing of soils with deep-rooted perennial grass species. It also led to the taming of animals and the invention of plows, yokes, and harnesses to enable large animals, like bullocks, to do the plowing and horses to carry the people. Thus, the rainfed grasslands were eventually settled, depending still on springs and spring-fed streams, but also on ponds for water storage.

    The other branch of humanity settled more permanently near a spring, a flowing river, or a coastal location near a large body of water. The latter group became fishermen, at first. Where both game and grass were scarce, our human ancestors learned to plant seeds and wait for them to grow. They were probably part-time hunter-gatherers, at first, but they became farmers as time passed. This transition was easiest on the flat alluvial land created by annual silt deposits in river valleys. Those river valley lands were fertile, thanks to the annual spring floods, and there were no deep-rooted plants (perennial grasses and scrub) to make cultivation difficult.

    It was assumed, when I was a student, that farming became established because it was easier and more productive than hunting. Anthropologists and archeologists now know that the truth is otherwise. The evidence is the smaller size of the skeletons of agricultural workers as compared to their hunter-gatherer forebears. However, it is true that once farming got started in fertile areas, such as river deltas, food production did grow rapidly. In some years, there were surpluses. Storage of dry surpluses increased. The idea of planning for 2 (or 3 or 6) years of surplus followed by a year of drought took hold. Farmers—or their rulers—learned to store grain (seed) crops from year to year. The grain became tribute to war leaders. Eventually, the agricultural communities supported larger populations than the forest dwellers.

    As populations grew, the need for water management grew in parallel. The earliest known use of irrigation technology dates to the sixth millennium bce in Khuzestan in the southwest of present-day Iran. Ancient Persia used irrigation as far back as the sixth millennium bce to grow barley in areas with insufficient natural rainfall. The Qanats, a system of vertical wells and gently sloping tunnels under a steep hill, were designed to capture and divert water from an underground stream. They were developed about 800 bce and are among the oldest known irrigation methods still in use today. They are now found in Asia, the Middle East, and North Africa. See Fig. 2.5.


    Fig. 2.5

    Qanat: A Persian water management system that still works. (Wikipedia)

    The noria, a water wheel with clay pots around the rim, powered by the flow of the stream (or by animals where the water source was passive), first came into use at about this time among Roman settlers in North Africa. By 150 bce, the pots were fitted with valves to allow smoother filling as they were forced into the water.

    Canals and levees formed the basis of land irrigation and flood control in ancient Sumer. Located in the lower reaches of the Tigris and Euphrates Rivers in southern Mesopotamia, today’s southern Iraq, this is an area of scarce rainfall but major flooding in late winter and spring. From around 3500 bc and over the next two millennia, Sumerians pioneered control of the water flow and the development of agriculture whose produce would feed the populations of over 20 city-states. However, this process ended, finally, because of increasing salt concentrations in the soil.

    The southern Mesopotamian plains, where the Sumerians lived, appeared flat, but, as today, they had a changing seasonal landscape. In late winter and spring, snowmelt in the mountains to the north and east brought floods that carried silt and other sediments over more than 1800 km to the south. Branches of the lower Tigris and Euphrates rivers meandered and merged—anastomosed—over the plains, producing a changing pattern of river levees, turtleback (arched) islands, dune fields, and marshes that shifted with the next flood. During the summer and fall, the soil was baked hard and dry by the sun and eroded by the wind.

    The Sumerians used natural levees, which are embankments created by river sediments as a river floods. They are structures adjacent to the river that taper to the landward side along a gentle slope. Levee widths during the Sumerian period were as wide as 1 km. River levels could vary between 4 and 6 m during floods. The levee crest could be as high as 10 m above the surrounding plains.

    Sumerians built up the natural levees, where necessary, by making piles of reeds impregnated with bitumen, from seepage of crude oil common in the Persian Gulf. Baked mud bricks, also soaked in bitumen, were placed on top of the reed base. This not only increased the height of river banks, but also protected them from erosion by water currents. During dry periods, Sumerians hoisted water in buckets over the levees to water the cultivated land. They also made holes in the levee walls, creating channels for the water to flow to adjacent fields.

    Irrigation was practiced in the Indus Valley Civilization, beginning around 4500 bc. This increased the size and prosperity of their agricultural settlements. They developed sophisticated irrigation and water storage systems, including artificial reservoirs at Girnar dated to 3000 bce, and a canal irrigation system from c. 2600 bce. Large-scale agriculture was practiced in the Indus Valley, based on the network of canals.

    Ancient Egyptians practiced basin irrigation using the flooding of the Nile to inundate land plots which had been surrounded by dykes. The floodwater remained until the fertile sediment had settled before the engineers returned the surplus water to the river. There is evidence that pharaoh Amenemhet III in the twelfth dynasty (about 1800 bce) utilized the natural lake of the Faiyum Oasis as a reservoir to store surplus floodwater for use during dry seasons. The lake swelled annually from the Nile floods.

    The Ancient Nubians developed another form of irrigation, based on a waterwheel-like device called a sakia. Irrigation began in Nubia sometime between the third and second millennia bce. It also depended upon the floods of the Nile River and its tributaries in what is now the Sudan.

    In sub-Saharan Africa, irrigation reached the Niger River region cultures and civilizations by the first or second millennium bce, also based on wet-season flooding and water storage. Evidence of terrace irrigation occurs in pre-Columbian America, early Syria, India, and China. In the Zana Valley of the Andes Mountains in Peru, archeologists have found remains of three irrigation canals. The earliest dates from the fourth millennium bce, and the others date from the third millennium bce and the ninth century ad. These canals provide the earliest record of irrigation in the New World.

    The irrigation works of ancient Sri Lanka date from about 300 bce, in the reign of King Pandukabhaya. They were developed continuously for the next thousand years, extending into southern India. In addition to underground canals, the Sinhalese were the first to build artificial reservoirs to irrigate thirsty paddy fields. Most of those irrigation systems still exist, thanks to good engineering. The system was restored and extended during the reign of King Parakramabahu (1153–1186 ad). See Figs. 2.6 and 2.7.


    Fig. 2.6

    Irrigation in Tamil Nadu, India. (Wikipedia)


    Fig. 2.7

    Water gardens in Sigiriya, Sri Lanka. (Wikipedia)

    The oldest known hydraulic engineers of China were Sunshu Ao (sixth century bce) and Ximen Bao (fifth century bce) both of whom worked on large-scale irrigation systems. The Dujiangyan Irrigation System in Sichuan was devised by the Qin Chinese hydrologist and irrigation engineer Li Bing. It was built in 256 bce to irrigate a large area of farmland. That system still functions today. By the second century ad, during the Han Dynasty, the Chinese also used chain pumps which lifted water from lower to higher elevations. These were powered by manual foot pedals, hydraulic waterwheels, or rotating mechanical wheels pulled by oxen.

    Jang Yeong-sil, a Korean engineer of the Joseon Dynasty, working under King Sejong the Great, invented the world’s first rain gauge (uryanggye) in 1441 ad. It was installed in irrigation tanks as part of a nationwide system to measure and collect rainfall for agricultural applications.

    The earliest known agricultural irrigation canal system in the area of the present-day USA was discovered in Marana, Arizona (adjacent to Tucson), in 2009. It dates to between 1200 bc and 800 bc, predating the Hohokam culture by 2000 years. In North America, the Hohokams relied on irrigation canals to support a significant population in the Southwest by ad 1300. They constructed an assortment of simple canals combined with weirs. Between the seventh and fourteenth centuries, they built and maintained extensive irrigation networks along the lower Salt and middle Gila Rivers that rivaled the complexity of those used in the ancient Near East, Egypt, and China.

    In the year 2000, the total area of fertile land equipped with irrigation infrastructure worldwide was about 2,788,000 km² (689 million acres). About 68% of this area is in Asia, 17% in the Americas, 9% in Europe, 5% in Africa, and 1% in Oceania. By 2012, the area of irrigated land had increased to an estimated total of 3,242,917 km² (801 million acres), which is nearly the size of India. Irrigation has been a central feature of agriculture for over 5000 years and is the product of many cultures.
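
    As a quick sanity check on those figures (using the standard conversion of roughly 247 acres per km², which is not stated in the text), the 2012 number works out as

    $$ 3{,}242{,}917\,\mathrm{km}^2\times 247.1\,\frac{\mathrm{acres}}{\mathrm{km}^2}\approx 8.0\times 10^{8}\ \mathrm{acres}\approx 801\ \text{million acres} $$

    and India’s land area is roughly 3.29 million km², so the "nearly the size of India" comparison holds.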

    Quite apart from creating canals and channels to carry water from one place to another, some of the most important engineering projects of the modern world are canals built to carry ships bypassing long oceanic routes. The Suez Canal and the Panama Canal are the two most obvious examples, although there are many smaller canals serving similar purposes. The economic importance of canals today is difficult to estimate, but undoubtedly, they play a major role in trade.

    2.6 Agriculture

    Agriculture, in the sense of planting, harvesting, and animal domestication, began independently in different parts of the globe and included a diverse range of biota. At least 11 separate regions of the Old and New World were involved as independent centers of origin, mostly in alluvial plains and river deltas. The earliest and most important of them was the Levant, especially the so-called Fertile Crescent in the Middle East, where the Sumerians lived.

    Wild grains were collected and eaten from at least 20,000 bce. From around 9500 bce, the eight Neolithic founder crops—emmer wheat, einkorn wheat, hulled barley, peas, lentils, bitter vetch, chickpeas, and flax—were cultivated in the Levant. Rye may have been cultivated earlier, but this remains controversial. Rice was domesticated in China by 6200 bce, with the earliest known cultivation from 5700 bce, followed by mung, soy, and azuki beans.

    Sugarcane and some root vegetables were domesticated in New Guinea around 7000 bce. Sorghum was domesticated in the Sahel region of Africa by 5000 bce. In the Andes of South America, the potato was domesticated between 8000 bce and 5000 bce, along with beans and coca. Bananas were cultivated and hybridized in the same period in Papua New Guinea. In Mesoamerica, wild teosinte was domesticated to maize by 4000 bce. Cotton was domesticated in Peru by 3600 bce. Camels were among the last animals to be domesticated, perhaps around 3000 bce.

    In the Middle Ages, both in the Islamic world and in Europe, agriculture was transformed by improved techniques and the diffusion of crop plants, including the introduction of sugar, rice, cotton, and fruit trees such as the orange to Europe by way of Al-Andalus. After the voyages of Christopher Columbus in 1492, the Columbian exchange brought New World crops such as maize, potatoes, sweet potatoes, and manioc to Europe, and Old World crops such as wheat, barley, rice, and turnips, as well as livestock including horses, cattle, sheep, and goats, to the Americas. Agriculture increased food production per unit area, which meant that a given area could support a larger population. Those larger populations allowed farming cultures to defeat hunter-gatherer cultures by sheer force of numbers, which in turn led to the spread of agricultural societies across the globe.

    Jared Diamond points out that one body of evidence for the difference in spread along geographic axes is the spread of domesticated crops (op cit). Many crops spread across Asia following a single domestication, while crops like cotton and squash were domesticated independently in multiple areas of Mesoamerica. Multiple domestications occurred where use of a crop spread too slowly for any one domesticated version to take over the whole region.

    Even though the hunter-gatherers were healthier, on average, than the farmers who displaced them, they nevertheless lost out in the survival race. In 10,000 bce, estimates put the hunter-gatherer population of Homo sapiens at 5–8 million. By the time of the Roman Empire, far fewer nomadic foragers remained, mostly in Australia, South America, and parts of Africa, such as the Sahel.

    One reason for this outcome was that farming communities developed immunity to diseases that wiped out hunter-gatherer populations. Some diseases (like measles) are crowd diseases. They require a large population to sustain themselves because they act quickly: You either die or develop immunity. For the disease to sustain itself, there must be a continuous supply of newborns who have not yet been exposed, since the survivors are immune and can no longer host it. Only agricultural communities could grow to the required population size. North America was populated by about 20 million Native Americans when Columbus landed in 1492. Within two centuries, 95% of the native population had died, most of them from infectious crowd diseases like measles and chickenpox, to which most people of European ancestry were immune.

    2.7 Extensions of the Legs: Mobility and Transport

    The evolution of human society sometimes brought about situations requiring greater individual mobility than two legs could provide. The first enhancement to mobility was probably a floating log, which gradually evolved into a barge, propelled by a pole—one sees this in parts of Africa today—followed after millennia by paddles, oars, and sails. Barges eventually evolved into boats and ships.

    Ships seem to have preceded wheeled vehicles, probably because navigable rivers (such as the Nile) preceded roads. The earliest historical evidence of boats is found in Egypt during the fourth millennium bce. Egypt was narrowly aligned along the Nile, totally dependent on it, and served by transport on its uninterruptedly navigable surface below the First Cataract (at modern-day Aswān). There are representations of Egyptian boats used to carry obelisks on the Nile from Upper Egypt that were as long as 300 ft (about 90 m), longer than any warship constructed in the era of wooden ships. The nature of this cargo tells us that water transport of heavy loads probably developed in step with quarrying and construction activities.

    The Egyptian boats commonly featured sails as well as oars (Fig. 2.8). Because they were confined to the Nile and depended on winds in a narrow channel, propulsion by oarsmen was essential. Dependence on rowing—and galley slaves—continued into recent centuries, even in the Mediterranean. Most early Nile boats had a single square sail as well as one row (bank) of oarsmen. As Egypt grew, several levels of rowers, one over the other, came into use, probably because very elongated single-level boats were difficult to maneuver in the open sea. The later Roman two-level bireme and three-level trireme predominated, although more than a dozen banks of oars were used to propel the largest boats.

    Fig. 2.8 An ancient Egyptian papyrus showing a boat on the Nile River. (https://cdn.britannica.com/53/145253-050-D91E22CF/papyrus-Egyptian-boat-Nile-River.jpg)

    Navigation on the Mediterranean Sea began among Egyptians as early as the third millennium bce. Voyages to Crete were among the earliest. These were followed by voyages, guided by landmark navigation, northward along the coast to Phoenicia. Figure 2.9 shows a drawing of a seagoing ship from a royal tomb. It is totally unclear to modern eyes what the people on board were doing and why they ranged in size from very tall to very tiny. The Egyptians built a canal to carry boats from the Nile to the Red Sea. Still later, there were trading journeys (by sailing ships) down the eastern coast of Africa. According to the fifth-century bce Greek historian Herodotus, the king of Egypt (c. 600 bce) dispatched a fleet from a Red Sea port that returned to Egypt via the Mediterranean after a journey of more than 2 years. Cretan and Phoenician voyagers gave increasing attention to the specialization of ships for trade.

    Fig. 2.9 Drawing of an Egyptian seagoing ship, c. 2600 bce, depicted on the bas-relief in the pyramid of King Sahure. (Courtesy of the Science Museum, London. https://cdn.britannica.com/05/4605-050-DF3DB4D8/Drawing-Egyptian-Sahure-ship-vessels-bas-relief-Abu-c-2600-bce.jpg)

    The basic functions of the warship and the cargo ship determined their designs. Warships required speed, space for fighting men, and the ability to maneuver in any direction. Thus, long, narrow ships, propelled by banks of rowers (often galley slaves chained in place), became the standard for naval warfare. In contrast, trading ships sought to carry as much tonnage of goods as possible with a minimal crew, letting the wind on the sails do much of the work. Hence, the trading vessel became wider, with a deeper draft. The trading vessel also required greater freeboard (the height between the waterline and the upper deck), as wave motion on the open seas could easily swamp the low-sided galleys propelled by banks of oarsmen.

    The invention of the wheel and of wheeled vehicles (wagons and carts supported and moved on rotating wheels) had a profound effect on the human economy and society. Wheeled vehicles enabled trade networks. With access to wider markets, craftspeople could more easily specialize, and villages could evolve into towns and cities when there was less need to live close to the farms producing food. By the same token, wheeled carts facilitated periodic farmers’ markets. Finally, wheeled vehicles (pulled by horses) enabled war leaders to extend their range of control, and wars could be waged farther afield.

    The earliest evidence for wheel use consists of drawings on clay tablets, found nearly simultaneously throughout the Mediterranean region and dating to about 3500 bce. It was not only the invention of wheeled vehicles that drove these societal changes. Wheels are only useful in combination with suitable draft animals, such as horses and oxen, and prepared roadways. The earliest planked roadway we know of, at Plumstead in the UK, dates to about the same time as the wheel, 3700 bce. Cattle (oxen) were domesticated about 8000 bce and
