For Your Information: About Information, the Universe and the Modern Age
Ebook, 709 pages, 8 hours

About this ebook

In recent times, physicists have come to appreciate information’s central role in the universe’s grand plan. That and the fact that an explicit understanding of the informational relationships involved may well be key to unlocking many of the universe’s deepest secrets. That makes the birth of both Computer and Information Science not only essential to the explosion of modern technological success, but also to our understanding of reality itself. In recognizing that, what unfolds is a story not only about Alan Turing and his pioneering colleagues, but also great thinkers like Albert Einstein, Michael Faraday, Ludwig Wittgenstein and others. It therefore pulls in much of modern history and touches on seminal events like the birth of the atomic bomb. It also hints at the reasons behind the various social and political divides we see in the world today. So, in many ways, the story of how we became more informed about information is also the story of the modern age.

What you will read here is the role that information plays in that ongoing saga and many of the twists and turns that have brought us to where we are with information today. In it you will learn that, unbeknown to Turing and others, their work would not only help overthrow the Nazis and thaw the chilling atmosphere of the Cold War to come, but also echo down the ages to remain relevant in a conflict still raging today, one that sees Computer and Information Scientists at loggerheads as they fight to find a right and justifiable place for meaning in information’s definition.

About The Open Group Press
The Open Group Press is an imprint of The Open Group for advancing knowledge of information technology by publishing works from individual authors within The Open Group membership that are relevant to advancing The Open Group mission of Boundaryless Information Flow™. The key focus of The Open Group Press is to publish high-quality monographs, as well as introductory technology books intended for the general public, and act as a complement to The Open Group standards, guides, and white papers. The views and opinions expressed in this book are those of the authors, and do not necessarily reflect the consensus position of The Open Group members or staff.
Language: English
Publisher: Van Haren Publishing
Release date: Aug 12, 2024
ISBN: 9789401812269

    Book preview

    For Your Information - Philip Tetlow

    Part I: History and Science


    Chapter 1. Bring on the Information Cake

    Summary: This chapter first explains why information feels so familiar, then goes on to explain why our commonly held views of it might be somewhat superficial.

    "All you really need to know for the moment is that the universe is a lot more complicated than you might think, even if you start from a position of thinking it’s pretty damn complicated in the first place." — Douglas Adams

    The universe has a wonderful symmetry. A harmony. A geometry.

    If you want to know why, I mean really, really want to know why… try this:

    The upshot of it all: In the beginning everything was created, and, in the grand scheme of things, it took no time at all. Either that or everything just decided to show up, which, we suspect, it’s done before. Whichever way, those of us who have better things to do aren’t best pleased. Firstly, because the manual hasn’t been put back yet and, secondly, because everyone knows it would have been better just to make cake. Especially information cake.

    No, honestly, that’s it. No more, no less. The rest of this book will try to explain why.

    As you might have guessed, this unconventional insight is written in the style of Douglas Adams, the much-loved author of The Hitchhiker’s Guide to the Galaxy [420]. Most would know him as the whimsical storyteller who brought us the catchphrase "Don’t Panic!", but he did more than that. Adams was a deep thinker with a rare talent for seeing the world differently. He could take an idea and explain it in ways so annoyingly obvious that a deep-seated humor would be released. "Flying is learning how to throw yourself at the ground and miss" [13] is one of his. Funny how it takes rocket scientists years of study to come to the same conclusion.

    If Adams and Albert Einstein had ever met, no doubt they would’ve become friends. Both had an insatiable, boyish curiosity, and both understood the true value of imagination. Indeed, you can almost hear Einstein coaching Adams in his famous quote:

    "Imagination is more important than knowledge. For knowledge is limited to all we now know and understand, while imagination embraces the entire world, and all there ever will be to know and understand"[14].

    Magically, he and Adams applied the power of imagination to prise open the security of our understanding. They looked at the world and told us what they saw, not what they were expected to see. For them, convention was less about rules and more about an invitation to explore the unknown. In Einstein’s case, he taught us how to see the world differently, while Adams did the same, although through the lens of a fictional alternative universe. What matters though, is that their perspectives were unique and useful. They simply told us their story. How blissfully joyful is that.

    That’s all we will do here. We’ll tell a story from a new perspective, a perspective that focuses on a somewhat unexpected theme. The strange thing is that you’ll probably already be familiar with it.

    The Age of Information

    Instinctively, we think we know what information is. And so we should — we’re surrounded by it. Smartphones, social media, TV, advertising, even that annoying voice on your GPS nagging at you to turn left or right. At that level, information is very familiar to us. We depend upon it continually and exchange it regularly. We communicate, coordinate, and control using it. Ask anyone what information means to them and you’re likely in for a long sitting. How they keep in touch with their family. How they get their news. How they schedule their lives. How they this… How they that…

    The arrival of the internet hasn’t helped. Like some catalyst, it’s hastened our addiction. Now we no longer use information simply to aid and advise. Today, we covet and crave it; we cherish and caress it. We are as much a part of it as it is of us. It’s not only essential to our existence but to our very identity as human beings. In so many ways, it is us.

    Many would recognize this picture and acknowledge the significance that information plays in our lives. Younger readers might even see this as the norm and wonder how the world got by without computers and the Web. But we did, and those who can think back to before 1997 might remember a very different world.

    The World Wide Web

    1997 was notable for several reasons, but few would expect that its events would prove critical to the story of information. True, one or two milestones were reached that year that might seem linked, like Microsoft® becoming the world’s most valuable company [15], but these are really just distractions. 1997 was actually a tipping point for the world’s interaction with technology. It was the year that the World Wide Web saw mass take-up.

    The number of internet users worldwide had skyrocketed since the birth of the Web in 1989. When Sir Tim Berners-Lee submitted his original proposal for what would become the World Wide Web [16] in March of that year, he merely saw it as an attempt to keep track of the growing mess of electronic documents and data at CERN — the European Laboratory for Particle Physics in Geneva. To this day he’s still shocked his proposal was accepted, let alone the whirlwind of change it would bring. But accepted it was, and on December 20, 1990 the world’s first Web server sprang into life. The first ever website was published on August 6th, 1991 [17] and, in 1993, CERN made the World Wide Web available on a royalty-free basis to the general public.

    Adoption of the Web was slow at first and although the internet had 14 million users by 1993, only 130 websites had ever been registered. Widespread use didn’t really start until around 1995 [18] when studies suggest the Web grew by around 758%. Growth the following year was even more staggering, topping out at 996%. But most experts acknowledge that the increase of 334% in 1997 really saw the Web come into its own. That’s probably not just due to the number of online users, but also the effect of early adopters spreading the word and setting up the Web’s skeleton of foundational services, like search engines. They helped lower barriers to access and offered enough motivation for inexperienced users to experiment online.

    The fuse of the Web’s explosive growth was likely ignited in September 1994, when the National Center for Supercomputing Applications launched its Mosaic™ browser. Dubbed the internet’s killer application, it was the first program to display images in line with website text. Big names like IBM® and Microsoft entered the market the following year and followed suit, shortly after being joined by the Netscape Communications Corporation — formed when one of Mosaic’s creators left to set up his own company. Over the next few years, a vicious battle broke out for market dominance, speeding up innovation and sucking in creativity. Alongside, many household names began to emerge.

    Amazon™ started trading in July 1995, for instance, followed by eBay™ in September of that year. Early search engines, like Yahoo™ and AltaVista™, also came online between 1994 and 1995, with Google™ starting somewhat late in 1998.

    1994 was also key. Berners-Lee left CERN and founded the World Wide Web Consortium, or W3C® for short, a group of technical freethinkers who had the foresight to recognize what the Web would become. They cared passionately about the Web’s neutrality and the openness of its underlying construction. As a result, while on the outside the Web evolved organically, its center was protected by a Jedi-like council of technical specialists. This combination of evolution and leadership helped nurse the Web as it emerged. The Consortium is still very much alive today, happy to hide behind the scenes, ever so gently nudging the Web in the right direction.

    But the modern-day story of information doesn’t end in 1997. By the turn of the millennium, the Web had grown to just over three million websites and, by the time Wikipedia™ was born in 2001, it had reached nearly 30 million. Around that time, and long before websites like Facebook™ took off, several pioneers were already talking seriously about the idea of a social Web. In truth, social interaction had always been at the Web’s core, but the early innovators built on previous experience to bring social networks to life. For example, as early as 2000, a team at the W3C had started work on the Friend of a Friend project [19], aimed at bringing like-minded individuals together online. Likewise, several similarly themed websites blinked into existence and then faded. But soon our need as social beings spilled over and the rush of social network sites stuck. History would have it that their success was serendipitous, but in fact they were a very obvious consequence of the Web’s need to evolve into something new. Websites like Friends Reunited™ led the charge in June 2000, then Facebook, and later Twitter™ in March 2006. These not only changed the way we interact, but they dramatically sped up change in most forms of natural language. Because of the need to type conversations, rather than speak them, online users tapped in underused characters like @ and # and re-purposed them to shorten the writing process. In truth, it was more of a borrowing from computer lingo than a re-purposing, but the point is that no one suggested or dictated it. Rather, it emerged through common consensus. It was a consequence of the digitalization of the world in which we now live. It was just as much a product of natural selection as each one of us is ourselves.

    Hardware, Humans, and Universal Darwinism

    As the Web’s software coiled itself around the human condition, the hardware supporting it didn’t stand still. Computer processors became more powerful while shrinking in size and cost. And all the time, the network of cables connecting the internet kept expanding. Smothering. Becoming umbilical in its support of our increasing greed for technological advance.

    That meant two things. First, the reach of global communications networks began to erode any meaningful notion of social boundaries, be they geographic, ethnic, political, or whatever. Second, the price of more than adequately powerful computers fell easily within the reach of interested individuals. It was about commoditization lowering barriers. It was a freedom thing, a revelation and revolution in one, and in the hands of the common people. Where once it would have taken a roomful of gadgetry to connect to the internet, by the start of the 1990s a suitcase-sized machine would do. By the end of that decade, laptops had taken over, but the real shift came when hardware could comfortably fit into the hand. Then, in the early years of the millennium, mobile phones started to dominate and a cascade of unstoppable change had begun.

    Similarly, the history of speech over the airwaves is long and eventful, but its most recent chapter probably starts with the introduction of a mobile handset by Motorola® in April 1973. Back then such devices were unwieldy, with Motorola’s prototype weighing in at 1.1 kg and measuring a massive 23 cm long [20]. Other manufacturers joined in and by the late 1990s a vibrant market was up and running. As processors continued to shrink in size and increase in power, handsets not only became more manageable but gained in functionality too. First came trivial widgets like calculators and simple games, but by 1998, mobiles were accessing the Web. That opened opportunities for untold technical creativity, a fact not lost on those at the Apple™ corporation. In 2007, they changed the game with the introduction of the iPhone™. Not so much a communications device as an intimate personal computer, it was an instant success. It not only went on to become the most profitable consumer product in history [21], but transformed the mobile phone industry. Customers loved it, not least because Apple cleverly placed the iPhone at the center of an ecosystem primed for expansion and growth. Its small packages of handy software were named apps, and each was purpose-built to lessen the burden of a small piece of the user’s average daily workload. It became like having a digital servant in your pocket. iPhones were built to be indispensable and Apple knew it. The idea was a winner from the start. Ingeniously, most apps didn’t interact directly with other apps across the network, instead demanding that their users be in on the act. At once, that created a network of dependence between people, phones, software, and developers. Overnight, a race to the top had started, born out of nothing more than pure digital gluttony. It was like letting some techno-genie out of the bottle, and an almost perfect catalyst to speed up both technical adoption and social change.

    At that point, and under normal rules, mainstream science would have become interested. Experts would have consulted their textbooks and no doubt looked up various tables. But in the end, they’d all have agreed. This would have been something familiar to them. They’d have seen similar empirical evidence before and allowed the biologists, zoologists, and ecologists to push to the front. This had to be evolution, and everyone knew it. But the evolution of what, and how could the teaching of Charles Darwin be involved? Under all recognized definitions, evolution needs some form of life present to take place at all, and certainly at the rates being seen. The guidance demanded biology at the very least. That was the baseline, the God-given condition at any rate. Talk about evolution and you were discussing the very essence of life itself. That had to be impossible. How could technology so clearly change in this way? It was about copper wires and silicon chips, not flesh and blood.

    The establishment seemed confused. Something had to be wrong.

    But no; there was nothing wrong at all. It was just that science itself needed to refocus to see the obvious. Biology was there in the mix all along, but in an unfamiliar way. To the sociologists it sort of made sense, but they didn’t quite have a way to explain it. It wasn’t about flesh and blood, DNA, or genes per se. It was about ideas spreading from person to person, group to group. It was about technology acting as a spark for social change, and it needed a new branch of science, a sociotechnical science, to explain it. This was evolution taking place across bodies, not within them. Society was starting to apply technology to advance humanity. The ideas of Charles Darwin had just stepped up to planet-scale.

    Before long, the academics and engineers began to catch up, but those involved in the birth of modern sociotechnical science still often argue about its exact inception. Some refer to a specific conversation in a pub in Edinburgh in 2006,[1] while others believe that the germ of the idea came about in a meeting at an airport hotel in Boston, Massachusetts in 2005.[2] But whichever way, the right minds soon came together and through their collaborative insight things became clearer.

    In the end, all agreed that information transfer is at the very core of the evolutionary process itself. Earlier experts, like Richard Dawkins, had already said as much [22]. The fact that networks like the Web don’t look like the double helix of DNA is irrelevant. The Web is merely a collection of concepts, connections, languages, and protocols that don’t contain any notion of physical embodiment. Its deliberate separation from its internet infrastructure sees to that, and by such a disjoint it’s possible to argue that the Web is simply a huge, entangled map of connections.

    But does it really have the qualities needed for life and evolution?

    Don’t be fooled. Classical Darwinian theories might well be bound to the realities of biology, but more recently accepted ideas are not. In 1976, Dawkins presented a far more ethereal update to accepted thinking. In this, he outlined the idea of selfishness in genes [22], promoting the notion that they act only for themselves and therefore only replicate for their own good. He also introduced the important distinction between replicators and their vehicles. In the most obvious sense one might expect such replicators to be genes themselves, but that’s not necessarily always the case. It’s the information held within them that is more the real replicator, and the gene’s physical structure simply its carrier. It’s the information that’s important and not the mechanism used to support it. A replicator is therefore anything of which copies can be made, and that includes completely virtual capital like ideas, concepts, and even purely abstract information. A vehicle is therefore any entity that interacts with the environment to undertake or assist the copying process [23]. So, in a modern-day sociotechnical sense, any concepts embodied in Web-like content can be seen as replicators, and both human beings and software as their vehicles. Ideas on the wire are all that matters for sociotechnical evolution to take place.

    Genes are, of course, replicators in a more general sense; selfish replicators that drive evolution in the biological world. But Dawkins believed that there’s a more fundamental principle at work. He suggested that wherever it arises, anywhere in the universe, be that real or virtual, all life evolves by the differential survival of replicating entities[22]. This is the foundation for the idea of Universal Darwinism, which encompasses the application of Darwinian thinking beyond the confines of biological evolution.

    At the end of his book, The Selfish Gene, Dawkins asked an obvious, yet provocative, question: Are there any other replicators on our planet? The answer, he claimed, was yes. Staring us in the face, although still drifting clumsily about in its primeval soup of culture and technology, is another replicator — a unit of imitation [22].

    This is an important jump, as it raises evolution above any level needing physical contact. It thereby makes digitally connected networks an eminently suitable place for universal evolution. In his search for further explanation, however, Dawkins stumbled, needing a name for the concept he’d just invented; a word that conveyed the idea of a unit of cultural transmission. A unit of imitation. In the end he settled on the term meme from a suitable Greek root, a name chosen for its likeness to the very word gene itself.

    As examples of memes, Dawkins suggested tunes, catchphrases, clothes fashions, ways of making pots, or of building arches. He mentioned specific ideas that catch on and propagate around the world by jumping from brain to brain. He talked about fashions in dress or diet, and about ceremonies and technologies — all of which are spread by one person copying another. In truth, although he wouldn’t have realized it at the time, Dawkins had captured the very essence of the Web, a virtual memetic framework that would evolve just as all natural life had done before it.

    In a very short space, Dawkins laid down the foundations for understanding the evolution of memes. He discussed their spread by jumping from mind to mind and likened them to parasites infecting a host in much the same way that Web-like networks can be considered. He treated them as physically realized living structures and showed that mutually assisting memes will gang together in groups, just as genes do. Most importantly though, he treated the meme as a replicator in its own right and for the first time presented to the world an idea that would profoundly influence global society. Dawkins had predicted the future of the Web before it was even born.

    In this way, when one looks at economic history, as opposed to economic theory, technology is not really like a commodity at all. It’s much more like an evolving ecosystem. Innovation rarely happens in a vacuum. It’s usually made possible by other innovations already in place. A laser printer, for instance, is basically just a photocopier with a laser and a little computer circuitry added in to tell it where to etch on the copying drum for printing. So, such printers become possible when one has computer technology, laser technology, and photo-reproducing technology together. One technology simply builds upon the inherent capabilities of the others. In short, technologies quite literally form a richly connected network. Furthermore, such networks are highly dynamic. They can grow in a way that’s essentially organic, as when laser printers give rise to desktop publishing software and desktop publishing opens up a new niche for sophisticated graphics programs. There’s literally an evolutionary cascade taking place. As a result, technology-based networks can undergo bursts of spontaneous creativity and massive extinction events, just like biological ecosystems. Imagine a new technology, like the automobile, which displaces the horse as the primary mode of low-cost travel. Along with the popularity of the horse goes the primary need for blacksmiths, the Pony Express, the watering trough, the stables, and so on. Whole subnetworks of dependent technologies and support systems collapse in what the economist Joseph Schumpeter called a gale of creative destruction. But along with the automobile comes a new generation of change — paved roads, petrol stations, drive-through fast-food chains, motels, traffic police, and traffic lights. A whole new network of goods, services, and associated technologies begins to flourish, spawning ripples of evolutionary sequences elsewhere, each one filling a niche opened up by the redundant goods, services, and technologies that came before. The process of technological change is not just a mimic of natural eco-processes; it is the exact mechanical origin of life itself.

    Neoclassical economic theory assumes that systems like the economy are entirely dominated by negative feedback — the tendency for minor effects to die away and have no lasting impact on the wider environment. This propensity has traditionally been implicit in the economic doctrine of diminishing returns [24]. But, as has been realized in more recent times, there’s also positive feedback, or increasing returns, present — influences that bubble up and build on each other, leading to higher plateaus of stability. These not only help promote significant change like trends, but also help to explain the lively, rich, and spontaneous nature of many real-world systems.
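
    As a toy illustration of the difference (a sketch only, not a model drawn from the economics literature cited here): let x_0 be the size of a small disturbance and x_n its size after n rounds of feedback, with 0 < λ < 1. Then

    \[
    x_n = (1 - \lambda)^n x_0 \to 0 \quad \text{(negative feedback)}, \qquad
    x_n = (1 + \lambda)^n x_0 \to \infty \quad \text{(positive feedback)}.
    \]

    Under negative feedback every disturbance decays geometrically toward nothing, which is the world of diminishing returns; under positive feedback the same disturbance compounds on itself, which is how small advantages snowball into trends.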

    Technical Dependence

    Today, the network of digital dependence around us is almost beyond belief, being the near-instantaneous result of technology fusing with society through rapid evolutionary osmosis. Where once the Web and mobile technologies were separate, digital technologies like the iPhone saw them come together. Like opposing poles of two powerful magnets brought close, the attraction was simply overwhelming and something new snapped into place.

    As of 2014, there were almost as many cell phone subscriptions[3] as there are people on the planet,[4] and it took a little over 20 years for that to happen. In 2013, there were some 96 cell phone service subscriptions for every 100 people in the world. Shouting is likely the next-most widespread communications technique [25].

    Statistics specific to the US are equally astonishing in that, as of 2016, approximately 95% of all US citizens owned a mobile phone, most of which were smartphones. The split of phone types across a mix of demographics is also telling, showing that mobile phone use is just about evenly spread across gender and age groups. Furthermore, as of early 2018, there were close to 1,316,000,000 known websites online and 3,821,000,000 users with a dedicated internet connection — that’s over half the world’s population.


    Figure 1. Percentage Mobile Phone Use in the US (by Phone Type) [417]


    Figure 2. Percentage of US Adults who Own Mobile Phones (by Phone Type and Mixed Demographic)

    So, couple the fact that the Web is well on its way to absorbing significant portions of humanity’s joint knowledge, with the raw processing power that is now inherent to the internet, and a confluence the likes of which we have never experienced before is plain. As Gustavo Cardoso, Professor of Information and Communication Sciences at ISCTE, Lisbon, has been quoted as saying: "We are in the presence of a new notion of space, where physical and virtual influence meet each other, laying the ground for the emergence of new forms of socialization, new lifestyles, and new forms of social organization" [7].


    Figure 3. Number of Websites Worldwide[5]

    All this amounts to the creation of a planet-scale machine of which we are a part, an amalgam of the social and technical, capable of channeling information on demand to us all, no matter who we are, where we live, or when we need it. Historians have referred to this as a revolution. Not surprisingly they reached out to its driving force, named it the Information Revolution, and put it on par with other pivotal periods in history. By far the most famous of these is the Industrial Revolution, which saw a radical transformation from an agrarian and handicraft economy to one dominated by industry and machine-based manufacturing. It began in Britain in the 18th century and from there spread to other parts of the world. Although used earlier by French writers, the term Industrial Revolution was first popularized by the English economic historian Arnold Toynbee to describe Britain’s economic development from 1760 to 1840. Since Toynbee’s time, the term has been more broadly applied [26].

    Comparing the Industrial and Information Revolutions

    The Industrial Revolution brought about dramatic changes in nearly every aspect of British life, including demographics, politics, social structure, and the economy. With the growth of factories, vast numbers of people were drawn toward urban centers. As a result, the number of cities with a population over 20,000 in England and Wales grew from 12 in 1800 to nearly 200 by the close of the century [27].

    Technological change also led to the growth of capitalism, with factory owners and those who controlled the means of production rapidly becoming wealthy. They were the technologists of their day and by 1900, their arrival not only doubled the purchasing power in Great Britain, but saw the total national income increase by a factor of ten.

    This sweeping transformation further led to upheaval in the nation’s political structure, with agrarian landowners — a leftover from the previous agrarian revolution — being overtaken by industrial capitalists as leaders of the nation’s economy and power base. These nouveaux riches sent shockwaves across Britain’s upper-class society, having gained their fortunes rather than inheriting them. It was a battle backed by technology. Where once agricultural know-how and handed-down privilege had ruled, they were rapidly overtaken by innovation in manufacturing and entrepreneurial spirit. As with all technical advances, the new relegated the old and forever changed its surroundings.

    Working conditions were often poor for those employed in the new factories. Workplaces could be overcrowded and unsafe. Men, women, and children were put to work at survival wages and in unhealthy and dangerous conditions. Ultimately, that led to new legislation to protect the rights of the poor and, over time, helped even out living standards. Workers themselves also helped by starting to protect their own interests, in particular by forming the first trade unions.

    Overall, the success of the technological change that started in Britain became so profound that the country took over as the world’s leading power for more than a century.

    Jump forward only a few decades and we see the start of what would become the Information Revolution. In 1936, Alan Turing published his now famous paper, On Computable Numbers, with an Application to the Entscheidungsproblem, and presented a set of ideas in support of a universal computer that could perform any conceivable mathematical computation [28]. No one back then could have foretold the brush fire that would follow as the speed of technical innovation began to accelerate off its back.

    If one compares the changes brought about by the Information Revolution with those of the Industrial Revolution, then remarkable similarities can be found. Whereas people flocked to the cities to improve their circumstances in the 18th and 19th century, we now see similar clustering taking place in the virtual world, with nine of the Web’s top 100 websites being connected via social networking [29]. Facebook stands out, for instance, ranking as the Web’s third most popular website, after Google and YouTube™. As of late 2017, there were over 2.07 billion monthly active Facebook users worldwide, of whom, on average, 1.37 billion logged on daily. Overall, 84% of Facebook users regularly accessed the site via mobile phone [30]. Furthermore, Facebook ranked as the sixth most popular app on iPhone in 2017, eclipsed only by two other social networking tools in Snapchat™ and Instagram™ [31]. Instagram, a mainly mobile photo sharing network, reached 800 million monthly active users by September 2017 [32].

    Figures from 2019 are even more staggering: there were approximately 3.48 billion people using social media at the time. Of those, 3.26 billion predominantly accessed it via mobile phone [33]. To put that into context, that’s 94% of all active users, or 43% of the entire world’s population.

    Not only does this make it overwhelmingly clear that we use the Web to communicate with those we know, want to know, or wish to influence, but we also use it to obtain information we see as being essential to our everyday existence. Of the top 100 websites ranked in April 2017, just under 25% were search engines [34], with Google being the most popular [35]. It now processes over 40,000 search queries per second on average, which translates into over 3.5 billion searches per day and 1.2 trillion searches per year worldwide [36]. As of January 2018, each individual search could reference at least 48 billion known webpages [37], but that excludes large areas of the Web that the engines can’t reach, so is almost certainly a gross underestimate. Some studies even suggest that as little as 4% of the information on the Web is detectable by search engines like Google or Yahoo [38]. This subset of the Web is known as the visible or surface Web for obvious reasons. The rest goes by the name of the dark or deep Web. Whatever it’s called, the volumes of information out there are mind-blowingly huge.
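
    As a rough check on those throughput figures, taking 86,400 seconds in a day and 365 days in a year:

    \[
    40{,}000 \times 86{,}400 \approx 3.5 \times 10^{9} \text{ searches per day}, \qquad
    3.5 \times 10^{9} \times 365 \approx 1.26 \times 10^{12} \text{ searches per year,}
    \]

    which sits comfortably alongside the 3.5 billion daily and 1.2 trillion yearly searches quoted from [36].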

    Further similarities with the Industrial Revolution can be found with just a little more digging. For instance, five of the top ten wealthiest individuals in the world, as of 2017, were either technologists or ran companies associated with the transfer of information [39]. In July of that year, Amazon’s founder Jeff Bezos surpassed Bill Gates, the founder of Microsoft, as the richest person in the world, with an estimated net worth of $90.6 billion[6] [40]. That gives him more wealth than the individual annual Gross Domestic Product of each of the bottom 191 economies in the world. Or to put it another way, his wealth is on par with that of the whole of Slovakia or the Ukraine, according to figures from the International Monetary Fund [41].

    Today, the richest people in the world possess eye-watering levels of wealth, but even though recent history has seen many improvements in living standards, advances in technology have only served to accentuate economic and social polarization. In 1900, for instance, about 15% of the UK’s population lived in poverty; by 2000 that figure had climbed to 17%. Furthermore, although life expectancy in the UK increased by an average of 30 years over that period, thanks to improvements in standards of living and advances in medicine, changes in lifestyle, such as poor diet and declining levels of physical exercise, led to an increase in certain diseases [42]. Moreover, a 2014 study looking at employment figures in Silicon Valley, considered by many to be the technology capital of the world, concluded that the number of homeless people in that region provided one of its most visible signs of poverty, citing median income in the area to be around $94,000 per annum — far above the US median of $53,000. Yet about 31% of jobs in Silicon Valley paid $16 per hour or less that year — below what is needed to support a family in an area with notoriously expensive housing. The poverty rate in Santa Clara County, in the heart of Silicon Valley, for instance, was around 19% at that time, according to calculations that factor in the high cost of living [43].

    As for further similarities, again we see governments and philanthropists working to protect against the hazards of innovation. For example, 1973 saw the first introduction of data protection legislation in Sweden, with the UK following in 1984. Across the European Union, General Data Protection Regulation came into effect in May 2018, aiming to strengthen and unify data protection for all its citizens. Alongside, the United States now has around 20 sector-specific or medium-specific national privacy or data security laws, and hundreds of such laws among its 50 states and territories — California alone has over 25 state privacy and data security laws, for instance. Overall, as of 2018, information privacy or data protection laws that prohibit the disclosure or misuse of information about private individuals are in place in over 80 countries and independent territories around the world. This includes nearly every country in Europe and many in Latin America, the Caribbean, Asia, and Africa [44].

    Wrong

    Taking all this into account, it’s easy to understand why, according to some theories of Human Social Development at least, we’re now accelerating through the fourth epoch of technological advancement,[7] an age that’s seen information catapult the capabilities of modern-day humans far beyond those of our ancestors. Today, information is so pervasive and dominant that we’re absolutely dependent upon it. Its availability overwhelms us. Today, information is everywhere, right?

    Wrong. Oh, so very wrong.

    About that Cake

    To understand this strong objection, we need to be a little picky. Suggesting that information is everywhere today inadvertently implies that it wasn’t yesterday, or at any time before that. It creates the impression that information has only just become important. As we’ll see, that simply isn’t the case.

    To explain why, we’ll need to consider four things. First is history, and the fact that common accounts of it can often be misleading, but we’ll come to that shortly. The next two are closely linked. They concern matters of scale, relating to the very large and the very small. And then there’s the cake, the Douglas Adams-like information cake introduced at the start of this chapter. Let’s not forget about that. To start with, let’s get straight to it and bring on the cake. Or, more precisely, let’s start by picturing a cake in our mind’s eye. A single unit of cake that is. Just a cake. Nothing more, nothing less. Let’s begin there.

    It doesn’t matter what type or size of cake, just as long as it’s round. Perfectly round cakes turn out to be the best, but anything approximating a round cake will do just fine.

    Now, imagine yourself slicing that cake into two perfectly equal halves. That done, imagine yourself doing it again so that both halves are split into quarters. Keep going. In fact, repeat until your knife can no longer cut through the thinnest slice before you. At that point, how many pieces of cake do you have? A large number, say, 32 at a guess? That might feel like a decent amount, but in the grand scheme of things 32 is actually tiny — close to irrelevant, in fact.

    So, let’s hypothetically replace your knife with one that’s magical and can just keep cutting, no matter how thin the cake slices get. Now try the experiment again and this time just keep going. After just a few attempts you’ll have cut your cake down into slices too thin to eat properly. A few more and it’s possible to imagine the knife struggling to cut through individual crumbs. Nevertheless, persevere. Eventually, you’ll get down to individual molecules in the cake’s makeup. Atoms will follow next, and so on. From there, imagine your knife slicing out subatomic particles, then particles within subatomic particles, until, at some arbitrary point, the idea of dissecting any further just becomes meaningless. At that scale we simply don’t have the language, let alone the understanding, to describe what we might have left.
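
    As a quick arithmetical aside: each cut doubles the count, so after n cuts there are 2^n pieces, and that total runs away very quickly:

    \[
    2^{5} = 32, \qquad 2^{10} = 1{,}024, \qquad 2^{100} \approx 1.3 \times 10^{30}.
    \]

    Every ten further cuts multiply the tally by roughly a thousand, which is why the counting gets out of hand long before the slicing itself stops making sense.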

    Now, again stop to consider the number of whatever units of cake you have left. It’s bound to be a big number and certainly somewhat larger than 32. But is that number so large that you might not be able to count all the cake fragments before you? And are those pieces so small that you couldn’t possibly count them anyway? Might we be talking about a number close to infinity?

    Infinity

    Before we get to the point of this experiment, it turns out that the great mathematician, scientist, and philosopher Galileo Galilei worried about a similar problem a long time ago. In his mind, he replaced our information cake with the idea of a pure circle. However, for our purposes, the two are completely interchangeable. Like us, he was interested in the business of slicing, or subdividing, repeatedly. But with Galileo, when his slices got infinitesimally small, he thought of drawing a bigger circle around the one he already had. That allowed him to trace his cut-lines out so that they dissected this newly drawn circle as well.


    Figure 4. Galileo’s Thought Experiment

    By doing so, Galileo realized that no matter how small the curved edge of the slices in his inner circle (distance a in Figure 4), the corresponding distance (distance b in Figure 4) in his outer circle would always be greater, thereby demonstrating that thinner slices would always still be possible. Likewise, if he kept on cutting until all the sections in his outer circle became infinitesimally small, all he had to do was add yet another circle outside his outermost circle to allow more dissection. This led him to realize that the task of cutting can never be complete, and that whenever his slice-count got close to infinity, there would always be more slices to go. Because of this, Galileo came to realize that the singular idea of infinity is essentially pointless. It’s mind-bendingly impossible to pin down. In actuality, there’s not just one infinity, but an infinite number of infinities — an idea later formalized by the mathematician Georg Cantor. In essence, Galileo had figured out that infinity is not a number at all, but rather a whole range of numbers stretching out in an unmeasurable and uncountable procession. Paradoxically, that leaves infinity as being both mathematically useful and useless at the same time, simply because it isn’t open to precise description. As a result, many mathematicians hate it as it’s one way to prove that the overall consistency of their profession is unobtainable.
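
    Put in modern notation (a gloss on the argument above, using the labels from Figure 4): if a slice spans an angle θ measured at the common center, it marks out an arc of length a = rθ on the inner circle of radius r and an arc of length b = Rθ on the outer circle of radius R, so that

    \[
    \frac{b}{a} = \frac{R}{r} > 1 \quad \text{whenever } R > r.
    \]

    However finely the inner circle has already been divided, a larger enclosing circle therefore always offers longer arcs that can be divided again.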

    This, along with other similar frustrations, will become important as our story unfolds.

    Just One Drop in the Ocean

    There are two key reasons why our digression into infinity is important here. The first is about precision, and the second is scale.

    If you boil mathematics down to its bare bones, all you’ll find is a handful of axioms, concepts, practices, and symbols. Of the symbols, many might be familiar, like plus or minus, but by far the most important is the humble equals sign. We’re taught to use it early, being led to see it as an invitation to do work. With it, mathematicians apply their seal of approval by saying, trust me, we can figure this out. It declares that one side of an equation is exactly the same as the other. In even simpler terms, it tells us that one side can be transformed into the other without bits hanging over the sides or being left out. It signifies a process that provides precise change. In one word, it tells us that computation is possible. Hold that thought as it will become really important soon.

    Now, think back to our cake experiment and this time consider the cake as representing the entire universe. Once more, imagine slicing all the way down as far as it’s possible to go. This is where the interplay between physics, mathematics, and, as it turns out, computation and information becomes significant. And, more interestingly to our story perhaps, it’s at this point that the idea of infinity starts to cause havoc. This is where our information cake comes into its own.

    As it happens, some very successful and important physical theories rely heavily on the ideas of infinity and information. Einstein’s famous ideas on general and special relativity, for instance, need infinity to behave like a real number in order to describe the world around us. Time and time again, experiments have proved these theories to be accurate when applied properly. Other scientific theories, however, simply hate the thought of infinity and can’t function correctly in its presence. Quantum mechanics, in part, is one, with certain branches falling apart when the idea of infinity comes close. But quantum mechanics and relativity famously don’t get on, for now at least. Relativity is only comfortable when asked about the universe at massive scales, while quantum mechanics comes into its own when considering the subatomic. Because of this, the idea of infinity is one of the biggest blockers we have in science today, and, as we’ll see, it’s also closely linked to modern ideas on information. For that reason, both are becoming increasingly relevant in trying to achieve science’s ultimate aim. That is, to unify relativity with quantum mechanics and create a grand theory of everything — a single idea that can seamlessly link the innards of the atom to the infinite sea of galaxies above our heads. A scientific theory that will be so provable and unquestionably embracing that all ideas before it will be considered plain folly. In that one wish, science bares its soul and declares itself as much an artist as any Grand Master. It merely seeks to capture the universe’s beauty and distill it down so that we might truly understand. And the idea of information is at its very core. In truth, it always has been. Information has always been important and in a much, much more profound and fundamental way than we’re commonly familiar with today.

    Layering the Cake to Achieve Ultimate Unification

    Of all the routes that might bring unification to relativity and quantum mechanics, those with gravity pointing the way are, as we’ll see, the most promising. And, from the point of view of computation and information at least, one stands out in particular.

    Loop Quantum Gravity sees the tangible world around us as not being forged from space, time, atoms, and so on, but rather as an emergent property from a much more fundamental supporting framework. This is built from what are known as Covariant Quantum Fields [45], which are field-like structures not dissimilar to those found in magnetism. Nevertheless, unlike magnetic fields, covariant quantum fields don’t live in spacetime — the amalgamation of space and time at the center of Einstein’s relativity theories — but rather live, so to speak, one on top of the other; fields upon fields, just like the layers in an imaginary sponge cake. So, in a world according to Loop Quantum Gravity, the reality we perceive at our scale is just a blurred and approximate image of one of these quantum fields in the gravitational field [45].

    With that in mind, we’re now ready to question the role of computation and, specifically, information in all of this.

    The Universe as One Big Computer

    When considering the universe as a whole, it’s possible to think of it as being just one big computer. The largest computer that has ever existed or ever will, in fact. It takes in information in the form of matter and transforms it into the reality we experience around us. If you can accept that, then two rather simple, yet insightful, questions lie within reach:

    Question 1: If we could make time stand still, and stop it in a heartbeat, as it were, how much computation would be left in the universe?

    Forgetting for a moment that we’ve just suggested that time doesn’t exist, the answer is none of it. Computation, in this context, is simply another word for transformation, and transformation, all forms of transformation that is, must take place over time, not regardless of it. In other words, computation cannot exist without time and, likewise, because of its dependence on computation through the equals sign we mentioned earlier, nor can any form of meaningful mathematics.

    Now, ask yourself the second question:

    Question 2: If we could make time stand still, stop it in a heartbeat, as it were, how much information would be left in the universe?

    The answer is all of it, as information exists independent of any computation going on around it or taking place over it.

    This profoundly helps explain the value of information in science and highlights its
