
Unsupervised: Navigating and Influencing a World Controlled by Powerful New Technologies
Ebook · 561 pages · 7 hours

About this ebook

How a broad range of immensely powerful new technologies is disrupting and transforming every corner of our reality—and why you must act and adapt

Unsupervised: Navigating and Influencing a World Controlled by Powerful New Technologies examines the fast-emerging technologies and tools that are already starting to revolutionize our world. Beyond that, the book takes an in-depth look at how we arrived at this dizzying point in our history and at who holds the reins of these formidable technologies, mostly without any supervision. It explains why we, as business leaders, entrepreneurs, academics, educators, lawmakers, investors, users, and responsible citizens, must act now to influence and help oversee the future of a technological world. Quantum computing, artificial intelligence, blockchain, decentralization, virtual and augmented reality, and permanent connectivity are just a few of the technologies and trends considered, but the book delves much deeper, too. You’ll find a thorough analysis of energy and medical technologies, as well as cogent predictions for how new tech will redefine your work, money, entertainment, transportation, home, and cities, and what you need to know to harness and prosper from these technologies.

Authors Daniel Doll-Steinberg and Stuart Leaf draw on their decades of building and implementing disruptive technologies, investing and deploying funds, and advising business leaders, governments and supranational bodies on change management, the future of work, innovation and disruption, education and the economy to consider how every area of our lives, society, economy and government will likely witness incredible changes in the coming decade. When we look just a bit further into the future, we can see that the task facing us is to completely reinvent life as we know it—work, resources, war, and even humanity itself will undergo redefinition, thanks to these new and emerging tools. In Unsupervised, you’ll consider what these redefinitions might look like, and how we, as individuals and as part of society, can prevent powerful new technologies from falling into the wrong hands or being built to harm us.

  • Get a primer on the foundational technologies that are reshaping business, pleasure, and life as we know it
  • Learn about the lesser known, yet astonishing, technologies set to revolutionize medicine, agriculture, and beyond
  • Consider the potential impact of new tech across business sectors—and what it means for you
  • Gain the knowledge and inspiration you need to harness your own power and push the future in a direction that is good for all of us, not just the few
  • Explore the best ways to invest in the changes these technologies of the future will bring about

This is a remarkably thorough and comprehensive look at the future of technology and everything it touches, shining a light on many unsupervised technologies and their unsupervised oligarchy of masters.

Language: English
Publisher: Wiley
Release date: Jul 25, 2023
ISBN: 9781394209910


    Book preview

    Unsupervised - Daniel Doll-Steinberg

    PART I

    Important Technologies You Cannot Ignore

    Twenty years from now you will be more disappointed by the things that you didn't do than by the ones you did do.

    —Mark Twain

    The Cognitive Revolution is happening. It is a period of exploration as bold and daring as the eras of great explorers in the past. It is being propelled by the rapidly accelerating and iterative cycle of human thought and invention. Humanity conceptualizes, builds, and uses, which leads to the next round of conceptualization, building, and use, and on and on and on…. The difference today is that much of the technology being created has the capability itself of continuing to propel the cycle independently—with or without direct human intervention. These technologies have evolved on the back of silicon chips and computing power that doubles roughly every two years (an exponential increase). Yet as new materials and applications become fully integrated, these exponential increases will actually seem small. At the same time, our own cognitive ability is increasing linearly, if at all. The clear implication is the potential for computers to rapidly reach or even surpass many human capabilities, certainly on a skill-by-skill basis, as machines did vis-à-vis our physical abilities during the Industrial Revolution.
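    The gap between those two growth curves is worth making concrete. A quantity that doubles every two years undergoes ten doublings in twenty years, multiplying roughly a thousandfold, while a linear process merely adds a fixed increment each period. A toy calculation (our own illustration, not from the book):

```python
# Exponential vs. linear growth over the same 20-year span.
# Doubling every two years gives 10 doublings: 2**10 = 1024x the start.
# A linear process adding one unit every two years grows far less.
years = 20
doublings = years // 2           # one doubling per two-year period
exponential_factor = 2 ** doublings
linear_total = 1 + doublings     # start at 1, add 1 per period
print(exponential_factor, linear_total)  # 1024 11
```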

    Perhaps the hardest thing to fully comprehend and internalize today is that we are already beyond the period of mere imagination—vast numbers of practical applications already influence and, in some cases, dominate our daily lives. The technologies behind these changes are immensely varied and it would require an encyclopedic tome to encompass them all. The following, however, are the key building blocks to an untold number of far-reaching applications. And these technologies are built on huge data sets that existing and future technologies are capturing and will exploit:

    Collection and storage. As the amount of available data on virtually any topic is so enormous, only computers can now capture and store it all. And, once captured and stored, even if used properly today, data can be misused tomorrow, as data does not die. For example, massive amounts of information were collected, both transparently and otherwise, to minimize the dangers surrounding both 9/11 and COVID-19. Although some of this has certainly been helpful, as the immediate crises waned, vast amounts of data remain, much of which was previously assumed to be private—even sacrosanct. How will that data be used? How is it being used now? And which entities around the world have encrypted versions and are just waiting for the key? Unfortunately, ex ante, there is no way of predicting what becomes of all this sensitive information and what intrusions might result.

    Yet government and corporate collection of data is only a fraction of the total amount being created. Much (possibly most) of all accessible personal and professional information has been offered up voluntarily by individuals and organizations. It began with text, then photos and video, and now genetic information to trace ancestry. What next?

    Personal communications and daily online activity. Originally through an extensive use of emails and texts, then through a love affair (perhaps addiction) with emerging social media (Facebook, Twitter, Instagram, TikTok, etc.), and now through an endless array of ubiquitous applications (Siri, Alexa, Google, Amazon, etc.), an unimaginably vast trove of information is being generated that can be used in untold applications going forward. Most of those responsible for publicly (or privately) posting information, using search engines, or engaging in online transactions have little sensitivity or even interest in how useful this data might be to others in the future and for what purposes it may be used.

    In the educational and professional world, technology has allowed for much simpler creation and sharing of information. The reasons are generally benign, but the rate of growth has been astounding. Many organizations, long before the pandemic, saw great value in group communication. COVID-19's evolution of work from home (WFH) massively accelerated this process, forcing corporations and individuals to find effective ways of staying in contact, managing processes, and developing solutions that many had not fully explored before. Without a doubt, this has created vast amounts of recorded data that would not have existed if traditional communications through regular in-person meetings, classroom interactions, chats by the coffee machine, beers after work, etc., had not been (temporarily?) abandoned.

    And, while the amount of available data has already mushroomed, the powerful techniques of collecting even more and storing it have grown at an even greater rate.

    Management and use. Up until this point, humans have been the primary actors in this phase. Either we have manipulated and used the data ourselves or have created algorithms to do so. We are, however, on the verge of a major shift of process/power, to where computers become able to manage data themselves and, initially through machine learning, take over some of the decisions on when and how the data is used (without any direct human intervention). Is this good or bad?

    We are bombarded with data. State and nonstate actors use arrays of bots to overwhelm us with contradictory and even fake data. Governments have entire divisions that use technology to collect and manipulate data to affect the behavior of both citizens and foreign nationals. The use of unsubstantiated or carefully selected data by questionable or self-anointed experts results in bad or biased analyses that end in suboptimal actions and results. Would a more systematized approach run by computers with built-in checks and balances be a potentially better solution? In concept, perhaps, but circular logic brings us back to our current reality: behind every computer program is someone who needs to write the code or create the protocols that allow the system to generate future iterations. Yet the effective anonymity of these someones is astonishing, given the power they wield.

    Our goal with this book is not to provide an encyclopedic resource on innovative and disruptive technologies; that would be impossible, as they are moving so fast. Rather, we aim to shine a light on the development and application of technologies you have probably heard about that are accelerating exponentially and already impacting us in uncountable ways—ways that will possibly change our entire lives in a timeframe many will find surprisingly short. Our greatest concern is that, despite people having a general familiarity with many of these areas, extremely few are fully aware of the following:

    The breadth and scope of these technologies and their impacts.

    The extraordinary concentration of power in a few individuals around the world who, at present, operate with remarkably few constraints.

    To fully engage, it is useful to understand the underpinnings of these technologies, associated domains, and constructs.

    The first section is divided into four parts:

    Foundational technologies. The intelligent systems behind the technological revolution. These include artificial intelligence, quantum computing, and their communication systems.

    Enabling technologies. The tools that will allow the foundational technologies to rapidly deliver change. These include blockchain, decentralized autonomous organizations, tokenization, and cryptocurrencies.

    Consumer-facing hardware deployments of foundational technologies. Our focus is primarily on technologies already in use (or becoming general purpose). There are many areas that could change the world, but if the time frame is likely to be decades out, they are outside this scope. These include robotics, automation, Internet of Things, virtual reality, and augmented reality.

    Important uses of technologies you cannot ignore. There is probably no sector that does not use, directly or indirectly, at least one of the technologies described in the book. We have chosen three very different areas, two vital to the future of humanity (energy and healthcare) and one that is little examined or understood but has significant potential ramifications (the metaverse).

    We see this as similar to a marathon. The fastest runners are lined up near the starting line and know what to expect; first-timers are at the back and can take significant time even to get to and cross the starting line. We just want everyone to be able to finish comfortably.

    CHAPTER 1

    Foundational Technologies

    Logic will get you from A to B. Imagination will take you everywhere.

    —Albert Einstein

    Artificial intelligence, quantum computing, and advanced always-on communications are early-stage technologies that are set to completely change the way we think, interact, and operate. They will lead to the complete re-imagining of the way we interact with technology and one another through the reinvention of access points—our devices. They will quickly be able to replicate human processes and improve them to allow us to solve problems we can only dream of today.

    Artificial Intelligence

    Creativity is intelligence having fun.

    —Albert Einstein

    Artificial intelligence represents either the underpinning or, at least, an important tool set for the rest of these technologies and for topics well beyond. An obvious first question is, what is artificial intelligence? A currently acceptable definition might be, by simulating human cognitive functions, the ability of a computer or robotic system not only to undertake tasks conducted by human beings, but to learn from the accomplishment of these tasks and improve their execution over time. And, over the last few years, there has been enormous progress in the ability to deliver on this. For instance, Google's new language model, PaLM, can explain why a joke is funny, a key characteristic of human common sense and reasoning; OpenAI's latest system, ChatGPT, can have conversations and answer queries; and DALL·E 2 can create a photorealistic image from a text prompt such as "draw a cat driving a car." A vast number of programs and apps use AI.

    What makes a functional definition of artificial intelligence so hard is the necessary combination of several technologies that create and develop computers simulating human levels of intelligence. In the beginning, it referred to technologies that could pass the Turing test—i.e., whether a computer, in a conversation with a human, can avoid having that person realize it is not human. Although this definition remains relevant today, using a range of tools, products, and services (e.g., machine learning and natural language processing), AI has expanded to include many other features of our brains.

    AI breaks down into three principal categories:

    Weak or narrow AI. This is the most common and is designed to tackle a single/specific problem. Some very widespread apps like Alexa and Siri are based on this.

    Strong or artificial general intelligence (AGI). AGI is the theoretical ability to perform, comprehend, and learn any human intellectual task and to be able to continue to learn contextually in a way similar to the human mind.

    Artificial super intelligence (ASI). While still theoretical, ASI is the ability to surpass the human mind and, as posited by Stuart Russell and Peter Norvig, to be able to think and act not like humans but rationally (however that might be defined). Too many dystopian futures (The Terminator, The Matrix, etc.) come to mind to make this outcome seem in any way appealing.

    There are currently two primary theoretical approaches to AI: deterministic and stochastic.

    The deterministic approach always gives the same outcome given the same input, with the underlying AI always following the same sequence. At this stage of evolution, the deterministic algorithms are the most practical as they are binary, which enables them to be run efficiently on standard computers; however, the level of complexity they can handle (although increasing rapidly) is currently limited.

    Stochastic models are probabilistic processes that simultaneously analyze many pathways, with no completely certain, predetermined outcome. The term stochastic refers to random patterns that cannot be predicted precisely but can be analyzed statistically. Stochastic algorithms are therefore useful for problems with vast amounts of data, which may include hidden or incomplete data. Today, there are two main categories of stochastic AI development: symbolic and connectionist.
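    The distinction can be made concrete with a toy search problem. A deterministic algorithm follows the same sequence on the same input and always returns the same result; a stochastic one samples at random, so its result is only probabilistically reliable. (A minimal sketch; the function names and data are our own illustration, not from the book.)

```python
import random

def deterministic_search(values):
    # Scans in a fixed order: the same input always yields
    # the same answer via the same sequence of steps.
    best = values[0]
    for v in values[1:]:
        if v > best:
            best = v
    return best

def stochastic_search(values, samples=3, seed=None):
    # Samples candidates at random: repeated runs may inspect
    # different subsets, so the result is only probabilistically
    # close to the true maximum.
    rng = random.Random(seed)
    return max(rng.choice(values) for _ in range(samples))

data = [3, 17, 8, 42, 5, 29]
print(deterministic_search(data))   # always 42
print(stochastic_search(data))      # varies from run to run
```

    Real stochastic AI operates on vastly larger spaces, but the trade-off is the same: give up a guaranteed, repeatable answer in exchange for tractability on problems too large to search exhaustively.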

    Sometimes referred to as the first wave, symbolic AI is an approach that mirrors one practical way the human brain learns—symbols are used to represent our world and play a vital role in our thought and reasoning processes.

    The second wave is connectionist AI, which is an approach developed from attempts to understand how the human brain works at the neural level and, in particular, how people learn and remember. It is sometimes referred to as neuron-like computing.
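    The contrast between the two waves can be sketched in a few lines of code. A symbolic system encodes knowledge as explicit, human-readable rules; a connectionist system encodes it as numeric weights learned from examples, here a single neuron (perceptron). This is our own toy illustration, not an example from the book:

```python
# Symbolic: knowledge written down as explicit, human-readable rules.
def symbolic_is_spam(text):
    rules = ["free money", "act now", "winner"]
    return any(phrase in text.lower() for phrase in rules)

# Connectionist: knowledge stored as learned numeric weights in a
# (here, single-neuron) network trained from labeled examples.
def train_perceptron(examples, labels, epochs=20, lr=0.1):
    n = len(examples[0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            pred = 1 if activation > 0 else 0
            err = y - pred  # perceptron update rule
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err
    return weights, bias

# Features: [contains "free", contains "meeting"]; label 1 = spam.
X = [[1, 0], [1, 0], [0, 1], [0, 1]]
y = [1, 1, 0, 0]
weights, bias = train_perceptron(X, y)
print(symbolic_is_spam("FREE MONEY inside!"))  # True (rule fired)
print(weights, bias)  # learned, not hand-written
```

    The symbolic classifier is transparent but brittle (it knows only the rules it was given); the connectionist one generalizes from data but stores what it knows in weights no human wrote.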

    Stochastic modeling usually requires computers able to handle massive data sets; today, these are very powerful supercomputers. The future will be in the hands (actually, circuitry) of quantum computers. As intelligent computers learn to improve their probabilities of achieving desired outcomes—through recognizing and reacting to their environments—they could reasonably be considered aware. Clearly this opens enormous opportunities to benefit humanity, but it also creates enormous threats. As AI systems evolve independent cognitive abilities, they are also likely to develop independent agendas. To the extent those agendas do not parallel human goals, dystopian images come to mind. It is particularly scary to think that humans (even those programming and using AI output) are generally oblivious to the huge advances and power computers are gaining. Often referred to as the AI effect, this is the situation where machines and/or programs evolve new cognitive AI skills, but these advances are diminished and classified merely as machine learning; or, as Tesler's Theorem states, AI is whatever hasn't been done yet.¹ Our parallel theorem is the ostrich response: Just because you ignore a situation doesn't mean it isn't happening.

    Our introduction briefly touched on OpenAI's ChatGPT platform. Even in its current iteration, it is rapidly becoming part of our collective way of solving problems. However, there are many ways the developers of generative pre-trained transformer (GPT)–based models will improve them:

    Training the models on larger and more diverse data sets, allowing them to deal with more comprehensive language patterns and better handle diverse topics and writing styles.

    Integrating external sources of information, such as internal corporate and external databases, to enable the models to provide more detailed and accurate answers to certain types of questions.

    Incorporating additional user-specific information, such as context or background knowledge, enabling the model to generate more accurate and relevant responses to user requests.

    Using techniques such as transfer learning or fine-tuning to pre-train the models on specific tasks or domains, thereby allowing them to perform better on specific use cases, such as customer service.

    Improving the models’ ability to handle dialogue or multiturn conversation, by considering the context and history of the conversation.

    Individuals and organizations using the underlying technology of these AI foundation models, while customizing them by adding their own data and learning, will achieve a greater range of specifically targeted outcomes.
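    One of the improvements listed above, integrating external sources of information, is commonly implemented as retrieval-augmented prompting: before the model is queried, relevant snippets are looked up in an external knowledge base and prepended to the prompt so the model can ground its answer in them. A minimal sketch (the knowledge base, the naive word-overlap scoring, and the function names are our own illustration; production systems typically rank by embedding similarity instead):

```python
import re

# External "knowledge base" the model itself was never trained on.
KNOWLEDGE_BASE = [
    "Refund requests must be filed within 30 days of purchase.",
    "Support is available Monday through Friday, 9am to 5pm.",
    "Enterprise customers have a dedicated account manager.",
]

def tokenize(text):
    # Lowercase and split on non-alphanumerics for crude matching.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, corpus, k=1):
    # Rank documents by how many words they share with the question.
    q = tokenize(question)
    scored = sorted(corpus, key=lambda doc: len(q & tokenize(doc)),
                    reverse=True)
    return scored[:k]

def build_prompt(question):
    # Prepend the retrieved snippets so the model can ground its answer.
    context = "\n".join(retrieve(question, KNOWLEDGE_BASE))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("When must a refund request be filed?"))
```

    The same pattern underlies the customization point made above: an organization keeps its proprietary data in the retrieval layer rather than in the model itself, so the general-purpose model can answer domain-specific questions without retraining.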

    It is worth noting that the field of natural language processing is evolving at breakneck speed and new methods are being developed all the time that will be (are being) used to evolve better models. The launch and widespread adoption of ChatGPT and the general public zeitgeist will only accelerate this.

    Potential existential threats aside, different applications of AI impact many aspects of day-to-day life, some positive, some negative, and many a bit of both. A serious battle is underway to determine which functions should be performed by humans and which ones are better automated. Goldman Sachs has already suggested that generative AI could expose as many as 300 million jobs to automation. There is a common misconception that for AI to replace a human role, it needs to look, feel, and function like a human being or mind. This is patently false. Just observe most robots attempting to match human motor skills—walking, catching balls, or playing sports, let alone interacting seamlessly with us; they are clumsy and often very funny indeed. But, as the Industrial Revolution proved, matching or bettering a specific skill was enough to replace a large part of the labor force in the fields and factories. This movement continues, soon to be supplemented in the driver's seat, cockpit, trading room, and operating theater—each job eliminated simply by accomplishing one skillset better or more cost-effectively than we humans can.

    AI has already had an enormous set of stealth impacts on us in most areas of our lives—security, medical, logistics, entertainment, transportation, social media, information delivery, etc.; yet we are still at the incipient stage. As computing power grows exponentially (or more) and applications become increasingly sophisticated, we will all become more aware of the implications. But, at that stage, influencing the outcome might no longer be feasible; only the way it impacts us will be. AI can be our friend, but it also poses significant short- and longer-term risks. As the ancient philosopher Laozi said, There is no greater danger than underestimating your opponent, and this is an opponent we ourselves are
