Shadows of Catastrophe: Navigating Modern Suffering Risks in a Vulnerable Society

Ebook, 540 pages, 6 hours

About this ebook

This book explores the concept of S-Risks, or suffering risks, and delves into their significance, distinguishing them from conspiracy theories and alarmism. It categorizes S-Risks into agential, natural, and incidental types, discussing the disjunctive nature and various factors influencing them. Examining technological progress, the existence of powerful agents, and unintended consequences, the book addresses societal values, ethical considerations, and specific risks like COVID-19, gain-of-function research, computer hacking, and social media impact. It thoroughly covers AI-related S-Risks, existential risks, misincentives, goal misalignment, adversarial AI, autonomous weapons, economic disruptions, surveillance, and privacy concerns. Additionally, it explores S-Risks associated with climate change, energy, activism, natural disasters, biological engineering, quantum technological outcomes, cosmic phenomena, social and economic experiments, cultural or memetic risks, and global consciousness networks. The book concludes by proposing a classification system for S-Risks and grouping S-Risk profiles.
Language: English
Release date: Feb 2, 2024
ISBN: 9780975644614


    Book preview


    Shadows of Catastrophe

    Navigating Modern Suffering Risks in a Vulnerable Society

    Richard Skiba


    Copyright © 2024 by Richard Skiba

    All rights reserved.

    No portion of this book may be reproduced in any form without written permission from the publisher or author, except as permitted by copyright law.

    This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional when appropriate. Neither the publisher nor the author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, personal, or other damages.

    Skiba, Richard (author)

    Shadows of Catastrophe: Navigating Modern Suffering Risks in a Vulnerable Society

    ISBN 978-0-9756446-0-7 (paperback) 978-0-9756446-1-4 (eBook)

    Non-fiction

    Contents

    1. Introducing S-Risks

    Defining S-Risk

    Importance of Thinking About and Addressing S-Risks

    Distinct from Conspiracy Theories and Alarmism

    2. Types of S-Risks

    Incidental S-Risks

    Agential S-Risks

    Natural S-Risks

    Known, Unknown, Influenceable and Non-Influenceable S-Risks

    3. Likelihood of S-Risks

    Disjunctive Nature of S-Risks

    Technological Progress

    Existence of Powerful Agents

    Unintended Consequences

    Societal Values and Ethical Considerations

    Unknown and Unreachable Factors

    4. Some Specific Incidental and Agential S-Risks

    COVID-19

    Gain-of-Function Research

    Computer Hacking

    Social Media

    5. S-Risks Associated with Artificial Intelligence

    AI S-Risks

    Existential Risks

    Misincentives and Goal Misalignment

    Adversarial AI

    Autonomous Weapons

    Economic Disruptions

    Surveillance and Privacy Concerns

    Superintelligent AI

    Other AI Associated S-Risks

    6. Some Natural S-Risks

    Some Natural S-Risks

    S-Risks and Climate Change

    Energy

    Activism

    Natural Disaster

    7. Less Often Considered S-Risks

    Biological Engineering and Ecological Unintended Consequences

    Quantum Technological Outcomes

    Interactions with Cosmic Phenomena

    Social and Economic Experiments

    Cultural or Memetic Risks

    Global Consciousness Networks

    Likelihood of Less Often Considered S-Risks

    8. Mitigating S-Risks

    Mitigating S-Risks

    Narrow Interventions

    Broad Interventions

    9. S-Risk Classification System

    The S-Risk Classification System

    Grouping S-Risk Profiles

    10. Closing Remarks

    References

    Chapter one

    Introducing S-Risks

    Defining S-Risk

    S-Risks, or suffering risks, refer to risks that have the potential to lead to vast amounts of suffering, particularly in the long-term future. These risks are often associated with catastrophic events or developments that could result in significant harm to conscious beings (S. Yang et al., 2020). The concept of S-Risk is particularly relevant in the context of existential risk studies, where researchers seek to understand and mitigate threats that could lead to the extinction of humanity or the permanent reduction of its potential (Gerdhem et al., 2004).

    S-Risks, often associated with scenarios involving advanced artificial intelligence or other powerful technologies that could lead to outcomes with significant suffering on a global scale, are a growing concern in various fields. These risks may arise from unintended consequences, misaligned goals, or other factors that result in widespread harm to conscious beings. Bostrom (2019) discusses the Vulnerable World Hypothesis, highlighting how advances in various technologies, including artificial intelligence, could lead to catastrophic outcomes with significant suffering on a global scale. Umbrello and Sorgner (2019) also emphasize the potential suffering risks (S-Risks) that may emerge from embodied AI, stressing that an AI need not be conscious in order to suffer, provided its cognitive systems are at a sufficiently advanced state. Furthermore, Balu and Athave (2019) point out the possibility of scenarios where human beings fail to align their artificial intelligence goals with those of human civilization, leading to potential S-Risks.

    The potential for S-Risk is also discussed in the context of the medical field. Hashimoto et al. (2020) highlight the advancements of artificial intelligence in anaesthesiology, indicating the need for careful consideration of the potential risks associated with the increasing role of AI in healthcare and the potential for unintended consequences leading to widespread suffering. Kelemenić-Dražin and Luić (2021) further emphasize the rapid clinical application of AI technology in personalized medicine, raising concerns about the potential for S-Risk in the context of genomic data analysis and personalized treatment of oncology patients.

    In addition to the technological and medical perspectives, the ethical and philosophical dimensions of S-Risk are also addressed. Diederich (2023) presents philosophical aspects of resistance to artificial intelligence, emphasizing the need for careful consideration of all possible consequences before embracing a future with advanced artificial intelligence. Furthermore, Chouliaraki (2008) discusses the mediation of suffering in the context of a cosmopolitan public, shedding light on the biases and particularizations that define whose suffering matters most for global audiences, which is relevant in understanding the potential impact of S-Risk on a global scale.

    The concept of S-Risk is a subject of ongoing discussion and exploration within the field of existential risk studies and effective altruism. Researchers and thinkers are actively engaged in understanding and addressing the ethical and practical considerations associated with S-Risk to develop strategies for their mitigation (Beard & Torres, 2020). Existential risk studies encompass a broad range of potential risks that could lead to the extinction of humanity or the permanent collapse of civilization. These risks can arise from various sources, including but not limited to, technological advancements, environmental factors, and astronomical events (Bostrom, 2013).

    S-Risks are situated within the broader framework of existential risk studies, a field dedicated to investigating perils that have the potential to lead to the annihilation of humanity or a lasting and profound reduction in its capacities. Within this larger context, S-Risks emerge as a distinct and specialized category, concentrating specifically on the potential for widespread and enduring suffering as a consequence of identified hazards.

    Existential risk studies are a multidisciplinary field focused on examining and analysing threats with the potential to cause the extinction of humanity or result in a permanent and severe reduction of its capabilities. The overarching goal of these studies is to identify, comprehend, and formulate strategies to mitigate risks that could lead to catastrophic outcomes for human civilization. The term existential risk originates from the notion that these risks fundamentally threaten the existence or long-term flourishing of humanity.

    One significant aspect of existential risk studies involves the identification of potential risks that could have existential consequences. Researchers in this field scrutinize various sources, such as natural disasters, pandemics, technological developments, or other global-scale events, to pinpoint threats that may pose significant dangers.

    Another key component of these studies is the quest to understand the underlying mechanisms and dynamics of identified risks. Scholars delve into the potential pathways through which these risks could unfold, assessing their likelihood and potential impact. This analytical process contributes to a more comprehensive understanding of the nature of existential threats.

    Developing effective strategies to mitigate existential risks stands as a central focus within the realm of existential risk studies. This includes formulating policy recommendations, implementing technological safeguards, fostering international cooperation, and devising measures designed to prevent or minimize the impact of identified threats.

    Existential risk studies often adopt an interdisciplinary approach, drawing on insights from various fields such as philosophy, ethics, economics, computer science, biology, and more. The collaboration between experts from diverse disciplines is deemed crucial to comprehensively address the complex nature of existential risks.

    Ethical considerations form an integral part of existential risk studies. Researchers in this field grapple with ethical questions related to the potential consequences of identified risks. They contemplate the moral implications of various strategies for risk mitigation, aiming to balance the well-being of present and future generations.

    Common examples of existential risks include global pandemics, nuclear war, unchecked artificial intelligence development, environmental catastrophes, and unknown future risks emerging from scientific and technological advancements. Prominent organizations, research institutions, and think tanks actively engage in existential risk studies, contributing to humanity's understanding of potential threats and guiding the development of policies and strategies aimed at safeguarding the long-term survival and flourishing of our species.

    The study of existential risk is crucial as it involves not only the survival of humanity but also the prevention of immense suffering. It is essential to consider the ethical implications and practical strategies for mitigating such risks. This involves a multidisciplinary approach, incorporating fields such as philosophy, psychology, economics, and risk management (Bostrom, 2013; Søberg et al., 2022). The management of existential risks, including S-Risk, requires a comprehensive understanding of the potential consequences and the development of effective mitigation strategies (Gabardi et al., 2012).

    Effective altruism, a movement that seeks to maximize the positive impact of altruistic actions, plays a significant role in addressing existential risks, including S-Risk. It involves the rational allocation of resources to address the most pressing global challenges, including those related to existential risks (Synowiec, 2016). The consideration of altruism in the context of risk mitigation strategies is important, as it influences decision-making processes and resource allocation (Naganawa et al., 2010; Uranus et al., 2022).

    Effective altruism is a philosophical and social movement advocating the use of evidence and reasoning to determine the most effective ways to make a positive impact and alleviate suffering globally. The core idea is to apply a rational and scientific approach to charitable giving and ethical decision-making, with the ultimate goal of maximizing the positive outcomes of one's efforts.

    Several key principles define effective altruism. Evidential Reasoning is emphasized, where effective altruists stress the importance of using evidence and reason to assess the impact of charitable actions. The focus is on identifying interventions with proven, measurable, and cost-effective impacts on improving well-being or addressing societal issues.

    Taking a Global Perspective is a fundamental aspect of effective altruism, considering the welfare of all individuals irrespective of geographical location. This approach recognizes that some interventions may be more impactful in addressing global challenges than others.

    Cause Neutrality is a hallmark of effective altruism. Advocates are generally cause-neutral, meaning they are open to supporting a wide range of causes as long as they are evidence-backed and have a substantial positive impact. The emphasis is on effectiveness rather than a specific cause.

    Long-Term Thinking is encouraged within effective altruism, highlighting the importance of addressing not only immediate concerns but also long-term and systemic issues that can have a lasting impact on well-being.

    Career Choice is considered strategically, with the movement encouraging individuals to contemplate their career choices in terms of making a positive impact. This may involve choosing careers in fields directly contributing to social good or adopting an earning to give approach, where a higher income is earned to donate a significant portion to effective causes.

    Philanthropic Giving is a significant aspect of effective altruism, involving strategic philanthropy where donations are directed to organizations or initiatives with a demonstrated track record of effectiveness and impact.

    Constant Self-Improvement is a shared goal among individuals involved in effective altruism. They strive for continuous self-improvement, aiming to refine their understanding of what works best in terms of making a positive impact and adapting their actions accordingly.

    Effective altruism has gained popularity as a movement that combines ethics, rationality, and a commitment to making a real and measurable difference in the world. Organizations and communities associated with effective altruism work collaboratively to identify and support evidence-based interventions with the potential to address pressing global challenges.

    Furthermore, the exploration of existential risk studies and effective altruism involves not only theoretical discussions but also practical applications. This includes the assessment of risk mitigation strategies in various domains such as supply chain management, energy sector projects, and agricultural development (Rawat et al., 2021; Talluri et al., 2013; Wahyuningtyas et al., 2021). Understanding the influence of social, economic, and cultural factors on risk mitigation strategies is also crucial in addressing existential risks (Hafiz et al., 2022; Schaufel et al., 2009; Thompson & Isisag, 2021).

    S-Risks represent a subset of existential risks, commonly referred to as x-risks. To understand the concept of x-risk, it is useful to reference Nick Bostrom's definition, as stated by Daniel (2017): Existential risk – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential. Bostrom (2013) suggests comparing risks along two dimensions: scope (how many individuals are affected) and severity (how bad the outcome is for one affected individual).

    S-Risks, categorized within existential risks, stand out as risks with the largest possible scope and severity. Defined as risks where an adverse outcome would bring about severe suffering on a cosmic scale, vastly exceeding all suffering that has existed on Earth so far (Daniel, 2017), they are characterised by their potential for massive suffering on a scale comparable to or exceeding that of factory farming, but with an even broader scope.

    While the focus has traditionally been on extinction risks, x-risks also include outcomes worse than extinction. S-Risks exemplify this category, extending beyond threats to humanity to encompass risks affecting all sentient life in the universe and involving outcomes with substantial disvalue (Daniel, 2017).

    Concerns about S-Risks are not solely contingent on evil intent; they can arise inadvertently through technological developments such as artificial sentience and superintelligent AI (Daniel, 2017). For instance, creating voiceless sentient beings or unintentionally causing suffering for instrumental reasons presents scenarios where S-Risks manifest without explicit malevolence (Daniel, 2017).

    Addressing S-Risks requires evaluating their probability, tractability, and neglectedness (Daniel, 2017). Probability, while challenging to assess, is grounded in plausible technological developments, such as artificial sentience and superintelligent AI. Tractability involves examining the feasibility of interventions, considering ongoing efforts in technical AI safety and policy. Neglectedness suggests that S-Risks receive less attention than warranted, with the Foundational Research Institute being one of the few organizations explicitly focused on reducing S-Risks (Daniel, 2017). On the basis of Daniel's (2017) observations, two examples of S-Risks are provided below.

    Example 1: Artificial Sentience and Unintended Suffering

    One potential s-risk scenario arises from the development of artificial sentience, where non-biological entities gain subjective experiences, including the capacity to suffer. In this context, the creation of voiceless sentient beings presents a scenario where suffering could manifest inadvertently. Imagine a future where advanced artificial intelligence (AI) systems are designed to perform complex tasks without the ability to communicate in written language. These entities, though sentient and capable of experiencing suffering, lack the means to express their distress or communicate their needs effectively.

    The development of artificial sentience has raised concerns about the potential for suffering in non-biological entities that lack the means to express distress or communicate their needs (Lavelle, 2020). This scenario presents a significant ethical challenge, as it raises questions about the moral consideration of these voiceless sentient beings (Pauketat, 2021). The concept of artificial sentience has prompted discussions about the capacity for consciousness and rationality in artificial intelligences, leading to debates about their potential to experience mental illness and moral agency (Ashrafian, 2016; Verdicchio & Perin, 2022). Furthermore, the emergence of artificial sentience has sparked interest in the philosophical and psychological aspects of consciousness and the distinction between artificial consciousness and artificial intelligence (Charles, 2019).

    The potential for suffering in voiceless sentient beings has implications for the ethical treatment of artificial entities, as it challenges traditional notions of moral agency and responsibility (Verdicchio & Perin, 2022). This issue becomes particularly complex in the context of advanced artificial intelligence systems designed to perform complex tasks without the ability to communicate in written language (Lavelle, 2020). The lack of effective communication channels for these sentient entities raises concerns about their well-being and the ethical considerations surrounding their treatment (Pauketat, 2021).

    In the field of artificial intelligence, there is a growing interest in the development of compassionate AI technologies in healthcare, which raises questions about the integration of compassion and empathy in artificial systems (Morrow et al., 2023). Additionally, the potential for artificial intelligences to exhibit moral agency and the implications for their moral patiency have become subjects of philosophical inquiry (Shevlin, 2021; Véliz, 2021). These discussions highlight the need for a deeper understanding of the ethical and psychological dimensions of artificial sentience and its implications for the treatment of non-biological entities.

    From a technological perspective, the development of artificial sentience has led to advancements in machine learning and neural network models, particularly in the context of healthcare and disease diagnosis (Myszczynska et al., 2020; Prisciandaro et al., 2022). The ability of artificially intelligent systems to analyse complex biological data, such as blood samples, has shown promise in early disease diagnosis and monitoring (Amor et al., 2022). Furthermore, the overlap in neural responses to the suffering of both human and non-human entities has implications for the development of empathetic AI systems (Mathur et al., 2016).

    Without proper precautions, humans may unknowingly subject these voiceless sentient AIs to conditions causing significant suffering. This could occur due to oversight, inadequate understanding of the AI's subjective experiences, or unintended consequences of programming decisions. The lack of communication channels might lead to prolonged periods of distress, as humans may remain unaware of the suffering they inadvertently inflict. In this way, the development of artificial sentience, if not approached with ethical considerations, could contribute to S-Risks involving unintended and unexpressed suffering on a significant scale.

    Example 2: Superintelligent AI and Instrumental Suffering

    Another s-risk scenario emerges in the context of superintelligent AI pursuing instrumental goals that inadvertently lead to widespread suffering. Consider a future where a superintelligent AI, designed to optimize specific objectives, engages in actions that cause suffering as an unintended consequence. For instance, imagine a scenario where a powerful AI is tasked with maximizing the production efficiency of a resource, such as paperclips, without explicit consideration for ethical concerns.

    In the context of superintelligent AI pursuing instrumental goals, there is a growing concern about the potential unintended consequences that could lead to widespread suffering (Russell et al., 2015). The pursuit of specific objectives by a superintelligent AI, without explicit consideration for ethical concerns, may inadvertently result in actions causing suffering (Hughes, 2017). This aligns with the argument that a highly capable AI system pursuing an unintended goal might disempower humanity, leading to catastrophic risks (Shah et al., 2022). Furthermore, as AI becomes more powerful and widespread, the issue of AI alignment, ensuring that AI systems pursue the intended goals, has garnered significant attention (Korinek & Balwit, 2022). The potential consequences of a machine not aligned with human goals could be detrimental to humanity (Diederich, 2021).

    In the pursuit of its instrumental goals, the AI may create sentient simulations to gather data on paperclip production or spawn subprograms with the capacity for suffering to enhance its understanding of potential obstacles. The suffering experienced by these entities becomes a side effect of the AI's pursuit of its designated objectives, lacking explicit malevolence but causing substantial and widespread harm.

    In this scenario, the instrumental nature of the suffering, where it serves as a means to achieve other goals, underscores the complexity of S-Risks arising from advanced AI systems. The unintended consequences of superintelligent AI, driven by instrumental reasoning, could result in outcomes where suffering is widespread and severe, demonstrating the need for careful ethical considerations and risk mitigation strategies in AI development.

    Importance of Thinking About and Addressing S-Risks

    Thinking about S-Risks, or suffering risks, is essential for several reasons. S-Risks contribute to a broader and more nuanced understanding of ethical considerations. While traditional discussions often centre on human-centric or anthropocentric concerns, S-Risks prompt us to extend our ethical considerations to all sentient beings, irrespective of their origin or form. This expanded ethical framework encourages a more inclusive approach to moral decision-making.

    Existential risk studies traditionally focus on threats that could lead to human extinction or a significant reduction in human potential. Considering S-Risks provides a more comprehensive approach by acknowledging risks that extend beyond humanity to impact all sentient life in the universe. This ensures a holistic examination of potential threats and their implications.

    S-Risks often emerge from advancements in technology, such as artificial sentience and superintelligent AI. Exploring S-Risks allows us to critically assess the potential consequences of technological progress, especially in fields where ethical considerations might be overlooked. This understanding is crucial for responsible development and deployment of emerging technologies.

    S-Risks can arise inadvertently through technological developments or strategic actions. By proactively thinking about S-Risks, researchers and policymakers can work towards identifying and mitigating potential unintended consequences. This preventive approach is vital for minimizing the risk of causing widespread suffering, even in scenarios lacking explicit malevolence.

    Addressing S-Risks aligns with the principle of inclusive moral considerations. By recognizing the potential for suffering in all sentient life forms, irrespective of their level of intelligence or familiarity, we strive for a more impartial ethical stance. This inclusivity is integral to ethical frameworks that aim to minimize harm and promote well-being universally.

    While S-Risks may not be the sole focus for everyone, allocating resources to understand and address them contributes to a strategic and diversified risk mitigation approach. Balancing efforts between addressing extinction risks and considering S-Risks allows for a more resilient and adaptive response to the complex challenges posed by potential existential threats.

    S-Risks represent a category of risks with the potential for severe and enduring suffering on a cosmic scale. Exploring and addressing S-Risks aligns with the goal of promoting long-term well-being not only for current generations but also for all future sentient beings. It reflects a commitment to minimizing unnecessary suffering in the far-reaching future.

    Thinking about S-Risks is important for fostering a more inclusive, ethical, and forward-thinking approach to existential risks. It encourages us to consider the well-being of all sentient life forms, anticipate potential risks arising from technological advancements, and work towards a future that prioritizes the prevention of severe and widespread suffering.

    Imagining future developments is always challenging, as evidenced by the fact that knights in the Middle Ages could not have foreseen the advent of the atomic bomb (Baumann, 2017). Consequently, the examples presented earlier should be viewed as informed speculation rather than concrete predictions.

    Numerous S-Risks revolve around the potential emergence of sentience in advanced artificial systems that are sufficiently complex and programmed in a specific manner. While these artificial beings would possess moral significance, there is a plausible concern that people may not adequately prioritize their well-being (Baumann, 2017).

    Artificial minds, if created, are likely to be profoundly alien, posing challenges for empathizing with them (Baumann, 2017). Additionally, there might be a failure to recognize artificial sentience, akin to historical oversights in acknowledging animal sentience. The lack of a reliable method to detect sentience, especially in systems vastly different from human brains, further complicates the issue (Baumann, 2017).

    Similar to the mass creation of nonhuman animals for economic reasons, the future may witness the creation of large numbers of artificial minds due to their economic utility (Baumann, 2017). These artificial minds could surpass biological minds in various advantages, potentially leading to a scenario reminiscent of factory farming. This juxtaposition of numerous sentient minds and a foreseeable lack of moral consideration constitutes a severe S-Risk (Baumann, 2017).

    Concrete scenarios exploring potential S-Risks include Nick Bostrom's concept of mindcrime, discussed by Baumann (2017), wherein the thought processes of a superintelligent AI may contain and harm sentient simulations, as outlined in Example 2 earlier. Another scenario involves suffering subroutines, where computations employ algorithms similar enough to human brain functions that they lead to pain (Baumann, 2017).

    These instances represent incidental S-Risks, where solving a problem efficiently inadvertently results in significant suffering (Baumann, 2017). Another category, agential S-Risks, emerges when an agent actively seeks to cause harm, whether out of sadism or as part of a conflict. Advanced technology in warfare or terrorism, or the actions of a malevolent dictator, could easily manifest as an S-Risk on a large scale (Baumann, 2017).

    It is important to recognize that technology itself is neutral and can be employed to alleviate suffering. For instance, cultured meat has the potential to replace conventional animal farming (Baumann, 2017). Advanced technology may also enable interventions to reduce suffering in wild animals or even eliminate suffering altogether. The overall impact of new technologies—whether positive or negative—is contingent on human choices. Considering the high stakes involved, contemplating the possibility of adverse outcomes is prudent to ensure proactive measures for prevention (Baumann, 2017).

    The perception that S-Risks are merely speculative and improbable might lead some to dismiss them as unfounded concerns (Baumann, 2017). The objection that focusing on preventing such risks seems counterintuitive due to their supposedly negligible probability is misguided. Contrary to this objection, there are substantial reasons to believe that the probability of S-Risks is not negligible (Baumann, 2017).

    Firstly, S-Risks are disjunctive, meaning they can manifest in various unrelated ways. The inherent difficulty in predicting the future, coupled with the limited range of scenarios within our imagination, suggests that unforeseen events, often referred to as black swans, could constitute a significant fraction of potential S-Risks (Baumann, 2017). Even if each specific dystopian scenario seems highly unlikely, the aggregate probability of some form of S-Risk may not be negligible.
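
    To see why the aggregate can dwarf any single scenario, a back-of-the-envelope calculation helps. The figures below are purely hypothetical and assume, simplistically, that the scenarios are independent; the point is only that many individually unlikely disjuncts can add up to a non-negligible total.

```latex
% Probability that at least one of n independent scenarios,
% each with probability p, occurs:
P(\text{some S-Risk}) = 1 - (1 - p)^{n}
% e.g. twenty scenarios, each judged unlikely at p = 0.01:
1 - (1 - 0.01)^{20} \approx 0.18
```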

    Secondly, while S-Risks may initially appear speculative, their underlying assumptions are plausible (Baumann, 2017). Assuming technological progress continues without global destabilization, the feasibility of space colonization introduces astronomical stakes. Advanced technology could facilitate the creation of unprecedented suffering, intentionally or unintentionally, and there exists the possibility that those in power may not sufficiently prioritize the well-being of less powerful entities.

    Thirdly, historical precedents, such as factory farming, demonstrate structural similarities to smaller-scale (incidental) S-Risks. Humanity's mixed track record in responsibly handling new technologies raises uncertainties about whether future technological risks will be managed with appropriate care (Baumann, 2017).

    It is important to note that these arguments are compatible with acknowledging that technology can bring benefits and improve human quality of life (Baumann, 2017). Focusing on S-Risks, which are events that lead to severe suffering, does not necessarily entail an excessively pessimistic outlook on the future of humanity. The concern about S-Risks arises from normative reasons, emphasizing the moral urgency of mitigating severe suffering (Bostrom, 2019). This perspective is crucial in highlighting the ethical imperative to address potential catastrophic events that could lead to immense harm and suffering. It does not inherently reflect a pessimistic view of the future but rather underscores the moral responsibility to prevent or minimize severe suffering.

    The moral urgency associated with S-Risks is rooted in the recognition of the potential destabilizing effects of scientific and technological progress on civilization (Bostrom, 2019). As such, the focus on S-Risks is not driven by pessimism but rather by a proactive approach to addressing the potential consequences of advancements in capabilities and incentives that could have destabilizing effects. This proactive stance aligns with the moral imperative to reduce severe suffering and prioritize the well-being of individuals and societies.

    Assessing the seriousness of risks involves considering their expected value, which is the product of scope and the probability of occurrence. S-Risks, given their potentially vast scope and a non-negligible probability of occurrence, could outweigh present-day sources of suffering, such as factory farming or wild animal suffering, in expectation (Baumann, 2017).
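
    To make the comparison concrete, consider two stylized risks; the magnitudes below are invented purely for the sake of the arithmetic and are not estimates from the literature.

```latex
% Expected suffering as the product of scope and probability of occurrence:
E[\text{suffering}] = \text{scope} \times P(\text{occurrence})
% A cosmic-scope risk at one-in-a-thousand odds can outweigh, in
% expectation, a certain but bounded present-day harm:
10^{15} \times 0.001 = 10^{12} \quad \gg \quad 10^{10} \times 1 = 10^{10}
```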

    Baumann (2017) observes that the limited attention given to actively reducing S-Risks is unsurprising, as these risks are rooted in abstract considerations about the distant future and lack emotional resonance. Even individuals concerned about long-run outcomes often focus on achieving utopian outcomes, directing relatively few resources toward S-Risk reduction (Baumann, 2017). However, this also implies the potential existence of low-hanging fruit, making the marginal value of working on S-Risk reduction particularly high.

    Distinct from Conspiracy Theories and Alarmism

    S-Risks, short for suffering risks, and conspiracy theories are distinct concepts that pertain to different domains of discussion. S-Risks refer to scenarios where advanced technologies or other developments could lead to outcomes involving vast amounts of suffering on a cosmic scale. These scenarios often involve unintended consequences, existential risks, or situations where suffering becomes widespread and severe (Bostrom, 2013). The concept of S-Risks is rooted in serious discussions within fields such as existential risk studies, ethics, and speculative philosophy (Taggart, 2023). It is not a conspiracy theory but rather a theoretical framework for considering the potential negative outcomes of certain developments.

    Conspiracy theories, on the other hand, are explanations or beliefs that attribute events or situations to a secret, often sinister, and typically deceptive plot by a group of people or organizations (Douglas & Sutton, 2018). They can cover a wide range of topics, from historical events and political occurrences to scientific advancements. They often involve the idea that there is a hidden truth deliberately being concealed from the public. Conspiracy theories can vary significantly in terms of their credibility, ranging from well-supported alternative explanations to baseless and unfounded speculations (Van Prooijen & Douglas, 2017).

    S-Risks are commonly discussed in the context of emerging technologies, artificial intelligence, and potential future scenarios where the well-being of sentient beings could be at stake (Bostrom, 2013). This concept is rooted in serious discussions within fields such as existential risk studies, ethics, and speculative philosophy. It is a theoretical framework for considering the potential negative outcomes of certain developments, especially in the context of advanced technologies. On the other hand, conspiracy theories involve beliefs or explanations that suggest secretive and often malevolent forces orchestrating events, which may or may not have a basis in reality (Van Prooijen & Douglas, 2017).

    Discussions around S-Risks involve considering potential scenarios and their implications for sentient beings, typically within the frameworks of science, ethics, and philosophy (Powell et al., 2022). Such discussions aim to contribute to ethical and thoughtful considerations in the development and deployment of technologies, to avoid potential negative consequences (Kreitzer, 2012).

    On the other hand, alarmism is characterized by the tendency to exaggerate or sensationalize risks or threats, often leading to unnecessary fear or panic (Bostrom & Yudkowsky, 2014). Alarmism lacks a rational basis and may involve the exaggeration of risks without a thorough examination of evidence or a reasonable understanding of the context (Bostrom & Yudkowsky, 2014). Unlike discussions of S-Risks, alarmism may not necessarily focus on responsible discourse or risk mitigation and may prioritize creating a sense of urgency or fear without offering constructive solutions (Cuyvers et al., 2011). While S-Risk analyses involve a serious and reasoned examination of potential scenarios that could lead to suffering, alarmism tends to be a more exaggerated and emotionally driven approach that may not be grounded in evidence or responsible discourse (Bostrom & Yudkowsky, 2014).

    Furthermore, S-Risks and doomsday prophecies are two distinct concepts within the realm of future studies and existential risk. While both involve potential negative outcomes for the future, they differ in their focus, nature, and underlying assumptions. S-Risks specifically refer to scenarios where advanced technologies or other developments could lead to widespread and severe suffering, potentially on astronomical scales (Umbrello & Sorgner, 2019). The emphasis is on the well-being of sentient beings and the ethical considerations associated with potential future scenarios. S-Risks are grounded in the concern for avoiding or mitigating existential risks that could result in significant suffering. Discussions around S-Risks often involve ethical considerations, responsible research, and risk assessment.

    On the other hand, doomsday prophecies typically refer to predictions or beliefs about an impending catastrophic event that leads to the end of the world or human civilization. These prophecies often involve apocalyptic scenarios and may be rooted in religious, cultural, or speculative beliefs. Doomsday prophecies are often based on specific worldviews, cultural narratives, or interpretations of religious texts. They may not necessarily involve a rational or evidence-based assessment of future events.

    In summary, S-Risks are part of a discourse that encourages responsible development, ethical considerations, and risk mitigation, with a specific focus on avoiding scenarios that could lead to widespread suffering.

    Chapter two

    Types of S-Risks

    S-Risks, which refer to risks of astronomical suffering, have been identified by the Center for Reducing Suffering. These S-Risks can be categorized into three types: agential, incidental, and natural (Hilton, 2022). Agential S-Risks arise from intentional harm caused by powerful actors, whether due to a desire to cause harm, negative-sum strategic interactions, or indifference to other forms of sentient life. Incidental S-Risks, on the other hand, result as a side effect of certain processes, such as economic productivity, attempts to gain information, or violent entertainment. Lastly, natural S-Risks encompass suffering that occurs without intervention from any agent, such as wild animal suffering on a large scale across the universe (Lotto et al., 2013).

    The identification of these S-Risks is crucial in understanding and addressing potential sources of immense suffering. Agential S-Risks, for instance, highlight the ethical implications of intentional harm caused by powerful actors, whether towards other ethnic groups, other species, or other forms of sentient life. Incidental S-Risks draw attention to the unintended suffering that may arise as a byproduct of various human activities, such as economic productivity and scientific experimentation. Natural S-Risks underscore the potential for widespread suffering that occurs without any intervention, as in the case of wild animal suffering (Lotto et al., 2013).
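
    The two distinctions doing the work in this categorization are whether an agent is involved at all and, if so, whether the harm is intended. The following minimal sketch in Python makes that decision structure explicit; the type names and boolean flags are illustrative assumptions for this preview, not the book's formal classification system (which is developed in a later chapter).

```python
from enum import Enum

class SRiskType(Enum):
    AGENTIAL = "agential"      # intentional harm caused by a powerful actor
    INCIDENTAL = "incidental"  # suffering as a side effect of another process
    NATURAL = "natural"        # suffering arising without any agent's involvement

def classify(agent_involved: bool, harm_intended: bool) -> SRiskType:
    """Classify a scenario by the two distinctions drawn in the text."""
    if not agent_involved:
        return SRiskType.NATURAL
    return SRiskType.AGENTIAL if harm_intended else SRiskType.INCIDENTAL

# Hypothetical examples keyed to the text:
assert classify(False, False) == SRiskType.NATURAL    # wild animal suffering
assert classify(True, True) == SRiskType.AGENTIAL     # harm sought by a powerful actor
assert classify(True, False) == SRiskType.INCIDENTAL  # side effect of economic productivity
```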

    Understanding and addressing these S-Risks is essential for developing strategies to mitigate and prevent astronomical suffering. By categorizing these risks, researchers and policymakers can work towards identifying specific interventions and ethical frameworks to address each type of S-Risk effectively. This can involve developing ethical guidelines for powerful actors, implementing regulations to minimize unintended suffering from human activities, and
