
Osiris, Volume 38: Beyond Craft and Code: Human and Algorithmic Cultures, Past and Present
Ebook, 751 pages, 9 hours



About this ebook

Perceptively explores the shifting intersections between algorithmic systems and human practices in the modern era.

How have algorithmic systems and human practices developed in tandem since 1800? This volume of Osiris deftly addresses the question, dispelling along the way the traditional notion of algorithmic “code” and human “craft” as natural opposites. Instead, algorithms and humans have always acted in concert, depending on each other to advance new knowledge and produce social consequences. By shining light on alternative computational imaginaries, Beyond Craft and Code opens fresh space in which to understand algorithmic diversity, its governance, and even its conservation.

The volume contains essays by experts in fields extending from early modern arithmetic to contemporary robotics. Traversing a range of cases and arguments that connect politics, historical epistemology, aesthetics, and artificial intelligence, the contributors collectively propose a novel vocabulary of concepts with which to think about how the history of science can contribute to understanding today’s world. Ultimately, Beyond Craft and Code reconfigures the historiography of science and technology to suggest a new way to approach the questions posed by an algorithmic culture—not only improving our understanding of algorithmic pasts and futures but also unlocking our ability to better govern our present.

Language: English
Release date: Jul 18, 2023
ISBN: 9780226827889



    Note: This e-book includes math tagged with MathML. For best display, use one of the recommended EPUB readers.

    Acknowledgments

    James Evans and Adrian Johns:

    Introduction: How and Why to Historicize Algorithmic Cultures

    Machinations: Craft, Code, and Beyond

    James Evans, Tyler Reigeluth, and Adrian Johns:

    The Craft and Code Binary: Before, During, and After

    Michael J. Barany:

    On Remediation: Media, Repair, and the Discipline of Fantasy in the Theory and Practice of Algorithmic Modernity

    Making and Breaking Rules: Prediction, Discovery, and Originality

    Stephanie Dick:

    The Marxist in the Machine

    Clare S. Kim:

    The Art and Craft of Mathematical Expression: Computational Origami and the Politics of Creativity

    Alex Csiszar:

    Provincializing Impact: From Imperial Anxiety to Algorithmic Universalism

    Reckoning with Reality: Problems of Design and Control

    Honghong Tinn:

    Between Magnificent Machine and Elusive Device: Wassily Leontief’s Input-Output Analysis and Its International Applicability

    Salem Elzway:

    Armed Algorithms: Hacking the Real World in Cold War America

    Xiaochang Li:

    There’s No Data Like More Data: Automatic Speech Recognition and the Making of Algorithmic Culture

    Unruly Assemblages: Recursive Work and Algorithmic Tenacity

    Matthew L. Jones:

    Users Gone Astray: Spreadsheet Charts, Junky Graphics, and Statistical Knowledge

    Alma Steingart:

    Statecraft by Algorithms

    Mike Ananny:

    Making Mistakes: Constructing Algorithmic Errors to Understand Sociotechnical Power

    Culture Encoded: Visions of Algorithmic Pasts, Presents, and Futures

    Hallam Stevens:

    Code and Critique: Ted Nelson’s Project Xanadu and the Politics of New Media

    Theodora Dryer:

    Settler Computing: Water Algorithms and the Equitable Apportionment Doctrine on the Colorado River, 1950–1990

    Ksenia Tatarchenko:

    Algorithm’s Cradle: Commemorating al-Khwarizmi in the Soviet History of Mathematics and Cold War Computer Science

    John Tresch:

    Afterword: Mashed between Code and Craft: So Many Pictures of Food

    Notes on the Contributors

    Index

    Osiris vol. 38 (2023): v–v.

    Acknowledgments

    The distant origins of this volume lie in a conference that took place at the University of Chicago in 2018. The conference marked a transition between two projects funded by the Mellon Foundation, one devoted to Disciplines and Technologies and the other to Algorithms, Models, and Formalisms. We are grateful to the foundation for its support, and to the Franke Institute for the Humanities at the University of Chicago for hosting both the conference and the two projects. Conversations at that meeting developed and ramified over the ensuing years, and we thank all who participated both at the event itself and in the discussions it inspired. In bringing this volume into being, we have benefited greatly from the assistance and advice of two editorial boards—the first comprising W. Patrick McCray and Suman Seth, the second Elaine Leong, Projit Mukharji, Ahmed Ragab, and Myrna Perez Sheldon—whose help at every stage has been invaluable. Two anonymous referees provided vital and insightful comments on an earlier version of the volume. In addition, Adrian Johns’s share of the project was largely carried out during sabbatical leave made possible by the support of the National Endowment for the Humanities under grant FEL-267420-20. Most of all we are grateful to our contributors, who have borne with a long publication process and whose hard work and creative imagination are evidenced in every page of this book.

    © 2023 History of Science Society. All rights reserved.

    Osiris vol. 38 (2023): 1–15.

    Introduction: How and Why to Historicize Algorithmic Cultures

    James Evans and Adrian Johns*

    *Evans: Department of Sociology, University of Chicago, Chicago, IL 60637; jevans@uchicago.edu. Johns: Department of History, University of Chicago, Chicago, IL 60637; johns@uchicago.edu.

    Abstract

    This introduction summarizes the themes, purposes, and objectives of the volume. Its thesis is that the prevailing mode of discussing the nature and impact of algorithmic culture in modern society evolved from existing trends in labor history and the history of science, one distinctive element of which is a sharp interpretative binary between craft (soft, human judgment) and code (hard, mechanistic logic). The volume suggests that this binary is no longer adequate. It suggests forging a new vocabulary of algorithmic and human diversity in the code-craft space—one better equipped to guide historical investigation, allow us to grapple with algorithmic governance, and even improve our understanding of forces underlying algorithmic advancement.

    How did you, our reader, come across these words? It is reasonable to predict that most of you will have come upon them thanks to the mediation of some algorithmic system, be it Google’s search page, a library catalog, JSTOR, or some similar tool. More than likely, you will have benefited from several such systems, operating in sequence and in sync, alongside the exercise of human judgment by not only yourself but also communities of editors, librarians, online book merchants, and algorithm designers. The subjective sense of an author addressing a reader in immediate, almost one-on-one terms exists nowadays only thanks to such multiple mediations. And what is true of this act of reading is true of a vast range of the experiences we undergo all the time. Indeed, experience itself could almost be said to have changed its meaning in our lifetimes, so thoroughly have algorithmic guidance, mediation, surveillance, collection, and analysis shaped our behaviors.

    That algorithms play an enormous role in shaping everyday life in the twenty-first century is hardly news. Ours is an age in which so-called artificial intelligence in the form of machine learning is ubiquitous, its influence so pervasive as to be hard to distinguish from our own decisions and desires. Algorithms, as New Scientist advised its readers in 2021, run your life.¹ We have become accustomed to engaging with such systems in numberless mundane ways, from selecting movies and music to being guided to our destinations when driving. By the same token, we are also used to encountering discussions, reports, and analyses that highlight the powers that have been ceded to algorithms in so many of the decisions essential to modern society. In the criminal justice system, facial recognition and sentencing recommendation systems have become notoriously common. Healthcare and medical research institutions use algorithmic tools to diagnose conditions, recommend therapies, and search for new treatments. And social welfare programs try to eke out their scarce resources by embracing algorithms the makers of which promise allocation decisions that are objective, neutral, and fair by virtue of being arrived at by encoded criteria applied to standardized information. Even the apparently benign engines that mediate between us and creative culture are silently selective, shaping as much as assisting as we try to learn.² Meanwhile, in the research world the adoption of algorithmic machine-reading systems has been declared nothing less than a new kind of science, in which the old emphases on theory and observation no longer apply. Machine readers can scan massively more information than human readers ever could, and do so across disciplinary boundaries that human experts almost never breach. Who needs theories, or even guiding reasons, when algorithms can search out correlations across vast tranches of data and even generate the hypotheses to guide (re)search?³

    All this is familiar, because algorithms have become a popular subject of both academic and journalistic attention for years.⁴ There is widespread awareness of how much our lives are structured by these technologies, along with equally widespread curiosity about the consequences. Unsurprisingly, much of the work published to this point, especially in the broader public media, has adopted either a boosterish tone or a monitory one, with the latter prevailing recently. It has also become strongly normative in character. It has focused on identifying what is new and powerful about the expanding worlds of machine learning and data science, and in recent years what is ominous about them. While there is no lack of arguments for nations to adopt policies aimed at bolstering the development and deployment of artificial intelligence (AI) systems in the name of economic growth, the more notable trend has been to call for measures that protect against those systems, or at least render them accountable. Both schools of thought are attended to as legal and policy frameworks governing key issues such as transparency, privacy, equity, and data preservation are subject to renegotiation in Washington, London, Strasbourg, and other capitals. It is important to note that a good deal of the commentary in books, newspapers, and other media has been impressively well informed on both technical and social levels. Indeed, for all that scholars are accustomed to lamenting the failings of popular journalism, it could be argued that this is an exemplary case demonstrating the critical importance of communication between expert communities, legal and political authorities, and the broader public when dealing with matters of common interest.

    Readers who have been paying attention will therefore know much about the social impacts of algorithmic culture. They will be well aware that algorithms used in the criminal justice system have entrenched biases against minorities, that disabled and elderly citizens have been the subject of apparently arbitrary withdrawals of vital benefits—some have reportedly died as a result—and that algorithms are major instruments of surveillance capitalism that allow oligarchic corporations (and their clients, which include states and police forces) to track and predict our every online action and many of our offline ones too.⁵ Some published treatments of these topics amount to jeremiads, but others have made more or less practical suggestions for actions to intervene in the apparently endless expansion of algorithms’ power over human subjects. They have called for legislative measures to underwrite the transparency and accountability of algorithmic decision-making, and for citizens to embrace practices of obfuscation designed to distract or confuse such systems, at least for a while.⁶ These revelations and calls for action are valuable, and they are likely to become more so in the future. They articulate genuine matters of concern, and they deploy historical and sociological evidence to do so. Yet the genre is not, in the end, a historical one. And historians of science who are interested in the culture of algorithms—their making, character, fortunes, and consequences—are likely to find it monotonic and essentialist, as if algorithmic culture had a singular character and moved in one direction to a foreordained end.

    The predominant representation in this literature, until the late 2010s at least, was that of a technical culture in the making. But today that culture has in fact been made. Ours is now an inescapably algorithmic world, in which the routine experience of algorithms affects everything we can discover, create, know, or forget, and the mistakes that we may make on the way. We need to grapple with this reality historically—as something that emerged through time, as a complex and multifaceted achievement with many paths not followed that, if observed, could become available again in future. The appropriate tools to tackle this job are those of the historian of science. At the same time, this reality inevitably impinges on our own historical sensibilities, willy-nilly: practices of research, collaboration, publishing, and teaching have all changed radically. So the task involves an essential recursion. What we think about entities broadly taken to be pre-algorithmic—in particular, the kind of human, culture-based reasoning often said to be imperiled by the rise of algorithms—will necessarily be affected, and increasingly so, by the algorithmic culture in which our knowledge of those entities is forged and communicated. That has profound philosophical and ethical implications inseparable from the historical epistemology we bring to bear. As Stephanie Dick points out in her contribution to this volume, humanness is relative. What we know about and how we represent human reasoning reflects what we know about and how we represent artificial reasoning. Progress will consequently depend on addressing several questions simultaneously. How do we understand algorithmic culture historically, using the tools of the historian of science? How does our algorithmic immersion affect those tools themselves? What is the historian’s role vis-à-vis matters of contemporary moral concern? How do historically alternative algorithms and human logics help us rethink the algorithmic present and future? In short, what does the history of science look like in an algorithmic age?

    In this volume of Osiris, we seek to articulate some of the questions algorithms pose for historians of science, and to chart a range of approaches—concepts, terms, and methods—they may use to explore the emerging field of algorithm studies. We also seek to identify some ways in which engagement with the culture of algorithms as a historical subject may provide insights useful to historians of science in general. Themes of labor, information, classification, communication, transparency, responsibility, and credit pervade scholarship about modern algorithms. Exotic as the subjects of machine reading and learning may seem at first, they are addressable by the techniques of the historian of science, and the practice of addressing them generates new perspectives on those themes in other contexts. It may be that the future of the history of science as a field, in fact, lies on the far side of a juncture of attending to algorithms.

    If there is a common strand to existing treatments of algorithmic culture—especially, but not only, those encountered by the general public—it is their framing of the subject in terms of a radical distinction between two kinds of reasoning. Their characteristic element is a contrast drawn between a non- or pre-algorithmic culture of human judgments—a realm of craft—and a culture of hard, inflexible, and often unaccountable logics—a realm of code. The problems of big data policing or algorithm-based social programs are posed in terms of rich, multivalent human cultures of decision-making, in which discretion and subjectivities have a place, being supplanted by mathematical systems that may be rigorous but are also opaque, inflexible, insensitive to peculiarities of context, and immune to appeal.⁸ This binary has powerful intuitive appeal, and it does capture something genuinely consequential about the processes at hand. It gives us purchase on injustices that exist and should be addressed. Yet we argue that it has outlasted its usefulness. It is becoming increasingly clear not only that such a hard-and-fast binary fails to accommodate important aspects of our contemporary world but also that it should itself be recognized as a product of a historical process as well as a way of presenting that process. In particular, it bears the imprint of a long tradition of interpretation in social history, labor history, and, especially, the history of science and technology.

    A principal reason why the craft/code distinction has been so prominent is that it reflects the social history of science’s development and purposes as a field defined in the postwar era. As that field emerged and took on institutional forms, especially but not uniquely in Anglo-America, so it laid claim to intellectual, moral, and political stances in terms of which this distinction not only made sense in its own right but was useful in organizing key topics and making its findings practically compelling. Acknowledging this leads to reappraisals of the history of algorithmic culture, then, but it also requires us to revisit leading traditions in the social history of science itself. It may be a step toward creating a new set of concepts and practices appropriate for a generation seeking to understand algorithmic civilization and its predecessors, to do so from within, and to recognize not the singularity but the diversity of algorithmic imaginaries underlying the present and future—just as we routinely do for human decisions. And it should be evident that this acknowledgment of diversity flies in the face of futurist declarations that computation is moving us inexorably closer to the moment of singularity when the universe awakens.

    Attending to algorithms forces historians of science to confront anew two issues that prior generations had to deal with in different contexts: those of the relationships between analysis and advocacy and between scholars and practitioners. A perennial issue for historians of science who have been concerned about environmental matters and issues of race, class, and gender, the question of advocacy is no less pressing here.⁹ As in those contexts, we can insist that anxieties about the overweening power of algorithms of the kind that have motivated so many published accounts are by no means incompatible with historical sensibilities. Given that values were embedded in the development of AI/machine learning systems from the outset, one important role for the historian is to render that fact visible and legible, producing a kind of revelation that has the potential to be socially ameliorative as well as epistemologically consequential.¹⁰ Historical research is always driven by some present purpose, after all, and this is certainly a good one. By the same token, several authors here—most obviously Ksenia Tatarchenko, but also Alma Steingart, Hallam Stevens, and others—point out that the builders of algorithms have repeatedly done historical work themselves, as they sought to construct futures by stipulating pasts. But that did not make them historians. The need to approach the past in good faith—to be receptive to lost voices, to hear unexpected things said by prominent ones, to appreciate that the problems of past actors were as complex and real as ours, and to see our issues as emergent from those past—is a prerequisite for the historian as it may not be for the machine learning designer. If the historian has something to contribute to current debates about policy, ethics, and technological design, it is this dedication to seeing situations in their temporal particularity—which means engaging with advocacy itself on equal terms. All sides, not just one, are surely historically emergent.

    For an unavoidably technical subject like today’s algorithms, this is often going to mean engaging with practitioners. Answers to society’s questions about algorithmic culture are likely to emerge from conversations between those who design machine learning systems and those who study, experience, and are subjected to them. A major point of this volume is to argue (as Frank Pasquale has in a recent survey) that we should view algorithms in terms of managed assemblages of human, institutional, and technical components.¹¹ Collaborating with technoscientific professionals can of course be laborious, time-consuming, and frustrating, as well as informative. It is also liable to make one vulnerable to insinuations of complicity. We should not be content to leave the field of contention to a nascent enterprise of algorithm ethics that promises to be as problematic as that of biomedical ethics. As history itself is now an algorithmic enterprise, moreover, we have no choice but to negotiate as equals or resign ourselves to the role of consumers. The only responsible course is to address this audience, as new ventures like MIT’s series of Case Studies in Social and Ethical Responsibilities of Computing are starting to do.¹² To paraphrase one of the earliest boosters of the digital age, Stewart Brand, we are bound to be algorithm-jockeys anyway, so we had better get good at it.¹³

    How to Read This Book

    The origins of this volume lie in a long-running series of projects at the University of Chicago aimed at investigating, first, the historical relationships between technologies and intellectual disciplines, and, second, the historical importance for such disciplines of diverse algorithms, models, and formalisms. Several of the chapters published here were originally presented as papers at a conference focused on the second of these projects. Taking as their starting point a shared sense that the traditional binary of craft and code needed to be supplemented by new approaches, they tackle a range of instances in the history of algorithmic systems to identify the advantages and drawbacks of that framework and propose future alternatives.

    This means that the concept of algorithm used here is a variegated one. In the existing literature there are several definitions of this term in circulation, some of which emphasize the centrality of explicit rules and others a broader notion of calculational practice. It is also evident that the meanings of the term have changed quite markedly since it was first adopted in English (as Michael Barany shows here) in the sixteenth century. One survey of early uses of the term indicates that it carried various connotations even in the early years, from the art of reckoning with Cyphers (1658) to a general reference to the six principal rules of computation with numbers: numeration, addition, subtraction, multiplication, division, and the extraction of roots (1702).¹⁴ The historian should obviously be attuned to the range of these meanings, then as now, and be prepared to accord the historical epistemology of such a term the respect it deserves. The contributors to this volume, accordingly, do not restrict themselves by declaring allegiance to any one definition.¹⁵ This allows them to range freely across what are often seen as fundamental chronological discontinuities, the most obvious of which is that demarcating our age of digital and networked information from previous eras. By not taking this discontinuity for granted, our authors pose questions of the nature and extent of the change involved, and suggest continuities with diverse early algorithmic approaches that might otherwise go unregarded. It is salutary to recognize the extent to which algorithmic procedures had achieved stable authority long before the advent of avowedly big data. Rule-based protocols applied to centralized information holdings were prominent in professions such as medicine decades before digital computing; indeed, in some contexts it may be that the increased attention paid to them in the 1980s–1990s actually eroded their credibility.¹⁶

    The volume comprises five sections, each of which tackles a key theme for understanding and transcending the craft/code dichotomy. These themes build sequentially into a coherent programmatic statement for the field, placing algorithmic studies in the context of historiographical traditions in the history of science and suggesting ways in which those traditions may develop in future years. The first section, Machinations, looks at transitions between human craft and mechanical code—at how agents in history have sought to understand the distinction, and how they have tried to manage the transposition of knowledge from humans to machines and vice versa. This provides an archaeology of the dichotomy and its applications. The second section, Making and Breaking Rules, then examines the most obvious defining feature of algorithms—their character as sequences of logical rules—and asks how actors have sought to reckon with that deterministic character in contexts when what is valued is in some sense unpredictable. What kinds of rule breaking have been accepted and embraced, and why? How does the claim that algorithms represent neutral, universal rationality get reconciled with real-world rival algorithms that emerge in competition with humans and each other? Section III, Reckoning with Reality, carries on that vein of questioning to ask how algorithms, having been designed and tested with artificial and simplified information, fare when let loose on the outside world, and in particular how their interactions are managed. It is the assemblages of human and machine resulting from such efforts that operate so influentially in our world. The fourth section, Unruly Assemblages, focuses explicitly on these assemblages, looking in detail at how algorithms and humans—both experts and laity—have responded and adapted to shifting circumstances. And section V, Culture Encoded, finally, takes a broader perspective to explore how all these factors—the managed transitions between craft and code, the delicate juxtaposing of objectivity and originality, the need to cope with unruly realities, and the designing of hybrid assemblages—were underwritten by, and in turn inspired, work on large-scale cultural and political concepts that became central to contemporary algorithmic culture.

    In order to create new approaches, it will be necessary to understand more accurately and appositely what came before. We need to grasp where our current frame of reference came from, why it is so powerful, what it draws our attention to, and from what it distracts us. The first section of this volume is devoted to this task. It attends to the history of the craft-versus-code dichotomy, exploring how and why that binary came to loom so large, first in histories of science and technology in general, and then in studies of algorithmic culture. James Evans, Tyler Reigeluth, and Adrian Johns begin the section by tracing the binary back to the Industrial Revolution and the representations of mechanical rationality published then by authors like Andrew Ure and Charles Babbage. From there they trace how skeptics of such industrial champions—Karl Marx and William Morris among them—sought to highlight the intellectuality of artisanal craft, setting in train a long-lasting contest over the nature and geographies of creative work. The distinction consequently came to play a significant part in the emergence of social histories of science and technology in the twentieth century, such that for many historians of science displaying the obscured labor upholding scientific knowledge became a defining purpose of the discipline itself. The point of the exercise, they saw, was to reveal the local, practical, situated, and material craft on which purportedly universal scientific logics rested. In the same generation, tellingly, boosters of early AI were arguing fiercely over their own attempts to characterize human and algorithmic reasoning practices. Those early AI pioneers were strongly concerned with what it was for humans or machines to learn, and it was often in that concept that code- and craft-based notions converged and clashed.

    The other contribution to section I, Michael Barany’s On Remediation, looks at the ineffable role of labor—of repair-craft—in sustaining even the most apparently abstract and universal forms of knowledge in mathematics. In this way, Barany extends a long tradition of work pointing to the ways in which algorithms capture only aspects of the world that have already been rendered algorithm-like—a tradition extending back from recent authors like Meredith Broussard and Erik Larson, via Harry Collins and Martin Kusch, to the early AI critiques of Hubert Dreyfus, and thence to the Romantic Marxists of earlier eras.¹⁷ But Barany’s approach is distinct and original. He ventures back in time to the adoption of terms such as algorithm and arithmetic in English in the sixteenth century, pointing out that remediation was intrinsic to the meanings of those terms throughout. Barany’s argument demonstrates that such work often involves managing transitions between media—from blackboards and notepads to overhead projectors and offprints—and that the universality of algorithms is not only manifest in but made possible by the sheer abundance of media across which they extend.

    Taking up these issues in section II, Stephanie Dick asks how a major figure in the history of mechanizing mathematical proofs tried to buy originality with plodding. Hao Wang was trained in the most rigorous of analytical philosophy traditions, yet came to feel frustrated by the lack of connection between formal mathematical theorization in the manner of Bertrand Russell and Alfred North Whitehead and the practices of actual mathematical researchers. In effect, he came to believe that the analytical philosophers—and then the AI pioneers who adopted their view of reasoning—wanted to do to human thinkers what Ure had wanted to do to human laborers. He collaborated with computer companies in a bid to build a different kind of algorithmic theorem proving that would proceed by what he called pattern recognition and in doing so give greater respect to the craftlike character of mathematical practice.¹⁸

    That character was also at issue in the legal conflict over origami designs waged by artists and mathematicians whom Clare Kim discusses in the chapter that follows Dick’s. Like several of our other authors, Kim is interested in how creativity and encodability might be reconciled. The example she uses is mathematical origami—a highly sophisticated technique for designing origami models using formal algorithmic procedures, which can generate new designs. The epistemological and cultural connotations of this were made explicit when the scientist who developed the method sued an artist who had appropriated his designs in paintings displayed in an exhibition. Here Kim extends her focus beyond the binary of code and craft to that of code and art. Where did artistic creativity reside in this complex field? Moreover, in another move that several of our authors reinforce, Kim remarks that such binaries are all too often construed by reference to notions of creativity, value, and aesthetic worth particular to Euro-American historical contexts.

    Alex Csiszar, the author of the third chapter in this section, extends that point by asking how science itself came to be seen as a global endeavor—and its progress measured quantitatively—by means of cataloging and indexing enterprises that emerged in the age of industrial empires and exploited the kinds of transitions between media that Barany highlights, from card index to journal to correspondence. Here, too, what champions lauded as an objective demonstration of the productivity of science turns out to depend on certain algorithmic protocols that could be, and were, contested by rival enterprises based on assumptions that obtained in non-Western settings.

    Our third section further pursues this question of how to reconcile formal rule-based algorithms with messy and often culturally diverse real-world settings. Honghong Tinn directly addresses one of the most powerful algorithms of the mid-twentieth century: Wassily Leontief’s input-output analysis. The technique, which involved producing equations expressing the economic inputs and outputs of a nation’s industries and solving them simultaneously to arrive at recommendations for policy, was widely adopted and imitated: versions of it reappear here in Theodora Dryer’s account of settler computing and Csiszar’s of efforts to measure the production of science across national economies. Leontief’s system lent itself to computers—he was apparently the first social scientist to make earnest use of the new machines—and he came to think of economies themselves as effectively giant calculating engines, constantly feeding themselves numerical problems and generating solutions. Tinn shows how, as the procedure came to be adopted by different countries, so controversy arose about the kinds of information it demanded. Was the kind of data furnished by undeveloped nations equivalent to that produced in the United States? Once again the need for an international, multicultural accounting is apparent. The very question of an algorithm’s universality came into play, and it did so in the context of universal data. Contrary to positivist representations, data did not come first with algorithms being devised to analyze them. Algorithmic culture in this case produced the practice of standardized data collection.

    Salem Elzway’s examination of robotics brings a different perspective on the question of how the interactions of algorithms and the world are managed. Elzway returns to the early era of AI in the 1960s and explores work on the robotic perception and manipulation of objects. Practitioners in this field, like those of AI in general at that time, made extravagant claims for the potential of their experiments, but the successes of their robotic arms relied on their being deployed in highly circumscribed artificial settings. Elzway looks at the laborious work done at MIT and the Stanford Artificial Intelligence Laboratory to enable robots to operate in microworlds. Strategies then had to be created to use them in efforts to mechanize and automate industrial manufacturing. The aim here was no longer to make artificially intelligent robots act like humans but to replace humans already disciplined into performing repetitive routines like robots. It was reminiscent of the efforts of Ure and Babbage, in that human artisanal autonomy—what was left of it—would be supplanted. In practice, of course, workers had to spend long hours remediating the machines, generating a system that functioned only as an assemblage of humans, computers, and automata.

    In the final chapter in this section, Xiaochang Li provides a fascinating account of efforts to use AI algorithms for speech recognition. In parallel to Elzway’s robotics experimenters, advocates of speech recognition made grand claims for the potential of knowledge-based algorithmic systems to apprehend human speech, only for their actual achievements to fall short to an increasingly scandalous extent. John Pierce famously denounced boosters of the field as acting like mad inventors whose crazy schemes no longer warranted trust. The upshot was a different kind of assemblage. Starting in the early 1970s, IBM—which, at a time when there was a risk of demand saturation in the marketplace, embraced problems that required substantial computing power—sponsored a venture that abandoned the old AI ideal of replicating human reasoning practices in code and instead returned to a long-discredited method of brute statistical force. The model of ignorance, as it was called, involved having the computer identify patterns in large amounts of information, without stipulating anything like grammatical or syntactical rules in advance. Here lies the origin of perhaps the most striking and consequential elements of contemporary machine learning systems—the fact that they are agnostic about the real ontological structures of the subjects they address. Contrary to popular representations, which imply that such systems involve representations of the real structures of the phenomena they tackle, machine-learning algorithms pointedly disregard such structures. To an extent, this is precisely why they can seem universal and objective. It is also why algorithms are often treated as portable from one domain to another—from bioinformatics and radiology to private equity and insurance. (As one pioneer put it coyly, the blackboards of Wall Street quants looked just like those of AI researchers.) And as in the case of Leontief’s procedure, this meant that the assumed nature of the algorithm compelled the accumulation of information—data—with which that algorithm could work. Human labor replaced by the computer would be largely reinstated at the level of processing and curating such data.

    If algorithms call forth the data on which they work, and if they need to be managed actively to sustain relations with the physical and social worlds in which they operate, then attention needs to be paid to the character of that work. Who does it, and when is it made visible? What values are brought to bear in appraising algorithms’ virtues and vices, by whom, and to what effect? These are some of the questions addressed in section IV. This section highlights the kinds of work that algorithms require if their credibility as uniform, neutral, and objective is to endure. Matthew Jones, first, takes on the exquisitely awful phenomenon of lay-generated Excel graphs, as an example of how inexpert users—businessmen, financial workers, and hapless academics—make unpredictable use of such systems. Jones traces the history of these graphing prosthetics back to the kind of cognitive science that emerged in the postwar era and that was central to early AI. Visualization was seen then as a key practice in the kind of exploration that reasoning humans were believed (or hoped, or—aspirationally—taught) to practice. Learning itself was seen as fundamentally exploratory, and the ability to interact dynamically with graphical representations of statistical data was held up as a commercially important example. No longer would the thinker be radically separated from the data, as had been the case in older batch-processing computing centers.

    In Alma Steingart’s tale of the quarrels over apportionment in the 1920s, the users were United States congressional representatives faced with mathematicians touting rival algorithms for allocating numbers of seats to states in upcoming elections. If current discussion centers on the importance of ensuring that algorithms be fair and transparent, the apportionment controversy carries a cautionary tale that fairness has no one definition and transparency is in the eye of the beholder. There is, in other words, a historical epistemology of fairness. Steingart’s story helps us reframe contemporary debates over algorithmic fairness as demographic parity versus predicted probability versus equally realized opportunity.¹⁹

    Mike Ananny makes a cognate point. By looking at three recent examples—a temporary drop in United Airlines’ share price after an internet report of an earlier bankruptcy resurfaced as if new, a problematic juxtaposition between Grindr and a sex offenders’ registry app, and a tricky problem facing academics charged with using online invigilation software during the COVID pandemic—he shows that errors made by algorithmic systems can be meaningful in their own right. They cast light not so much on the technical workings of the algorithms themselves—which in fact remained opaque in his examples—as on the assemblages in which they participate. It is often hard to stipulate exactly where an error has occurred, Ananny shows, or even that an error has in fact occurred at all. The work of remediation turns out to be revealing of the extent to which algorithms themselves are cultural objects. Disentangling them from their contexts is laborious and hard to sustain. In consequence, an error in an algorithm might well lead to recognition of problems not in the technical algorithm itself but in its mode of embedding. Errors might even be some of algorithms’ most useful functions.

    The final section looks at ways in which algorithmic thinking has tackled what might be characterized as big picture themes. What does it mean to live in an algorithmic age, and how do values like justice and equity fare in it? How do participants work to understand the development of an algorithmic age in terms of short- and long-term histories? Hallam Stevens’s contribution tackles such questions by reexamining one of the more puzzling founder figures of modern digital culture: Ted Nelson. Nelson’s vision for a hypertextual environment (which went by various names, the best known being Xanadu) has often been seen as one of the paths not taken of modern computing history. Stevens argues that Nelson’s work should be seen not as a failed venture in code but as a series of interjections about how human creativity and expression could thrive in an age when algorithms threatened to constrain imagination. Like the developers of lay graphing programs discussed by Jones, Nelson placed a high value on visualization as a form of cognition, the application in his case being to text creation and editing.²⁰ He insisted that for the sake of liberty, laypeople must be able to use computing devices as freely as possible, without being subjected to the kind of deadening control that a monolithic code environment threatened to introduce. In Stevens’s treatment, Nelson becomes a radical thinker about the nature of human cognition, and one worth reckoning with. He becomes a critic of communications revolutions comparable—and when it came to understanding computers, superior—to Marshall McLuhan.

    With Theodora Dryer’s article we squarely confront the question of how research on the history of algorithms engages with issues of social justice and equity. She explores the role of algorithmic reasoning in reshaping space and power in the American Southwest. Dryer shows that the US Bureau of Reclamation put the kind of analysis Leontief created to use in the Colorado River Basin to underwrite racial inequities through decisions about water resources. As she argues, a form of settler computing was experimented with extensively in this landscape. Through the appearance of neutral and objective calculation of resources and outputs, in an ongoing process of optimization planning, it systematically deprived Indigenous peoples of those resources. Once again, moreover, it was not that the data existed and the algorithm was designed to make use of it but rather that the algorithmic protocol stimulated the generation, collation, and preservation of standardized data to feed it. Research universities, it is important and sobering to note, were prime venues for this process.

    Dryer insists on the importance of a spatial dimension to this history. Accounts of algorithmic systems tend to decontextualize them in the specific sense that they lose their locations. Attending to issues of space has been a signature trend among historians of science for a generation now, and her demonstration that the same need exists for algorithm studies is a correspondingly important point for scholars in this field. With Ksenia Tatarchenko’s chapter we again find that it is vital to pay attention to the politics of space in order to understand a major moment in algorithmic history. Having started the volume with Barany’s account of Tudor mathematician Robert Recorde and the first introduction of the term algorithm into English, we end it with Tatarchenko’s analysis of an event held in the last years of the Soviet Union to commemorate the man for whom the term was originally coined: Abu ʿAbdallah Muhammad ibn Musa al-Khwarizmi. In a remarkable convergence of interests, Donald Knuth—the champion in America of seeing computer science as a branch of algorithmics—allied with counterparts in the Soviet system to mount a celebration of al-Khwarizmi in Uzbekistan. The event consolidated a Soviet account of the history and social purpose of mathematical practices, which had been in the making since the 1920s and for which the works of al-Khwarizmi were a key resource. By the latter days of the USSR, the history of mathematics—taught expressly as algorithmics—had been made a compulsory subject in Soviet universities. Tatarchenko’s account is a resounding demonstration of what can be achieved by seeing the culture of algorithms as historical through and through.

    The scale of the themes in this last section also characterizes the afterword John Tresch contributes to conclude the volume. In a speculative and whimsical vein, Tresch too argues that algorithms are culture, and that their apparent uniformity, neutrality, and objectivity are matters of management. One theme that has emerged from the volume is that the distinction between craft and code in algorithmic settings is often an index of the work of accommodation and correction that enables people and machines, and thinkers and doers, to operate in concert. With Tresch’s help we begin to see that kind of work going on all around us.

    Where Next?

    This collaboration suggests pointers to possible future initiatives. Key to such initiatives is the need to appreciate the craft/code binary itself as a historical achievement. While still accepting its utility, we may become more attuned than hitherto to the stakes behind its adoption and the varying meanings it has been accorded in different times and places. All of the chapters in this volume speak in their different ways to the value of this understanding. In this sense they are very much in line with work in the history of science done since 2000. Such work has repeatedly drawn pointed attention to the spatial and scalar aspects of distinctions cast by actors and observers between (intellectualized) management and directed labor. One thinks, for example, of Simon Schaffer’s elegant excavation of the place of assaying as a kind of intercultural communication venue in imperial contexts.²¹ What happens when the perennial effort to capture culture in some coded form transits the spatial bounds of jurisdiction and epistemic authority? We badly need more studies of algorithmic imaginaries as they move between cultures, along the lines explored by Kim, Dick, Tinn, and others here. (We also need more place-based ethnographic excavations of algorithmic work, for that matter, in the tradition of laboratory studies of the sciences.²²) In general, this kind of attention to the distinction between craft and code as a situated, contested construct opens questions that might otherwise not be visible.

    A second suggestion is that we come to focus more on the interstitial, intermediary, and framed character of algorithmic work.²³ That is, we may identify our subject less as algorithms per se and more as sets of linkages and assemblages between heterogeneous agents—humans, institutions, machines, coded protocols, and more—which generally need to be monitored and tweaked to maintain an impression of mastery over environments. And those environments too must be disciplined, and rendered into standard, utile data, in order to be amenable to the assemblages. Although the distinction between craft and code is too limiting if it is regarded as a single, essentialized key to all the algorithmic mythologies, looking for labor nevertheless likely remains a fruitful way to proceed in opening up these phenomena to historical analysis. Notions of information infrastructure have paved the way by highlighting the work of maintenance, but there is room for a more flexible and open-ended guiding concept. Barany’s idea of remediation, for example, encourages us to recognize the ceaseless integration of labor with formal logic structures. There is scope here for work to furnish insights into how diverse algorithmic and human cultures depend on and shape each other.

    A key site for this kind of initiative is the meeting of algorithm and world—a meeting that may be thought of as occurring at the moment of datafication. A positivist view of algorithms represents them as coded logics operating on preexisting information—information taken to be practically coextensive with empirical reality. In fact, as several writers here point out, algorithms have often needed to bring their data into existence, and they have struggled to gain traction when that need was not recognized and acted upon by other agents. This observation links our volume to the issue of Osiris titled Data Histories, which was edited by Elena Aronova, Christine von Oertzen, and David Sepkoski and published in 2017.²⁴ Certainly, algorithms imply data—and everything that goes along with data, including their collection, curation, standardization, preservation, circulation (or secrecy), ownership, and correction—while data imply algorithms. As remarked above, the agnostic character of contemporary machine learning systems is perhaps their most epistemically consequential aspect, and that character relies on all the subaltern practices of data processing and preprocessing. It means that one of the most interesting problems for historians of algorithmic systems inverts a key question that dogged historians of science in the post-Kuhn era. Where they were intrigued by incommensurability (were there any rational grounds for choosing between radically different paradigms?), we face a problem of commensurability: how can radically different domains of reality be amenable to common encoded epistemologies? Just as assertions of incommensurability can be approached as moves situated in discursive, practical, and agonistic contexts, so it will be equally fruitful to regard assertions of commensurability as situated, discursive, practical, and agonistic in their own right.²⁵

    As elements, agents, and subjects of history, algorithms are rapidly becoming normal subjects of investigation for historians of science. We simply cannot afford to see them as exotic and impenetrable, as singular and undifferentiated. As they have occupied center stage in our culture, so our historicizations of them take on a more critical edge, as we can insist that they are complex, potent, multivalent, and flawed entities. The emphasis now is on algorithms as culture (as Ananny puts it), with all the rich connotations that implies. They are varied, and they are created, operate, and change in conjunction with human actors and institutions. In that sense, recommendations for transparency and accountability, well meant as they are, may well come to seem merely first steps. We have the responsibility to ask what kinds of transparency—via what lenses—make a difference: to whom algorithms may be accountable, how, and by what criteria. Moreover, this is not least a responsibility that we owe to ourselves. As scholars and teachers, we rely heavily on algorithmic systems to research and communicate what we know. In calling for a historiography of algorithms that is sophisticated enough to cope with the constitution of data, the making of machine learning systems, and the deployment, remediation, and consequences of both in all their contexts—in short, an end-to-end history, to stand alongside the end-to-end sociology that Michael Castelle and Jonathan Roberge have called for—we are not only attending to urgent social questions. We are also acknowledging the inescapable and urgent need of our own profession if it is to flourish in an algorithmic future.²⁶

    ¹Leah Crane et al., The Essential Guide to the Algorithms That Run Your Life, New Scientist, June 16, 2021, 34–39.

    ²The essays collected in Catherine Besteman and Hugh Gusterson, eds., Life by Algorithms: How Roboprocesses Are Remaking Our World (Chicago: Univ. of Chicago Press, 2019) provide many cautionary tales. For the examples of surveillance and content moderation, see also Cyrus Farivar, Habeas Data: Privacy vs. the Rise of Surveillance Tech (Brooklyn: Melville House, 2018), and Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (New Haven, CT: Yale Univ. Press, 2018).

    ³See, notoriously, Chris Anderson, The End of Theory: The Data Deluge Makes the Scientific Method Obsolete, Wired, June 23, 2008, https://www.wired.com/2008/06/pb-theory/; and for representative criticism, Fulvio Mazzocchi, Could Big Data Be the End of Theory in Science? A Few Remarks on the Epistemology of Data-Driven Science, EMBO Reports 16 (2015): 1250–55.

    ⁴An exhaustive listing is impossible here, but see, for example, James Evans and Adrian Johns, The New Rules of Knowledge: An Introduction, Critical Inquiry 46, no. 4 (2020): 806–12, and the triptych of papers published there; see also Maarten Bullynck, Histories of Algorithms: Past, Present, and Future, Historia Mathematica 43, no. 3 (2015): 332–41, and Nick Seaver, Algorithms as Culture: Some Tactics for the Ethnography of Algorithmic Systems, Big Data & Society 4, no. 2 (2017): 1–12.

    ⁵E.g., Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (New York: Picador, 2019); Andrew Guthrie Ferguson, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement (New York: New York Univ. Press, 2017); Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (New York: Public Affairs, 2019), 376–415. For recent examples, see Erin McCormick, What Happened When a ‘Wildly Irrational’ Algorithm Made Crucial Healthcare Decisions, Guardian, July 2, 2021; Katherine B. Forrest, When Machines Can Be Judge, Jury, and Executioner: Justice in the Age of Artificial Intelligence (Singapore: World Scientific, 2021); Sarah Brayne, Predict and Surveil: Data, Discretion, and the Future of Policing (Oxford: Oxford Univ. Press, 2021), esp. 100–117.

    ⁶For a powerful critique of calls for transparency, see Mike Ananny and Kate Crawford, Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability, New Media and Society 20, no. 3 (2018): 973–89, esp. 982–84. For individual tactics, see Finn Brunton and Helen Nissenbaum, Obfuscation: A User’s Guide for Privacy and Protest (Cambridge: MIT Press, 2015).

    ⁷For recent explorations of these issues see Thomas S. Mullaney et al., eds., Your Computer Is on Fire (Cambridge: MIT Press, 2021).

    ⁸A well-known example is Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Crown, 2016); many others could be cited. A particularly clear evocation to have appeared recently is the portrayal of journalists’ notion of the ineluctably craftlike judgment of newsworthiness in Taina Bucher, If … Then: Algorithmic Power and Politics (Oxford: Oxford Univ. Press, 2018), 138.

    ⁹One historian to pose the issue explicitly recently is Naomi Oreskes, in various published articles, e.g., Why I Am a Presentist, Science in Context 26, no. 4 (2013): 595–609.

    ¹⁰E.g., Yarden Katz, Artificial Whiteness: Politics and Ideology in Artificial Intelligence (New York: Columbia Univ. Press, 2020); and Ruha Benjamin’s extended review of the literature, Race after Technology: Abolitionist Tools for the New Jim Code (Cambridge: Polity, 2019).

    ¹¹Frank Pasquale, New Laws of Robotics: Defending Human Expertise in the Age of AI (Cambridge, MA: Harvard Univ. Press, 2020).

    ¹²MIT’s series is accessible at https://mit-serc.pubpub.org/. For an example of the nascent field of algorithm ethics, see Michael Kearns and Aaron Roth, The Ethical Algorithm: The Science of Socially Aware Algorithm Design (Oxford: Oxford Univ. Press, 2020). See also comments in Michael Wooldridge, A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going (New York: Flatiron, 2020), 170–77; and in IEEE’s Journal of Social Computing two-volume special issue Technology Ethics in Action: Critical and Interdisciplinary Perspectives (IEEE & Tsinghua Univ. Press, 2021), vols. 2 and 3. The biases of AI systems are now being targeted as business opportunities for startup companies that see their remediation as a potentially profitable enterprise: Cade Metz, Fighting Bias Creep in A.I. Is a Magnet for Start-Ups, New York Times, July 1, 2021, B1, B5.

    ¹³Brand’s original statement, published in the Whole Earth Catalog, was that we are as gods, and we may as well get good at it, but he later corrected this in the context of climate change to warn that we had no choice but to do so.
