
Academic Crowdsourcing in the Humanities: Crowds, Communities and Co-production
Ebook · 339 pages · 7 hours


About this ebook

Academic Crowdsourcing in the Humanities lays the foundations for a theoretical framework to understand the value of crowdsourcing, an avenue that is increasingly becoming important to academia as the web transforms collaboration and communication and blurs institutional and professional boundaries. Crowdsourcing projects in the humanities have, for the most part, focused on the generation or enhancement of content in a variety of ways, leveraging the rich resources of knowledge, creativity, effort and interest among the public to contribute to academic discourse. This book explores methodologies, tactics and the "citizen science" involved.

  • Addresses crowdsourcing for the humanities and cultural material
  • Provides a systematic, academic analysis of crowdsourcing concepts and methodologies
  • Situates crowdsourcing conceptually within the context of related concepts, such as ‘citizen science’, ‘wisdom of crowds’, and ‘public engagement’
Language: English
Release date: Nov 15, 2017
ISBN: 9780081010457



    Academic Crowdsourcing in the Humanities

    Crowds, Communities and Co-production

    Mark Hedges

    Stuart Dunn

    Table of Contents

    Cover image

    Title page

    Series Page

    Copyright

    About the Authors

    Preface

    Acknowledgements

    Chapter 1. Introduction: Academic crowdsourcing from the periphery to the centre

    Introduction

    Crowdsourcing, citizen science and engagement

    Crowd connectivity: the rise of social media

    Methodology

    Chapter 2. From citizen science to community co-production

    The business of crowdsourcing

    Crowdsourcing in the academy

    Crowdsourcing and social engagement

    Communities of crowdsourcing: self-organization and co-production

    Terminologies and typologies for humanities crowdsourcing

    Chapter 3. Processes and products: A typology of crowdsourcing

    Humanities crowdsourcing: a typology

    Process types

    Beyond transcription: correcting and modifying content

    Asset types

    Task types

    Output types

    Conclusion

    Chapter 4. Crowdsourcing applied: Case studies

    Geospatial information

    Text

    Image

    Conclusion

    Chapter 5. Roles and communities

    Introduction and key questions

    Solitary roles versus collaborative roles

    Networks of roles

    Collaborative roles

    Roles and empowerment

    Roles and conflict

    Conclusion

    Chapter 6. Motivations and benefits

    Motivations, intrinsic and extrinsic

    From commercial to academic crowdsourcing

    The role of competition

    Learning and ‘upskilling’

    Gamification

    Community and social motivations

    Evolving motivations

    Motivations of academics and other project organizers

    Conclusion

    Chapter 7. Ethical issues in humanities crowdsourcing

    What do we mean by ethics in humanities crowdsourcing?

    Ethics and the crowdsourcing industry

    Labour and exploitation in humanities crowdsourcing

    Whose data is it anyway?

    Pastoral concerns and participant well-being

    Crowdsourcing as participatory research

    Community-based participatory research

    Conclusion

    Chapter 8. Crowdsourcing and memory

    Introduction

    Internet memory

    Collective memory

    Individual memory

    Memory and structure

    Generic crowd memory: shared methodological narratives

    Conclusion

    Chapter 9. Crowds past, present and future

    Three phases of crowdsourcing

    Some futures of crowdsourcing

    Conclusions

    Bibliography

    Index

    Series Page

    Chandos Information Professional Series

    Series Editor: Ruth Rikowski

    (email: Rikowskigr@aol.com)

    Chandos’ new series of books is aimed at the busy information professional. They have been specially commissioned to provide the reader with an authoritative view of current thinking. They are designed to provide easy-to-read and (most importantly) practical coverage of topics that are of interest to librarians and other information professionals. If you would like a full listing of current and forthcoming titles, please visit www.chandospublishing.com.

    New authors: we are always pleased to receive ideas for new titles; if you would like to write a book for Chandos, please contact Dr Glyn Jones on g.jones.2@elsevier.com or telephone +44 (0) 1865 843000.

    Copyright

    Chandos Publishing is an imprint of Elsevier

    50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States

    The Boulevard, Langford Lane, Kidlington, OX5 1GB, United Kingdom

    Copyright © 2018 Mark Hedges and Stuart Dunn. Published by Elsevier Ltd. All Rights Reserved.

    No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

    This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

    Notices

    Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary.

    Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

    To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

    Library of Congress Cataloging-in-Publication Data

    A catalog record for this book is available from the Library of Congress

    British Library Cataloguing-in-Publication Data

    A catalogue record for this book is available from the British Library

    ISBN: 978-0-08-100941-3

    For information on all Chandos Publishing publications visit our website at https://www.elsevier.com/books-and-journals

    Publisher: Glyn Jones

    Acquisition Editor: Glyn Jones

    Editorial Project Manager: Charlotte Kent

    Production Project Manager: Joy Christel Neumarin Honest Thangiah

    Designer: Victoria Pearson

    Typeset by TNQ Books and Journals

    About the Authors

    Mark Hedges is a Senior Lecturer in the Department of Digital Humanities at King’s College London. His original academic background was in mathematics and philosophy, and he gained a PhD in mathematics at University College London, before starting a 17-year career in the software and systems consultancy industry, working on large-scale development projects for industrial and commercial clients. After a brief career break, he began his career at King’s at the Arts and Humanities Data Service, before moving to his current position, in which he has taught on a variety of modules in the MA in Digital Asset and Media Management and MA in Digital Curation. His research interests include digital curation and digital archives, their relationships with broader research environments and infrastructures, and digital and computational methods in the humanities. In particular, since 2012 he has been carrying out research on crowdsourcing and participatory methods in the humanities and their broader social and cultural impact.

    Stuart Dunn is Senior Lecturer in Digital Humanities at King’s College London. He gained his PhD in Aegean Bronze Age Archaeology from the University of Durham in 2002, during which he conducted fieldwork in Melos, Crete and Santorini. During his PhD and subsequently, he developed strong interests in digital research methods for mapping and spatial analysis. He worked as Research Assistant on the AHRC’s ICT in Arts and Humanities Research Programme from 2003 until 2006, where he supported the design and implementation of key research programmes. In 2006, he became a Research Associate at the Arts and Humanities e-Science Support Centre at King’s, and then a Research Fellow in the Centre for e-Research. Since 2011, he has taught in the fields of cultural heritage, digital history and, most recently, geographical information systems. In this period he has researched and published extensively on academic crowdsourcing as a method, especially where it touches on the field of volunteered geographic information. Dunn is a Fellow of the Higher Education Academy.

    Preface

    Crowdsourcing has become prominent in various branches of academia in the last 10 to 15 years, but it is not a new idea. The neologism itself dates from the mid-2000s, when the World Wide Web began to support processes of collaborative design and production across continents. However, academics, curators and scientists have been engaging with the ‘gifted amateur’ since at least the foundation of the Oxford English Dictionary in the 1880s, when educated members of the public supplied Oxford’s lexicographers with spellings, etymologies and definitions on a purely voluntary basis. The traditions of ‘citizen science’, in which those with the time, freedom and inclination to do so record structured information about the natural world – counts of bird, plant or animal populations, for example – and pool it as part of a centrally organized process, are at least as old. And for at least as long, researchers have debated the utility of professionals – whatever that label means – engaging ‘the crowd’ in such structured activities, and the ability of a body of untrained and uninitiated people to make decisions. In 1841, Charles Mackay’s Extraordinary Popular Delusions and the Madness of Crowds surveyed the ‘National Delusions’, ‘Peculiar Follies’ and ‘Philosophical Delusions’ to which collective thinking was subject. In 1895, in his book The Crowd: A Study of the Popular Mind, Gustave Le Bon noted the ‘impulsiveness, irritability, incapacity to reason, the absence of judgement and of the critical spirit, the exaggeration of sentiments’ evident in crowds. Such analyses implicitly reinforced the reassuring and reaffirming boundaries of professional academia, and the professional practice of key memory organizations such as museums and archives.

    By the late 2000s, it was clear that the Internet had driven citizen science into a new phase, and that the critiques of Le Bon and Mackay required revisiting. Major projects, most notably the Galaxy Zoo initiative, demonstrated the power of devolved and distributed participation by untrained amateurs in bringing forward scientific discoveries that would not otherwise have been possible. Whereas the natural-world observations of the citizen scientists of earlier periods had grown and augmented datasets over time, the new wave of amateur astronomers (in the case of Galaxy Zoo) were processing and classifying existing image data in ways a computer could not, and thus exposing it to new forms of analysis. A few made new discoveries in their own right, by identifying features within the images.

    Researchers in the arts and humanities observed these changes. In a series of lateral connections, it was realized that there were also tasks in the acquisition and processing of digital humanities data that were beyond the ability of automated computer processes. The early 2010s saw researchers in humanities domains begin to import the methods and techniques of WWW-enabled citizen science for such tasks as – for example – the transcription of handwriting, exemplified by the Transcribe Bentham project. It was against this background that the authors of this book were awarded a grant by the UK Arts and Humanities Research Council (AHRC) to conduct a scoping study of crowdsourcing in the fields of arts and humanities. We sought to examine critically what methods, technologies and components of infrastructure researchers in the arts and humanities were co-opting from the ‘citizen sciences’, what the best practices were, what could be learned, and what participants in such emergent arts and humanities activities made of this new landscape. The result was the Crowdsourcing Scoping Study of 2012. At the heart of this was a typology of arts and humanities crowdsourcing methods, which sought to articulate the kinds of intellectual assets, or primary resources, that scholars in these domains were working with; the kinds of tasks they were undertaking in the course of participation; the processes these led to; and the outputs that emerged at the end. Much of what follows draws on and revisits this work, and seeks to put it in the context of what has happened in the wider world of citizen participation in the academic humanities since then. The most important developments include the emergence of the mobile web, and the ubiquity, in Western societies at least, of social media.

    We also expand on the 2012 report by exploring areas that were outside the remit of that project. Most notably, we expand on the origins of crowdsourcing as a WWW-based business model, and consider the implications of this for the production of academic knowledge, rather than outputs of monetary value. We look at the ethics of crowdsourcing, in particular in the light of concerns that have been raised that crowdsourcing is exploitative. Returning to the development of the WWW-driven participatory research that has emerged since 2012, we provide a set of thematic case studies that traces areas where changing technologies have had the greatest impact, and reflects on the changes that those technologies have brought about in the roles of individual participants/contributors. We also offer some reflections on the kinds of memory, both collective and individual, that academic crowdsourcing engenders.

    Our conclusion builds on the 2012 report by suggesting that three ‘waves’ can be perceived in academic crowdsourcing in the humanities. The first, emerging in the mid-2000s, is functional crowdsourcing, which mirrored the pre-WWW practices of citizen science where information resources were created or enhanced. The second, in the late 2000s and early 2010s, mirrored the conversational paradigms of Web 2.0; in this phase, contributors begin to communicate about their participation on web platforms, whether project-specific or using social media, leading to the development of various forms of community. In the third phase, which we term co-production, contributors begin to take a more proactive role in the design and construction of research outputs.

    Stuart Dunn and Mark Hedges

    September 2017

    Acknowledgements

    This book is the result of a programme of work that the authors have been undertaking since 2012, and which still continues. We would like to acknowledge the support of the Arts and Humanities Research Council in the UK, which funded the initial research as part of a Crowdsourcing Scoping Study project. More recently, we carried out a series of expert interviews as part of a Foresight Study supported by the PARTHENOS project (http://www.parthenos-project.eu/), which was funded by the European Commission as part of the Horizon 2020 programme (Project No. 654119). We are grateful to all those who have shared their knowledge and experience with us during this research, whether in interviews, workshops or informal conversation.

    Chapter 1

    Introduction

    Academic crowdsourcing from the periphery to the centre

    Abstract

    This chapter gives a brief introduction to the volume, and an historical overview of academic crowdsourcing. Beginning with the definition of crowdsourcing that we first proposed in 2012, we outline the origins of academic crowdsourcing and consider the arguments as to whether it can be said to have evolved from a means of producing digital resources into a research methodology. We discuss the basis of the research drawn upon in the following chapters, the material gathered and the methodologies used to interrogate it.

    Keywords

    Academia; Connectivity; Engagement; Methodology; Social media

    Introduction

    Crowdsourcing is the process of leveraging public participation in or contributions to projects and activities. It has become a familiar term, and a concept that has gained increasing attention in many spheres over the last decade. Government, industry, and commercial enterprises are developing crowdsourcing practices as a means to engage their audiences and readerships, to improve and enrich their own data assets and services, and to address supposed disconnects between the public and professional sectors (Boudreau & Lakhani, 2013). At a time when the Web is simultaneously transforming the way in which people collaborate and communicate, and merging the spaces that the academic and nonacademic communities inhabit, it has never been more important to consider the role that public communities – connected or otherwise – have come to play in academic humanities research. Public involvement in the humanities can take many forms – transcribing handwritten text into digital form; tagging photographs to facilitate discovery and preservation; entering structured or semi-structured data; commenting on content or participating in discussions; or recording one’s own experiences and memories in the form of oral history – and the relationship between the public and the humanities is convoluted and poorly understood.

    This book explores this diverse field, and focuses on crowdsourcing as a research method. We consider where, in purely semantic terms, the boundaries of what is considered to be academic crowdsourcing should lie. Since humanities crowdsourcing is at an emergent stage as a research method, there is a correspondingly emergent field of academic literature dealing with its application and outcomes, which allows some assessments to be made about its potential to produce academically credible knowledge. The problematization of method, academic credibility, value and knowledge outputs is familiar from the history of the Digital Humanities. In 2002, Short and McCarty proposed a ‘methodological commons’, common ways of doing things that linked subject areas, digital research methods and domains: ‘computational techniques shared among the disciplines of the humanities and closely related social sciences, e.g., database design, text analysis, numerical analysis, imaging, music information retrieval, communications’ (McCarty, 2003). In McCarty’s terms, this commons formed a combination of ‘collegial service’ and ‘research enterprise’ which both made provision for existing research activities, and expanded them. The principal purpose of this book is to develop a similar ‘methodological commons’ for academic crowdsourcing. We contend that just as (say) the application of text processing technologies in history enhances the study of history and provokes new questions about the past, and can inform the development of processing technologies for (again, say) music; so can methods of leveraging public participation in museums inform and relate to participation elsewhere in the humanities. What is needed is a demarcation of the kinds of material involved, the ‘assets’, the types of task available to the public, the processes that undertaking those tasks involve, and the outputs.
In other words, we seek to apply to crowdsourcing in academia the kind of formal structure of value and review that crowdsourcing has implicitly acquired in many other domains.
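The four dimensions named above – assets, tasks, processes and outputs – can be sketched as a simple data model. The sketch below is purely illustrative: the field names and the example category values (such as "manuscript text" or "transcribing") are our hypothetical readings of the kinds of categories the typology demarcates, not the authors' definitive lists, which are set out in Chapter 3.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CrowdsourcingActivity:
    """One crowdsourced activity, described along the four typology dimensions."""
    asset: str    # the primary resource worked on, e.g. "manuscript text"
    task: str     # what participants actually do, e.g. "transcribing"
    process: str  # the larger process those tasks feed, e.g. "transcription"
    output: str   # what emerges at the end, e.g. "machine-readable text corpus"

def describe(activity: CrowdsourcingActivity) -> str:
    """Render a classification as a one-sentence summary."""
    return (f"Participants work on {activity.asset} by {activity.task}, "
            f"feeding a {activity.process} process that yields "
            f"a {activity.output}.")

# Hypothetical classification of a Transcribe Bentham-style project
bentham = CrowdsourcingActivity(
    asset="manuscript text",
    task="transcribing",
    process="transcription",
    output="machine-readable text corpus",
)

print(describe(bentham))
```

Separating the four dimensions in this way makes the point of the typology concrete: two projects may share an asset type (say, images) while differing entirely in task, process and output, and the classification captures that.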

    Academia, however, has always been something of a special case. It is worth spending some time reflecting on why this is so. Long before crowdsourcing was ever known by that name, researchers in especially the natural sciences were engaging in ‘citizen science’, a set of practices in which unpaid volunteers provided input to professionally coordinated research projects. This has been going on in domains such as field ecology, conservation and habitat studies since at least the 17th century, when in any case the role of professional scientist did not exist, at least in its 21st century form (Miller-Rushing, Primack, & Bonney, 2012). Networks, collaborations and codependencies developed within and across professional boundaries, leading to the production of original knowledge that passed all the thresholds of academic peer review and credibility.

    The most significant changes to these networks and collaborations can be traced to the mid- and late 2000s. The Galaxy Zoo project, for example, one of the largest and most successful citizen science projects, and one to which we return later, was launched on July 11, 2007, with the Zooniverse suite of collaborations coming two years later. Shortly before this, in 2006, Jeff Howe coined the term ‘crowdsourcing’ in a now-famous article in Wired. In this, he stated:

    ‘All these companies grew up in the Internet age and were designed to take advantage of the networked world. It doesn’t matter where the laborers are—they might be down the block, they might be in Indonesia—as long as they are connected to the network … technological advances in everything from product design software to digital video cameras are breaking down the cost barriers that once separated amateurs from professionals. … The labor isn’t always free, but it costs a lot less than paying traditional employees. It’s not outsourcing; it’s crowdsourcing.’

    Howe (2006)

    This definition and its timing are critical to the thesis of this book. The year 2006 was a period when the World Wide Web was becoming ubiquitous and hypertext was established as its main medium, and it was the time when social media and the interactive Web started to emerge. Twitter was launched in 2006, the same year in which Facebook, founded in 2004, opened to the general public. The emergence of increasingly fluid digital networks of communication spawned crowdsourcing as both a term and a concept, and brought a range of challenges and opportunities to an academic environment already familiar with the traditions of citizen science. The Galaxy Zoo project was an early adopter, using the affordances of the Internet to engage the public in the task of classifying images of galaxies from the Sloan Digital Sky Survey – a job that is straightforward for the human eye, but impossible for even the most sophisticated automated image processing – with now-legendary success (Bamford et al., 2008). Early crowdsourcing projects in the humanities (such as Transcribe Bentham – see Chapter 3) engaged with the concept of crowdsourcing to operationalize in a similar manner tasks of a larger size and scale than was previously possible using unpaid labour, such as mass transcription tasks. Between the mid-2000s and the present day, this paradigm of academic crowdsourcing underwent a shift in perception. It is now acknowledged that it is not a ‘cheap’ alternative to paid-for labour, as suggested by the
