Traceable Human Experiment Design Research: Theoretical Model and Practical Guide
Ebook, 421 pages (4 hours)

About this ebook

The aim of this book is to describe the methodology for conducting THEDRE research: "Traceable Human Experiment Design Research". It applies to Research in Human-Centered Informatics (RICH), that is, areas of computer science research that integrate users in order to build scientific knowledge and the tools supporting this research. Relevant fields include, for example, Information Systems (IS), Human-Machine Interface (HMI) engineering and Human Information Systems (HIA). The construction of this language and method is based on experiments conducted since 2008 in the field of RICH.

Language: English
Publisher: Wiley
Release date: Feb 14, 2018
ISBN: 9781119510475


    Traceable Human Experiment Design Research - Nadine Mandran

    Preface

    This book proposes a research method named THEDRE, which stands for Traceable Human Experiment Design Research, the result of testing carried out in human-centered computer science. Since 2008, approximately 50 tests have been monitored, enabling us to understand the best ways to efficiently carry out testing within the scope of scientific research. In order to perform this methodological work correctly, we have referred to the work of epistemologists and quality managers and studied the research methods currently used in computer science.

    We begin with an introduction to the central problem that we shall share our thoughts on. This book is organized into six chapters. Chapter 1 defines the characteristics of human-centered computer science research (HCCSR). Chapter 2 presents the theoretical notions required to develop our research method (i.e. epistemological paradigms, quality processes, data production methods and user-centered approaches). Chapter 3 examines current research methods used in human-centered computer science research. Chapter 4 examines the THEDRE theoretical model. Chapter 5 focuses on the implementation of the THEDRE model as well as on the practical guidelines designed to coach researchers throughout the research process. Finally, Chapter 6 discusses the way in which THEDRE was built and evaluated on the basis of testing carried out from 2008 onward.

    The characteristics discussed in Chapter 1 are crucial to the understanding of the THEDRE model. Chapter 2 examines the theoretical foundations of the tools we use. Similarly, in Chapter 3, we discuss the current methods used in order to provide an overview of the state of affairs in this field. However, these two chapters are not essential to the understanding of the method explained in Chapters 4 and 5.

    In order to gain a swift understanding of the THEDRE method, we advise readers to begin with the Introduction and Chapter 1, before going directly to Chapters 4 and 5. Chapters 2 and 3 go into further detail (where necessary) on concepts used in THEDRE.

    Nadine MANDRAN

    December 2017

    Introduction

    Conducting research is a specialist profession because it requires precise knowledge of a field as well as skills in experimental methodology. On this thread, Claude Bernard (1813–1878) stated the following:

    A true scholar embraces both theory and experimental practice. They state a fact; an idea is formed in their mind relating to this fact; they reason on it, set up an experiment, and foresee and realize its material conditions. New phenomena result from this experiment, which must be observed, and so forth.

    Young researchers do not systematically possess experimentation skills, and they often find themselves in difficulty when experimental steps must be developed. Experimental processes are trickier to implement when it comes to studying humans, particularly when also considering the context within which they live. This investigation is all the more difficult because it requires mobilizing methods from the Humanities and Social Sciences (HSS).

    This problem was identified within research on technology, which requires involving users in order to develop and evaluate scientific knowledge. Users are defined as people who are mobilized by the researcher and upon whom the researcher may choose to build an activity model, for example. They are the end-users of applications produced by research work. This type of research is therefore faced with the integration of humans and their environments (family, professional, etc.). It is referred to as human-centered computer science research (HCCSR).

    We have worked with PhD students since 2008 in a bid to accompany them in the development of these multidisciplinary experimental protocols, giving them the tools they need to address the problem in their theses. We co-supervised work at the Laboratoire d’Informatique de Grenoble (LIG, Grenoble Informatics Laboratory), in other laboratories (Cristal Lille, LIP6 Paris, IFE Lyon, Saint Etienne Department of Geography at the Université Jean Monnet) and in two companies during the follow-up of CIFRE theses. We have followed up a total of 29 PhD studies and six research projects at the date of publication.

    Within the framework of these studies and projects, five specialist areas concerning HCCSR have been defined: (1) engineering of human–machine interfaces (HCI), (2) information systems (IS), (3) technology-enhanced learning (TEL), (4) multi-agent system engineering (MAS) and (5) geomatics (GEO). The research objectives of these specialist areas, as well as their applicable tools, differ. This being said, some common points have been identified: (1) the need to integrate the user and their context at certain points in the process in order to develop or evaluate the object, (2) the need to develop a tool so that user testing can be carried out and (3) the need to develop the above in an iterative manner in order to encourage the evolution of both tools and research.

    Over the course of this work, we also identified a lack of best practice concerning the traceability of the various steps researchers follow to formulate their research work. Traceability plays an important role as it guarantees a certain level of repeatability of results in the field of HCCSR. The notion of traceability of research corresponds to the capitalization of completed actions, of the data and documents produced, and of the results obtained. According to Collins, the verb to capitalize is defined as using something to gain some advantage for yourself. As such, capitalization does not simply involve archiving, but rather encompasses a set of functions such as storage, accessibility, availability, relevance and reuse, in order to produce benefits and new abilities. This definition forms the basis of the way capitalization is examined in this book.

    The challenge of integrating humans into the experimental process of HCCSR as well as into the traceability of this type of research may seem surprising because not only have HCCSR methods been formalized, but also an abundance of literature concerning data production methods is available [CRE 13] and engineering methods concerning computer science are taught [MAR 03]. However, work carried out by researchers showed that this specialist activity is difficult to acquire, especially for experiments that require a human component in order to develop and evaluate scientific knowledge with technical content.

    This book aims to provide a response to this problem via the THEDRE approach. It is intended to support PhD students and researchers in their research work by focusing on experimental aspects within a multidisciplinary context, and to provide them with the tools required to ensure their work is traceable. THEDRE also aims to develop knowledge of experimental HCCSR practices among young PhD students, in order to respond to emerging research and to link this work with quality management tools.

    THEDRE is a global approach that encourages the use of quality management tools, namely the Deming cycle and quality indicators for the research process and for data quality. THEDRE is formalized using a vocabulary designed to structure the research process. This vocabulary enables each researcher to refer to this process while also adapting it to their specialist field (e.g. HCI engineering, IS engineering and TEL). The research process developed by the researcher enables them to monitor research projects and to support PhD students by applying quality management principles to the process.
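    The Deming cycle mentioned above can be pictured as a Plan-Do-Check-Act loop driven by a quality indicator. The following Python sketch is purely illustrative: the phase functions, the "coverage" indicator and the stopping threshold are assumptions for the sake of the example, not part of THEDRE itself.

```python
# Illustrative PDCA (Deming cycle) loop driven by a quality indicator.
# The phase functions and the "coverage" indicator are hypothetical.

def pdca_iteration(plan, do, check, act, state):
    """Run one Plan-Do-Check-Act pass and return (new_state, indicator)."""
    state = plan(state)            # Plan: set an objective for this pass
    state = do(state)              # Do: carry out the planned actions
    indicator = check(state)       # Check: measure a quality indicator
    return act(state, indicator), indicator  # Act: adjust for the next pass

state = {"coverage": 0}            # e.g. % of research steps traced so far
for _ in range(10):
    state, score = pdca_iteration(
        plan=lambda s: {**s, "target": s["coverage"] + 30},
        do=lambda s: {**s, "coverage": min(100, s["target"])},
        check=lambda s: s["coverage"],
        act=lambda s, i: s,        # no correction needed in this toy example
        state=state,
    )
    if score >= 90:                # stop once the quality threshold is met
        break
print(score)  # 90
```

    The point of the sketch is the structure, not the numbers: each iteration of the research process plans an objective, acts on it, measures an indicator and adjusts before the next cycle.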

    Before describing our approach, we will first examine the elements produced by HCCSR as well as their characteristics in order to discover the way in which they are developed and evaluated. We have classified this research as science of the artificial [SIM 04].

    1

    Human-Centered Computer Science Research (HCCSR)

    1.1. Concepts and features of HCCSR

    Introducing the user is one of the principal characteristics of HCCSR. The second fundamental characteristic of this type of research is its dual purpose. On the one hand, it focuses on the production of scientific knowledge; on the other hand, it looks at tools to support human activity (e.g. language, dictionary, interface, model). These two focus points are completely intertwined and interdependent. As such, professional expertise can be modeled using a language (e.g. a UML extension), with the resulting model contributing to the design of a computer application. This language constitutes scientific knowledge. The computer application is a tool in the sense that the user is able to use it to perform an activity (such as developing a specific information system or providing a new interactive tool). This tool is dependent on the scientific knowledge produced, and users can use it to produce new scientific knowledge. For example, modeling a new aspect of a professional task, such as taking execution time into account, requires the modeling language itself to evolve.

    First, we will provide clarification for two terms in order to avoid confusion when reading this book.

    Methodology is a branch of logic used to study the approaches of the various sciences: a set of rules and processes that are used to conduct research [TLF 16]. Etymologically, methodology comes from Latin, borrowed from Ancient Greek: methodos, the pursuit or search of a pathway, and logos, which signifies knowledge or discipline. Methodology is therefore the study of methods, with the aim of creating new ones and helping them to evolve.

    A method is the result of work carried out in methodology. The definition that we follow is: a way of doing something following a certain habit, according to a certain design or with a certain application [TLF 16]. The term therefore requires clarification. In this book, we will discuss research methods, experimental methods, data production methods and data analysis methods.

    We will also define six terms in order to clarify the terminology used as part of HCCSR.

    Scientific knowledge in the context of HCCSR: this represents the production of research. It is developed on the basis of prior knowledge. The construction of new knowledge brings an added value to previous scientific knowledge. This added value will be evaluated during testing phases. This takes different forms within HCCSR, such as model, concept, language and application. For example, Ceret [CER 14] produced a new process model for HCI design. We will refine the definition of scientific knowledge within HCCSR by positioning it within an epistemological paradigm (section 4.2). The epistemological paradigm corresponds to the way in which scientific knowledge is built and evaluated, with or without taking into account humans and their context.

    Activatable tool: this represents scientific knowledge in a form that can be accessed by the user. The activatable tool is the medium between the user and scientific knowledge. If it is supported by a technique (such as an application), then it is dynamic. If the tool exists but is not supported by a technical device, then it is static. In practice, it may take the form of a dictionary of concepts designed to support the development of a conceptual model, whose terms and definitions are presented to users with the aim of enabling them to share their opinion on the proposed terms. It may also take the form of a paper model used to observe the primary reactions of a user, or of a computer application in beta that the researcher wishes to improve and/or for which the user reports the difficulties encountered during testing. For information systems (IS), it may take the form of symbols to represent concepts designed and validated by users. During testing, the activatable tool is built, improved and evaluated. In some cases, the activatable tool can be split into subparts referred to as activatable components.

    Activatable components are the various parts of the activatable tool: these parts form a whole, but they can be separated from each other, enabling them to be developed and evaluated with the user. The components themselves are activatable tools in the sense that they can be used by the user. Activatable components are built and evaluated independently from each other, both with and without users. The composition of the components forms a whole, this being the activatable tool. For example, a geomatics application [SAI 16] designed for SNCF officials responsible for railway maintenance is composed of terminology specific to this profession, data organization, features relevant to the officials, a key to the symbols used and a human–machine interface. The various activatable components forming the activatable tool, as well as their development progress, must be identified in order to build and evaluate them during the testing phase. This breakdown brings maximal precision to what needs to be developed or tested. It also enables experimental objectives to be accurately defined.
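    The decomposition of an activatable tool into components, each with its own development progress, can be sketched as a simple data structure. The following Python fragment is a hypothetical illustration: the class names, status labels and example components are assumptions loosely based on the railway-maintenance example above, not part of the THEDRE terminology.

```python
from dataclasses import dataclass, field

@dataclass
class ActivatableComponent:
    """One separable part of an activatable tool, with its own progress."""
    name: str
    status: str = "draft"  # hypothetical labels: draft / user-tested / validated

@dataclass
class ActivatableTool:
    """An activatable tool as a composition of activatable components."""
    name: str
    components: list = field(default_factory=list)

    def progress(self):
        """Map each component to its development status, so that what
        remains to be built or tested can be identified precisely."""
        return {c.name: c.status for c in self.components}

# Components loosely mirroring the railway-maintenance application above
tool = ActivatableTool("railway-maintenance-app", [
    ActivatableComponent("profession-specific terminology", "validated"),
    ActivatableComponent("symbol key", "user-tested"),
    ActivatableComponent("human-machine interface", "draft"),
])
print(tool.progress())
```

    Listing components with their status in this way is one possible reading of the "breakdown" described above: it makes explicit which parts still need to be developed or evaluated, and thus helps define experimental objectives.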

    Instrument, composed of scientific knowledge and the activatable tool: as a general rule, scientific knowledge and the activatable tool are intertwined and interdependent. Aboulafia [ABO 91] specifies the complementary relationship between artifact and theory, arguing that truth (justified theory) and utility (artifacts that are effective) are two sides of the same coin, and that scientific research should be evaluated in light of its practical implications.

    Testing: this is a research step for collecting and analyzing field data in order to develop and evaluate a research instrument. More specifically, testing enables the activatable tool to be developed and evaluated as the instantiation of scientific knowledge. This step can serve to mobilize users in their own context (on-site) in order to collect their representations of the known world. The user can also be studied outside this context (in a laboratory). Testing also enables technical features of the activatable tool to be tested without necessarily requiring input from the user (e.g. performance or speed testing). A number of tests are carried out within the framework of HCCSR in order to develop and evaluate the instrument. An experimenter is a person who manages in situ testing with the user. This field management is referred to as experiment management.

    To illustrate these terms, we will refer to the DOP8 model proposed by Mandran et al. [MAN 15] by using an example. The purpose of this model is to define concepts and their relationships to support developers in the development of data analysis platforms that combine three features: (1) data production (light gray part of the graph), (2) production of data analysis operators (dark gray part) and (3) data analysis (black part), i.e. the implementation of data operators to produce results that can be interpreted. The end-user of this type of platform is not an expert in data analysis. For example, in terms of data production, a teacher collects their pupils’ marks in Mathematics and French. In terms of operator production, a developer provides an operator that calculates the level of pupil success. For analysis, the teacher links the operator to the data produced. To do this, they need an environment in which they can link data to operators and produce results. The DOP8 model formalizes the following three concepts: instrument, scientific knowledge (see Figure 1.1, right) and activatable tool and components (see Figure 1.1, left).
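    The teacher example can be made concrete with a short sketch of the three DOP8 features. This Python fragment is illustrative only: the operator, the marks and the pass threshold are assumptions chosen for the example, not part of the DOP8 model itself.

```python
# (2) Operator production: a developer provides a data analysis operator.
def success_rate(marks, threshold=10):
    """Share of marks at or above the pass threshold (hypothetical operator)."""
    return sum(1 for m in marks if m >= threshold) / len(marks)

# (1) Data production: a teacher collects pupils' marks per subject.
data = {"mathematics": [8, 12, 15, 9], "french": [11, 14, 7, 16]}

# (3) Data analysis: the teacher links the operator to the produced data.
results = {subject: success_rate(marks) for subject, marks in data.items()}
print(results)  # {'mathematics': 0.5, 'french': 0.75}
```

    The environment the teacher needs is precisely one in which this linking of data to operators happens without the teacher writing the operator themselves.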

    Figure 1.1. Illustration of concepts applied to the DOP8 model: instrument, scientific knowledge and activatable tool and components

    To build the DOP8 model, data analysis experts were observed in order to build a model and a tool accessible to non-experts. They were observed while carrying out work in the field. Following this, an expert-tested activatable tool was built in beta. It was later improved and tested by non-experts. Today, this activatable tool takes the form of a website¹ composed of two activatable components: terminology and a set of functions (see Figure 1.1, right). The Undertracks [UND 14a, UND 14b] website is one of the possible instantiations of the DOP8 model. The research instrument contains the DOP8 model and its instantiation in the form of a website.

    As such, HCCSR is characterized by research whose goal is to produce an instrument that combines scientific knowledge and an activatable tool. In order to develop these tools, users and their contexts are integrated into the research process. The activatable tool acts as the medium between the user and scientific knowledge. In order to engage users in the research process, testing is carried out with the aim of producing data. Analysis of the latter facilitates the development of both scientific knowledge and the activatable tool. HCCSR is therefore research in which the instrument is composed of scientific knowledge and an activatable tool (link symbol) (see Figure 1.2). The researcher calls upon users during iterative testing (see the cycle symbol in the diagram) in order to build and evaluate scientific knowledge and the activatable tool. The activatable tool is created by the researcher using human observation; in return, this activatable tool facilitates a better understanding of humans and scientific knowledge in HCCSR. This duality is characteristic of the science of the artificial, which will be addressed in the next section.

    Figure 1.2. Features of the HCCSR composed of scientific knowledge, linked to an activatable tool (link symbol) and built using successive iterations (loop symbol)

    1.2. HCCSR: science of the artificial

    With respect to computer science, J.L. Le Moigne notes that, to be understood, systems must first be built and their behaviors then observed. The development of artificial objects is required for research development. He adds that theoretical analysis must be accompanied by a great deal of experimental work. As such, artificial objects must be developed along with the user and their context; following this, they should be tested with this same user.

    To clarify the specifics of objects designed by the sciences of the artificial, Simon [SIM 04] uses the example of a clock. It was designed with the intention of giving the time; it can be described using physical elements (e.g. cog wheels), properties (e.g. forces generated by springs) and an operating environment (e.g. the division of hours, the place of use). As such, the design of an artificial object involves multiple elements: intention, the characteristics of the artificial object (i.e. properties and physical elements) and the environment in which it is implemented. The artificial object² can be perceived as an interface between an ‘internal’ environment, the substance and organization of the artifact itself, and an ‘external’ environment, the surroundings in which it is implemented.

    Addressing artificial objects, Simon [SIM 04, p. 31] outlines the frontiers of the sciences of the artificial:

    Proposal 1: Artificial objects are synthesized by humans, although not always within the scope of a clear or forward-facing vision.

    Proposal 2: Artificial objects can mimic the appearance of natural objects, although they lack the reality of the natural object in one or more aspects.

    Proposal 3: Artificial objects can be characterized in terms of functions, goals and adaptation.

    Proposal 4: Artificial objects are considered in both imperative and descriptive terms, particularly during their design.

    From our point of view, an artificial object proposed by HCCSR meets these characteristics for the following reasons:

    – the final version of the object is not always known at the start of the development process, and the various steps constantly change its condition, causing it to develop in line with user needs and contexts; this is because a true forward-facing vision does not exist (Proposal 1);

    – because the vision is not necessarily clear, building the object requires several consultations with users during the building, development and evaluation of the object. A number of iterative testing phases are involved (Proposal 1);

    – it is built to meet an intention (Proposal 3) (e.g. teaching surgery using a simulator);

    – in order to be operational, this object attempts to meet the needs of users in a given context (Proposal 4) (e.g. a simulator used for surgery will be useful for teaching interns);

    – this object resembles a natural object in the sense that it will replace certain human-activated tasks (Proposal 2) (e.g. using a haptic arm with force feedback in order to carry out the operation using a simulator).

    As such, from the four proposals made by Simon, HCCSR can be defined as a science of the artificial: scientific knowledge is built by referring to user behaviors and practices in order to design objects that address given purposes. These objects can be used in a given context. The use of these objects contributes to refining the understanding of behaviors and to the development/improvement of practices. In turn, these developments enable progress in scientific knowledge. As a result, this is an iterative process.

    In conclusion, HCCSR is a science of the artificial that produces scientific knowledge in conjunction with an activatable tool. These productions are constructed iteratively along with users. The activatable tool acts as the medium between the user and scientific knowledge. It is within the context of the sciences of the artificial that our research method is anchored.

    The next section examines the difficulties related to the evaluation of scientific knowledge in HCCSR.

    1.3. Difficulties in building and evaluating HCCSR instruments

    Examining research methodologies for building and evaluating HCCSR instruments is complex for various reasons. This work must include both a multidisciplinary dimension and a transverse dimension. Such works are multidisciplinary in the sense that they are concerned with problems linked to computer science that require the use of Humanities and Social Sciences (HSS) approaches. They are transverse because the problem is present in various specialist fields of computer science research; this has enabled us to observe the problem within the five specialist areas mentioned previously: HCI, TEL, IS, MAS and GEO. Multidisciplinarity and transversality are the primary sources of complexity in this problem.

    During the design of this experimental work, human-centered computer science researchers face the following challenges:

    – The complexity of the field to be investigated, humans in ecological situations: research conducted with the aim of building the instrument sits within a global context. On the one hand, the testing strategy is
