
  Qualitative Research in Technology-Enhanced Learning


Authors:
Maria Victoria Soule, Cyprus University of Technology
Anna Nicolaou, Cyprus University of Technology
Antigoni Parmaxi, Cyprus University of Technology
Laia Albo, Universitat Pompeu Fabra

Abstract: This chapter introduces the reader to Qualitative Research and Computer Assisted Qualitative Data Analysis (CAQDAS) through the use of NVivo, a software program for qualitative and mixed-methods research. The chapter consists of two main parts. Part one provides a general theoretical introduction to Qualitative Research, including a brief overview of Qualitative Research theory and key considerations in Qualitative Data Analysis (QDA). Part two focuses on how CAQDAS can support QDA. Here the reader is also guided on how to work with NVivo: entering and coding raw data, organising codes and cases, writing memos and exploring data.

Qualitative Research

“Qualitative research is an umbrella term for a wide variety of approaches to and methods for the study of natural social life. The information or data collected and analysed is primarily (but not exclusively) nonquantitative in character, consisting of textual materials such as interview transcripts, fieldnotes, and documents, and/or visual materials such as artifacts, photographs, video recordings, and Internet sites, that document human experiences about others and/or one’s self in social action and reflective states” (missing reference)

Analytic Approaches

According to Creswell (2007), researchers undertaking qualitative studies face a baffling number of choices of approaches:

Table 1. Qualitative approaches mentioned by authors (adapted from Creswell, 2007)

Each author is listed with their discipline/field, followed by the qualitative approaches they mention:

  • Lancy (1993), Education: Anthropological Perspectives; Sociological Perspectives; Biological Perspectives; Case Studies; Personal Accounts; Cognitive Studies; Historical Inquiries
  • Denzin & Lincoln (2005), Social Sciences: Performance, Critical, and Public Ethnography; Interpretive Practices; Case Studies; Grounded Theory; Life History; Narrative Authority; Participatory Action Research; Clinical Research
  • Corbin & Strauss (2018), Sociology: Grounded Theory; Ethnography; Phenomenology; Life Histories; Conversational Analysis

An overview of five approaches

Saldaña (missing reference) acknowledges the diversity of classifications and typologies in qualitative research and provides an overview of the five most common approaches in which researchers across multiple disciplines work.

Ethnography

Ethnography is the observation and documentation of social life in order to render an account of a group’s culture. Ethnography refers to both the process of long-term fieldwork and the final (most often) written product. Originally the method of anthropologists studying foreign peoples, ethnography is now multidisciplinary in its applications to explore cultures in classrooms, urban streets, organizations, and even cyberspace.

Grounded Theory

Grounded Theory is a methodology for meticulously analysing qualitative data in order to understand human processes and to construct theory, that is, theory grounded in the data or constructed from the ground up. The originators of the methodology were Anselm Strauss and Barney Glaser, sociologists who in the 1960s studied illness and dying. Grounded theory is an analytic process of constantly comparing small data units, primarily but not exclusively collected from interviews, through a series of cumulative coding cycles to achieve abstraction and a range of dimensions to the emergent categories’ properties. Classic grounded theory works toward achieving a core or central category that conceptually represents what a study is all about. This core or central category becomes the foundation for generating a theory about the process observed.

Phenomenology

Phenomenology is the study of the nature and meaning of things. The roots of this approach lie in philosophy’s early hermeneutic analysis, or the interpretation of texts for core meanings. Today, phenomenology is most often a research approach that focuses on concepts, events, or the lived experiences of humans. Some qualitative research studies take a phenomenological approach when the purpose is to come to an intuitive awareness and deep understanding of how humans experience something. There are no specific methods for gathering data to develop a phenomenological analysis; interviews, participant observation, and even literary fiction provide ample material for review. The primary task is researcher reflection on the data to capture the essence and essentials of the experience that make it what it is.

Case Study

A case study focuses on a single unit of analysis (one person, one group, one event, etc.). It involves the study of an issue explored through one or more cases within a bounded system. The data collection in case study research is typically extensive, drawing on multiple sources of information, such as observations, interviews, documents, and audiovisual materials. The analysis of these data can be a holistic analysis of the entire case or an embedded analysis of a specific aspect of the case. Through this data collection, a detailed description of the case emerges. After the description, the researcher might focus on a few key issues (or analysis of themes), not for generalising beyond the case, but for understanding the complexity of the case. In the final interpretative phase, the researcher reports the meaning of the case.

Content Analysis

Content analysis is the systematic examination of texts and visuals (e.g. newspapers, magazines, speech transcripts), media (e.g. films, television episodes, Internet sites), and/or material culture (e.g. artifacts, commercial products) to analyse their prominent manifest and latent meanings. A manifest meaning is one that is surface-level and apparent; a latent meaning is one that is suggestive, connotative and subtextual. Some content analyses are both qualitative and quantitative in design, since statistical frequency of occurrence becomes one important measure of salient themes, especially in texts and media.

Ethics in Qualitative Research

Social research, including research in technology-enhanced learning, concerns people’s lives in the social world, and therefore it inevitably involves ethical issues. Qualitative research often intrudes more into the private sphere: it is inherently interested in people’s personal views and often targets sensitive matters. To promote good research practice, some academic societies have formulated codes of ethics (see, for instance, the American Sociological Association Code of Ethics or the Statement of Ethical Practice of the British Sociological Association). Flick (2018) proposes the following basic ethical principles for conducting qualitative research:

  • Informed consent means that no one should be involved in research as a participant without knowing about this and without having the chance to refuse to take part.
  • Deception of research participants (by covert observation or by giving them false information about the purpose of research) should be avoided.
  • Participants’ privacy should be respected and confidentiality should be guaranteed and maintained.
  • Accuracy of the data and their interpretation should be the leading principle, which means that no omission or fraud with the collection or analysis of data should occur in the research practice.
  • In relation to the participants, respect for the person is seen as essential.
  • Beneficence, which means considering the well-being of the participants.
  • Justice, which addresses the relation between benefits and burdens for research participants.

Data collection methods

Qualitative data can come from many different sources and are usually transformed into textual form, resulting in hundreds of pages of transcripts and field notes. Marshall and Rossman (2006) suggest four primary methods for gathering information: (1) participating in the setting, (2) observing directly, (3) interviewing in depth, and (4) analysing documents and material culture.

Participating in the setting

Participant observation was developed primarily from cultural anthropology and qualitative sociology. It is an overall approach to inquiry and a data-gathering method which demands first-hand involvement in the social world chosen for study. Immersion in the setting permits the researcher to hear, to see, and to begin to experience reality as the participants do. Personal reflections are integral to the emerging analysis of a cultural group.

Observation

Observation entails the systematic noting and recording of events, behaviours and artefacts in the social setting chosen for study. The observational record is frequently referred to as ‘field notes’. For studies relying on observation, the researcher makes no special effort to take on a particular role in the setting. Classroom studies are one example of observation, in which the researcher documents and describes complex actions and interactions. Observation can range from highly structured, detailed notation of behaviour guided by checklists to a more holistic description of events and behaviour.

In-Depth Interviewing

With in-depth interviews, the researcher explores a few general topics to help uncover the participant’s views. There are different types of interviews: one-to-one interviews, focus group interviews, and retrospective interviews.

One-to-one interviews, also called ‘professional conversations’, have a structure and a purpose: to obtain descriptions of the life of the interviewee with respect to interpreting the meaning of a described phenomenon. One-to-one interviews can occur in single or multiple sessions, and can be structured, unstructured or semi-structured.

Focus group interviews involve a group format (usually 2-12 members). This format draws on the collective experience of group brainstorming: participants think together, inspire and challenge each other, and react to emerging issues and points. The format also allows for various degrees of structure.

Retrospective interviews come under the umbrella term of ‘introspective methods’. The data generated by this methodology are called a ‘verbal report’ or ‘verbal protocol’. Retrospective interviews, as the name suggests, take place after the task or process has been completed. The relevant information must therefore be retrieved from long-term memory, so the validity of retrospective protocols depends on the time interval between the occurrence of a thought and its verbal report.

Review of documents

Researchers supplement participant observation, direct observation and interviewing by gathering and analysing documents produced in the course of everyday events or constructed specifically for the research at hand. Logs, announcements, formal policy statements, websites, emails and so on are all useful in developing an understanding of the setting or group studied. Research journals can also be quite informative. The use of documents often entails a specialized analytic approach: content analysis.

Qualitative Data Analysis (QDA)

In this video by EATEL, we provide a brief introduction to Qualitative Research theory and key considerations of Qualitative Data Analysis.

Coding

Coding is a way of analysing that can be applied to all sorts of data and is not tied to a specific method of data collection. Coding is not the only way of analysing data, but it is the most prominent one when the data result from observations, interviews or review of documents. Dörnyei points out that:

“Most research texts would confirm that regardless of the specific methodology followed, qualitative data analysis invariably starts with coding.” (missing reference). He also recognizes that this statement is only partially true, because a considerable amount of analysis has usually already taken place by the time we begin the actual coding process. This first stage, often called ‘pre-coding’, involves reading and rereading the texts (i.e. transcripts, documents, etc.), reflecting on them, and noting down our thoughts in journal entries and memos. These pre-coding reflections shape our thinking about the data and influence the way we go about coding.

In this video by DelveTool, an introduction to Qualitative Coding is presented.

Initial coding

How shall we begin coding? The codes you develop may be influenced by a number of factors (Silver & Lewins, 2008):

  • Research aims
  • Methodology and analytic approach
  • Amount, kinds and sources of data
  • Level and depth of analysis
  • Constraints
  • Research audience

There are different approaches to coding:

Table 2. Approaches to coding

  • Corbin & Strauss (2018): Open; Axial; Selective
  • Richards (2005): Descriptive; Topic; Analytical
  • Layder (1998): Provisional; Core; Satellite code
  • Mason (2002): Literal; Interpretive; Reflexive
  • Miles & Huberman (1994): Descriptive; Interpretive; Pattern coding
  • Seidel (1998): Objectivistic; Heuristic

Codes can be generated inductively or deductively. The inductive approach begins with a set of empirical observations, seeks patterns in those observations, and then theorizes about those patterns; that is, codes emerge from salient aspects identified in the data. The deductive approach begins with a theory, develops hypotheses from that theory, and then collects and analyses data to test those hypotheses; that is, codes are defined according to predefined areas of interest.

Second-level coding

Qualitative analytical methods contain a second-level coding process because in most investigations we want to go beyond a mere descriptive labelling of the relevant data segments. A useful process in second-level coding is to produce a hierarchy of codes, in the form of a tree diagram.
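To make this concrete, below is a minimal sketch in Python of how a hierarchy of codes might be represented and printed as a tree diagram. The code names are invented for illustration; in practice the hierarchy emerges from grouping first-level codes under broader categories.

```python
# Minimal sketch: a code hierarchy stored as nested dictionaries and
# printed as a tree diagram. All code names here are invented examples.

code_tree = {
    "Technology use": {
        "Social technologies": {},
        "Learning platforms": {
            "LMS features": {},
            "Mobile access": {},
        },
    },
    "Teaching practice": {
        "Mixed-ability classes": {},
        "Assessment": {},
    },
}

def print_tree(node: dict, indent: int = 0) -> None:
    """Recursively print each code indented under its parent code."""
    for code, children in node.items():
        print("  " * indent + "- " + code)
        print_tree(children, indent + 1)

print_tree(code_tree)
```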

Using a template of codes

Dörnyei (missing reference) describes the use of a template of codes as a variation on the standard coding procedures because it does not emphasize the emergent nature of the codes but, on the contrary, starts with a template. The first phase of data analysis involves preparing a code manual, and the texts are then coded using this predetermined template. The template method can only be used if there is sufficient background information on the topic to define the template categories.
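As an illustration of coding with a predetermined template, the sketch below applies a small code manual to two data segments using simple keyword matching. The code names and keywords are invented for this example; a real code manual would be derived from background literature on the topic, and coding by hand or in a CAQDAS package is far more nuanced than string matching.

```python
# Sketch of template (deductive) coding: segments are coded against a
# predetermined code manual. Codes and keywords are illustrative only.

code_manual = {
    "SocTECH": ["social technolog", "social media"],
    "MIX": ["mixed ability", "mixed-ability"],
}

segments = [
    "The students will use social technologies",
    "I prefer to teach mixed ability classes",
]

for segment in segments:
    matched = [
        code
        for code, keywords in code_manual.items()
        if any(keyword in segment.lower() for keyword in keywords)
    ]
    print(f"{segment!r} -> {matched}")
```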

Memoing

Throughout the analytic process, researchers are encouraged to write reflective memos and notes. Marshall and Rossman (2006) suggest writing early and throughout the research process, but especially during more focused analysis.

Themes

Themes are also an important concept in QDA. Corbin and Strauss (2018) define them as threads or ideas that emerge from a text (e.g. an interview). While analysing one of their memos they explain:

“These [themes] have already been identified as concepts, but at this time they will be elevated to the status of category/theme not only because they seem to run throughout the entire interview but also because they seem to be able to pull together some of the lesser concepts” (Corbin & Strauss, 2018)

Approaches to qualitative data analysis

Cohen, Manion and Morrison (Cohen et al., 2018) highlight that “qualitative data analysis involves organizing, accounting for and explaining the data; in short, making sense of data in terms of the participants’ definitions of the situation, noting patterns, themes, categories and regularities”. This section demonstrates several forms of qualitative data analysis and focuses more specifically on content analysis approaches. Several researchers note that the qualitative researcher should abide by the principle of “fitness for purpose” (Cohen et al., 2018): the researcher may set out to describe, summarize, interpret, prove or demonstrate. It is important for the researcher to decide on the purpose, since this decision determines the kind of analysis performed on the data and, in turn, the way the analysis is written up. The analysis will also be influenced by the kind of qualitative study being undertaken. For example, a case study may be most suitably written chronologically as a descriptive narrative, whilst grounded theory and content analysis proceed through a systematic series of analyses, including coding and categorization, until a theory emerges that explains the phenomena being studied.

Five ways of organising and presenting data analysis (Cohen et al., 2018)

  1. By groups: data are organized and presented by groups of respondents, in response to particular issues. This method is often used when a single-instrument approach is adopted.
  2. By individuals: the total responses of a single participant are presented first, and then the analysis moves on to the next individual. Here the researcher often has to proceed to a second level of analysis, looking for themes, shared responses and patterns in order to summarize the data.
  3. By issue: data are organized and presented according to their relevance to a particular issue. This way is economical, yet it misses the wholeness and integrity of each individual respondent, and data can become decontextualized.
  4. By research question: all the relevant data from the various data streams are collated to provide a collective answer to a research question.
  5. By instrument: the findings of each instrument are presented in turn. While this approach retains fidelity to the coherence of the instrument and enables the reader to see clearly which data derive from which instrument, the instrument is often only a means to an end, and further analysis will be required to analyse the content of the responses, by issue and by people.

According to Cohen, Manion and Morrison (Cohen et al., 2018), the following are considered generalized stages of analysis:

  1. Generating natural units of meaning
  2. Classifying, categorizing and ordering these units of meaning
  3. Structuring narratives to describe the contents
  4. Interpreting the data

Methodological tools for analysing qualitative data

There are several procedural tools for analysing qualitative data. We focus here on two such tools, analytic induction and constant comparison.

Analytic induction

Analytic induction is a process in which data are scanned to generate categories of phenomena, relationships between these categories are sought and working typologies and summaries are written on the basis of the data examined. The strategy under this approach involves the following according to Bogdan and Biklen (missing reference):

  • In the early stages of the research, a rough definition and explanation of the particular phenomenon is developed.
  • This definition and explanation is examined in the light of the data that are being collected during the research.
  • If the definition and/or explanation that have been generated need modification in the light of new data (e.g. if the data do not fit the explanation or definition) then this is undertaken.
  • A deliberate attempt is made to find cases that may not fit into the explanation or definition.
  • The process of redefinition and reformulation is repeated until an explanation is reached that embraces all the data, and until a generalized relationship has been established which also embraces the negative cases.

Constant comparison

In constant comparison the researcher compares newly acquired data with existing data and categories and theories that have been devised and which are emerging, in order to achieve a perfect fit between these and the data (Cohen et al., 2018).

Content analysis

Content analysis is often defined as a process in which “many words of texts are classified into much fewer categories” (Weber, 1990). Cohen, Manion and Morrison (Cohen et al., 2018) define content analysis as “a strict and systematic set of procedures for the rigorous analysis, examination and verification of the contents of written data”.

How does content analysis work?

Content analysis involves “coding, categorizing (creating meaningful categories into which the units of analysis – words, phrases, sentences etc. – can be placed), comparing (categories and making links between them), and concluding – drawing theoretical conclusions from the text” (Cohen et al., 2018). Researchers often indicate the quantitative nature of content analysis when they note that the researcher needs to count concepts, words or occurrences in documents and report them in tabular form. According to Cohen, Manion and Morrison (Cohen et al., 2018), the whole process of content analysis can be summarized in eleven steps:

Step 1: Define the research questions to be addressed by the content analysis. This step includes identifying what one wants from the texts to be content-analysed.

Step 2: Define the population from which units of text are to be sampled. The population here refers not only to people but, mainly, to text – the domains of the analysis.

Step 3: Define the sample to be included. The key issues of sampling apply to the sampling of texts: representativeness, access, size of the sample and generalizability of the results.

Step 4: Define the context of the generation of the document. In this step one will examine, for example: how the material was generated; who was involved; who was present; whether the data are accurately reported; the authenticity and credibility of the documents etc.

Step 5: Define the units of analysis. This can be at very many levels, for example, a word, phrase, sentence, paragraph, whole text, people and themes. Krippendorff (2018) suggests five kinds of sampling units: physical (e.g. time, place, size); syntactical (words, grammar, sentences, series etc.); categorical (members of a category have something in common); propositional (delineating particular constructions or propositions); and thematic (putting texts into themes and combinations of categories).

Step 6: Decide the codes to be used in the analysis. Codes can be at different levels of specificity and generality when defining content and concepts. To be faithful to the data, the codes themselves derive from the data responsively rather than being created pre-ordinately. A code is a word or abbreviation sufficiently close to that which it is describing for the researcher to see at a glance what it means.

Step 7: Construct the categories for analysis. Categories are the main groupings of constructs or key features of the text, showing links between units of analysis.

Step 8: Conduct the coding and categorizing of the data. Once the codes and categories have been decided, the analysis can be undertaken. This concerns the actual ascription of codes and categories to the text. In coding a piece of transcription the researcher goes through the data systematically, typically line by line, and writes a descriptive code by the side of each piece of datum, for example:

  • Text: “The students will use social technologies” → Code: SocTECH
  • Text: “I prefer to teach mixed ability classes” → Code: MIX

One can see that the codes here are abbreviations, enabling the researcher to immediately understand the issue that they denote because they resemble that issue.

Step 9: Conduct the data analysis. Once the data have been coded and categorized, the researcher can count the frequency of each code or word in the text, and the number of words in each category. This is the process of retrieval, which may be in multiple modes, for example words, codes, nodes and categories. To ensure reliability, Weber (1990) suggests that it is advisable at first to work on small samples of text rather than the whole text, to test out the coding and categorization, and make amendments where necessary.
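As a minimal sketch of this retrieval step, the snippet below counts code frequencies over a handful of coded segments, reusing the codes from Step 8; the third segment is invented so that the counts differ.

```python
# Sketch of retrieval: count how often each code occurs once the data
# have been coded. The segment/code pairs extend the Step 8 example.

from collections import Counter

coded_segments = [
    ("The students will use social technologies", "SocTECH"),
    ("I prefer to teach mixed ability classes", "MIX"),
    ("They collaborate through social technologies", "SocTECH"),
]

code_frequencies = Counter(code for _, code in coded_segments)
for code, count in code_frequencies.most_common():
    print(f"{code}: {count}")
```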

Step 10: Summarizing. By this stage, the investigator will be in a position to write a summary of the main features of the situation that have been researched so far.

Step 11: Making speculative inferences. This is an important stage, for it moves the research from description to inference.

Reliability and Validity in Qualitative Research

Patton (2002) states that validity and reliability are two factors which any qualitative researcher should be concerned about while designing a study, analysing results and judging the quality of the study. This corresponds to the following question:

“How can an inquirer persuade his or her audiences that the research findings of an inquiry are worth paying attention to?” (Lincoln & Guba, 1985). Reliability and validity are conceptualized as trustworthiness, rigour and quality in the qualitative paradigm (Golafshani, 2003). Basic notions are summarized below.

Basic notions (Cohen et al., 2018)

Validity

In qualitative research, validity might be addressed through the honesty, depth, richness and scope of the data achieved, the participants approached, the extent of triangulation, and the disinterestedness and objectivity of the researcher.

  • In qualitative data, the subjectivity of respondents and their opinions, attitudes and perspectives together contribute to a degree of bias. Validity should therefore be seen as a matter of degree rather than as an absolute state.
  • Types of validity: generalizability, replicability and controllability, predictability, the derivation of laws and universal statements of behaviour, context freedom, fragmentation and atomization of research, randomization of samples, observability

Maxwell (1992) suggests that ‘understanding’ is a more suitable term than ‘validity’ in qualitative research: it is the meaning that subjects give to the data, and the inferences drawn from the data, that are important. He proposes five kinds of validity for exploring the notion of ‘understanding’:

  • descriptive validity (accuracy of the account, that is not made up, selective or distorted)
  • interpretive validity (ability of research to catch the essence of situations and events)
  • theoretical validity (the extent to which the research explains phenomena)
  • generalizability (the theory is useful in understanding similar situations)
  • evaluative validity (application of an evaluative rather than descriptive framework)

Reliability

In qualitative research, reliability is often replaced with terms such as credibility, neutrality, confirmability, dependability, consistency, applicability, trustworthiness and transferability.

Replication in qualitative data includes repeating:

  • The status position of the researcher
  • The choice of informants/respondents
  • The social situations and conditions
  • The analytic constructs used
  • The methods of data collection and analysis

To address reliability:

  • Stability of observations (whether the same researcher would make the same observations at a different time)
  • Parallel forms (whether the researcher would make the same observations if looking at different phenomena)
  • Inter-rater reliability (whether another observer would have observed the same phenomena and analysed them in the same way; a sketch of one common measure, Cohen's kappa, follows this list)
  • Respondent validation.
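One widely used measure of inter-rater reliability is Cohen's kappa, which corrects the observed agreement between two coders for the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). The sketch below computes it for two hypothetical raters who coded the same five segments; the ratings are invented for illustration.

```python
# Sketch of Cohen's kappa for two raters coding the same segments.
# p_o is the observed proportion of agreement; p_e is the agreement
# expected by chance, from each rater's marginal code frequencies.

from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Invented ratings: the raters disagree on one of five segments.
rater_a = ["SocTECH", "MIX", "SocTECH", "MIX", "SocTECH"]
rater_b = ["SocTECH", "MIX", "MIX", "MIX", "SocTECH"]
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # kappa = 0.62
```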

Credibility

Can be achieved through:

  • prolonged engagement in the field
  • persistent observation
  • triangulation of methods, sources, investigators and theories
  • peer debriefing
  • negative case analysis
  • member checking

Triangulation

Triangulation is the use of two or more methods of data collection in the study of some aspect of human behaviour. Triangular techniques attempt to map out, or explain more fully, the richness and complexity of human behaviour by studying it from more than one standpoint and by using both quantitative and qualitative data.

  • Benefits: enhances confidence in the data and in the researcher's conclusions
  • A powerful way of demonstrating concurrent validity (particularly in qualitative research)

Computer Assisted Qualitative Data Analysis (CAQDAS)

Computer-assisted qualitative data analysis software offers tools that assist with qualitative research, such as transcription analysis, coding and text interpretation, recursive abstraction, content analysis, discourse analysis and grounded theory methodology.

In this video by EATEL, we describe how Computer Assisted Qualitative Data Analysis (CAQDAS) can support Qualitative Data Analysis. We demonstrate basic features of NVivo, focusing on how it can be used to enter and code data from interviews, PDFs, surveys and websites, among other sources.

What does computer software offer qualitative researchers?

CAQDAS packages do not do any real analysis for us; rather, they have been designed to help researchers with the clerical aspects of data management. In other words, CAQDAS replaces the traditional tools of qualitative data analysis (paper, pen, scissors) with tools of the digital age. The purpose of the software is not to provide you with a methodological or analytic framework but rather to support you in certain tasks, such as planning and managing your project, writing analytic memos, marking and commenting on data, searching for strings, words or phrases, developing a coding schema, coding, recoding, organizing data, hyperlinking, mapping, and generating output.

This brief intro video covers what qualitative data analysis software can do and what it cannot. It explores the functionality of qualitative data analysis software by distinguishing four main functions: Organization of Data, Annotation of Data, Searching of Data, and Display of Data. For more information on Qualitative Data Analysis Software and Qualitative Methods, visit squaremethodology.

Which is the ‘best’ CAQDAS package?

Lewins and Silver (Silver & Lewins, 2008) recognize that while this is the most frequently asked question, it is impossible to answer. They highlight that all packages have tools in common but also their own distinctive features. CAQDAS software development is currently at an expansion stage, with a number of packages competing for the highly lucrative dominant position. The following overview presents the most frequently mentioned packages in the literature.


NVivo

NVivo was originally developed as NUD*IST by Tom Richards and Lyn Richards as a scroll mode Mac package at La Trobe University, Melbourne. Subsequently, QSR International was formed, which now develops and supports all QSR software products.

Features

  • Import and analyse images, videos, emails, webdata
  • Relationship coding
  • Charts, word clouds, word trees, explore and comparison diagrams
  • Import articles from reference management software
  • Review coding with coding stripes and highlights
  • Matrix coding, coding word frequency, text search and coding comparison queries


ATLAS.ti

ATLAS.ti was developed at the Technical University of Berlin as an interdisciplinary collaborative project between the psychology department, computer scientists and future users (1989-92).

Features

  • Unicode throughout
  • Undo/redo (100 steps)
  • Direct import from Twitter, Endnote, Evernote
  • Powerful visual query editor for creating and modifying SmartCodes
  • Full project search with dynamic fade-in/fade-out hit categories
  • Network groups
  • Memo comments
  • Multiple documents


HyperRESEARCH

HyperRESEARCH was first developed in 1990 by Sharlene Hesse-Biber, Scott Kinder and Paul Dupuis in Boston, Massachusetts. Since 1991, HyperRESEARCH has been developed by ResearchWare, who continue to develop and support the software.

Features

  • Importer tool takes your survey from a spreadsheet
  • Automatically builds a complete study
  • Easy to use
  • Runs on multiple technological platforms


MAXqda

MAXqda is the latest in a software stream originally developed by Udo Kuckartz in order to manage political discourse data on the environment.

Features

  • Work with bibliographic data from reference management programmes
  • Organise and analyse literature and excerpts
  • Create literature review

Working with NVivo

In this video by NVivobyQSR, you will see a brief introduction to NVivo.

Here you will find a glossary with key terms to work with NVivo.

Instructions to install NVivo

Here you will find instructions on how to install NVivo and work with the 14-day free trial.

Introduction to NVivo Manual

Here you will find a step-by-step guide on how to work with NVivo.