Psychology · Must Read · Beginner

Enhancing qualitative research in psychology with large language models: a methodological exploration and examples of simulations

Evgeny Smirnov (2024)

Published
Nov 30, 2024
Journal
Qualitative Research in Psychology · Vol. 22 · No. 2
DOI
10.1080/14780887.2024.2428255

At a Glance

Shows how LLMs can speed up and triangulate content analysis even for complex areas

Summary (AI)

This paper matters because it translates large language models (especially GPT-4) from a vague "helpful tool" into concrete, text-based qualitative research workflows. Using a set of simulations, it illustrates how LLMs can support study planning (e.g., generating plausible mock interviews), perform directed and conventional content analysis, and evaluate narrative properties like causal coherence. The novelty is the methodological framing: LLMs are positioned as a fast, repeatable "second coder/validator" that can bolster trustworthiness (credibility, dependability, confirmability) when researchers document prompts, repeat runs, and report classification metrics. The implication is a pragmatic path for qualitative psychologists to reduce analysis time while adding structured checks—though the paper also flags key limitations (synthetic data, single-human validation) and insists on human expert verification.
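The "second coder/validator" idea above implies quantifying how well the LLM's codes agree with a human coder's. As a minimal sketch (not taken from the paper), Cohen's kappa is one standard classification metric a researcher could report; the category labels and segment codes below are entirely hypothetical:

```python
# Illustrative sketch: treating an LLM as a "second coder" and measuring
# chance-corrected agreement with a human coder via Cohen's kappa.
from collections import Counter


def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' category labels over the same segments."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of segments where the two codings match.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement by chance, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)


# Hypothetical codes for ten interview segments (made-up categories):
human = ["meaning", "isolation", "meaning", "freedom", "death",
         "meaning", "isolation", "freedom", "death", "meaning"]
llm   = ["meaning", "isolation", "freedom", "freedom", "death",
         "meaning", "isolation", "freedom", "death", "isolation"]

print(round(cohens_kappa(human, llm), 3))  # → 0.737
```

Repeating the LLM coding run several times and reporting kappa against the human codes for each run is one concrete way to document the dependability checks the paper calls for.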

Method Snapshot

LLM-based simulations

Background (AI)

Familiarity with qualitative methods in psychology (content/narrative analysis, trustworthiness criteria) and basic LLM concepts (prompting, RAG).

LLMs can identify even complex themes, such as existential concerns, with high accuracy, and can perform both directed and conventional content analysis.

— ES
