Enhancing qualitative research in psychology with large language models: a methodological exploration and examples of simulations
Evgeny Smirnov · 2024 · Qualitative Research in Psychology
At a Glance
Shows how LLMs can speed up and triangulate content analysis even for complex areas
Summary
This paper matters because it translates large language models (especially GPT-4) from a vague "helpful tool" into concrete, text-based qualitative research workflows. Using a set of simulations, it illustrates how LLMs can support study planning (e.g., generating plausible mock interviews), perform directed and conventional content analysis, and evaluate narrative properties like causal coherence. The novelty is the methodological framing: LLMs are positioned as a fast, repeatable "second coder/validator" that can bolster trustworthiness (credibility, dependability, confirmability) when researchers document prompts, repeat runs, and report classification metrics. The implication is a pragmatic path for qualitative psychologists to reduce analysis time while adding structured checks—though the paper also flags key limitations (synthetic data, single-human validation) and insists on human expert verification.
LLMs can identify even complex themes, such as existential concerns, very accurately, and can perform both directed and conventional content analysis.
— ES
- Method:
- LLM-based simulations
- Background:
- Familiarity with qualitative methods in psychology (content/narrative analysis, trustworthiness criteria) and basic LLM concepts (prompting, RAG).
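The "second coder" framing above implies comparing LLM-assigned codes against human codes and reporting an agreement metric. A minimal sketch of that check, using Cohen's kappa as a stand-in for whatever classification metric a study reports; the code labels and data here are hypothetical, not from the paper:

```python
from collections import Counter

def cohens_kappa(human, llm):
    """Agreement between two coders beyond chance (Cohen's kappa)."""
    assert len(human) == len(llm)
    n = len(human)
    observed = sum(h == m for h, m in zip(human, llm)) / n
    # Chance agreement from each coder's marginal label frequencies.
    h_counts, m_counts = Counter(human), Counter(llm)
    expected = sum(h_counts[c] * m_counts[c]
                   for c in set(human) | set(llm)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by a human analyst and an LLM
# to the same five interview excerpts.
human_codes = ["existential", "coping", "existential", "other", "coping"]
llm_codes   = ["existential", "coping", "existential", "coping", "coping"]
print(round(cohens_kappa(human_codes, llm_codes), 2))  # → 0.67
```

Repeating the LLM run with a documented prompt and recomputing the metric is what the paper frames as a dependability check, with final code decisions still verified by a human expert.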