Transparent Use of LLMs
Large language models are increasingly used in research—for coding assistance,
literature synthesis, and even qualitative analysis. This creates new questions
about transparency: What was the model's role? How were outputs validated?
Could the analysis be reproduced?
My Approach to AI in Research
When I use LLMs in research, I document the model, version, and prompts used.
For qualitative analysis, I've developed a three-stage validation approach that
combines LLM-assisted coding with human verification—balancing efficiency with rigour.
This includes being transparent about this very website, which was built with
substantial AI assistance. The goal isn't to hide the tools—it's to be clear about
how they were used and what human judgment was applied.
Detailed methodology guide coming soon
Three-Stage Validation Process
1. LLM-Assisted First Pass
Use the LLM to generate initial codes or themes, documenting the exact prompts used. This provides a starting point, not a final answer.
2. Human Verification
Independently review the LLM's outputs against the source data. Correct errors, add nuance, and document disagreements between human and machine coding.
3. Transparency Reporting
Report exactly how the LLM was used, what was changed after human review, and provide all prompts and outputs for others to scrutinise.
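The three stages above can be sketched as a simple provenance record. This is only an illustration of the idea, not an established schema: the class name, field names, and example values are all hypothetical, and any real study would adapt them to its own coding framework.

```python
# Illustrative provenance record for LLM-assisted qualitative coding.
# All names and values here are hypothetical, for demonstration only.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class CodingRecord:
    model: str            # model identifier, e.g. a provider's model name
    model_version: str    # provider-reported version or date string
    prompt: str           # the exact prompt sent to the model
    llm_codes: list       # stage 1: codes proposed by the model
    human_codes: list     # stage 2: codes after independent human review
    disagreements: list = field(default_factory=list)  # documented divergences

    def report(self) -> str:
        # Stage 3: serialise the full record for transparency reporting
        return json.dumps(asdict(self), indent=2)

# Hypothetical usage with made-up data
record = CodingRecord(
    model="example-model",
    model_version="2024-01",
    prompt="Identify themes in the following interview excerpt: ...",
    llm_codes=["trust", "workload"],
    human_codes=["trust", "workload", "autonomy"],
    disagreements=["LLM missed 'autonomy'; added after human review"],
)
print(record.report())
```

Keeping the record as structured data rather than free-form notes means the full audit trail (prompts, machine outputs, and human corrections) can be published alongside the analysis for others to scrutinise.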