Research Practices

Open Science

Research should be transparent enough to verify, reproducible enough to build on, and accessible enough to reach the people who need it.


Why Open Science Matters

Psychology has faced a replication crisis that shook confidence in published findings. Many "established" effects failed to replicate when tested rigorously. The response has been a shift toward practices that make research more transparent from the start—not just at publication.

For me, this means pre-registering studies before collecting data, sharing analysis code so others can check my work, and making materials available so studies can actually be replicated. It's not about perfection—it's about honesty.

"The first principle is that you must not fool yourself—and you are the easiest person to fool."

— Richard Feynman

Core Principles

These aren't just ideals—they're practices I apply to every research project.

Transparency

Document decisions as they happen, not as you wish they had happened. Share the mess, not just the polished result.

Reproducibility

If someone else can't run your analysis and get the same results, the results don't mean much.

Accessibility

Research locked behind paywalls doesn't help anyone. Share what you can, where you can.


Open Science Framework

I use the Open Science Framework (OSF) to organise and share research materials. Here's what you'll find there:

My OSF Repository

Pre-registration

Hypotheses, methods, and analysis plans registered before data collection to prevent p-hacking and HARKing (hypothesising after the results are known).

Analysis Code

R scripts and computational notebooks shared openly so results can be verified and methods adapted.

Materials & Data

Study materials, instruments, and anonymised datasets made available where ethically appropriate.
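To illustrate what "verifiable" shared code looks like in practice: fixing the random seed and recording the environment means anyone rerunning the script gets the same numbers. This is a minimal sketch in Python (the author's own analyses are in R), and the toy analysis is purely hypothetical:

```python
import platform
import random
import sys


def run_analysis(seed: int = 2024) -> dict:
    """Toy analysis: fixing the seed makes the result exactly reproducible."""
    random.seed(seed)
    sample = [random.gauss(0, 1) for _ in range(1000)]
    return {"n": len(sample), "mean": sum(sample) / len(sample)}


def provenance() -> dict:
    """Record the environment so others can diagnose any differences."""
    return {"python": sys.version.split()[0], "platform": platform.system()}


result = run_analysis(seed=2024)
# Rerunning with the same seed yields an identical result.
assert result == run_analysis(seed=2024)
```

Sharing the seed and the environment record alongside the script is what turns "trust my numbers" into "check my numbers".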


Transparent Use of LLMs

Large language models are increasingly used in research—for coding assistance, literature synthesis, and even qualitative analysis. This creates new questions about transparency: What was the model's role? How were outputs validated? Could the analysis be reproduced?

My Approach to AI in Research

When I use LLMs in research, I document the model, version, and prompts used. For qualitative analysis, I've developed a three-stage validation approach that combines LLM-assisted coding with human verification—balancing efficiency with rigour.

This includes being transparent about this very website, which was built with substantial AI assistance. The goal isn't to hide the tools—it's to be clear about how they were used and what human judgment was applied.
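Documenting the model, version, and prompts can be as simple as an append-only audit log. A sketch of that idea, assuming a JSON Lines log file (the function and file names here are hypothetical illustrations, not the author's actual tooling):

```python
import datetime
import json


def log_llm_use(model: str, version: str, prompt: str, output: str,
                log_path: str = "llm_audit_log.jsonl") -> dict:
    """Append one LLM interaction to a JSON Lines audit log."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model,      # which model was used
        "version": version,  # which release/version of that model
        "prompt": prompt,    # the exact prompt, verbatim
        "output": output,    # the raw output before any human editing
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry


entry = log_llm_use("example-model", "2025-01",
                    "Summarise transcript 3.", "Summary text...")
```

One line per interaction keeps the log greppable and easy to deposit alongside other study materials.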

Detailed methodology guide coming soon

Three-Stage Validation Process

1

LLM-Assisted First Pass

Use the LLM to generate initial codes or themes, documenting the exact prompts used. This provides a starting point, not a final answer.

2

Human Verification

Independently review the LLM's outputs against the source data. Correct errors, add nuance, and document disagreements between human and machine coding.

3

Transparency Reporting

Report exactly how the LLM was used, what was changed after human review, and provide all prompts and outputs for others to scrutinise.
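The three stages above can be sketched as a small pipeline. This is an illustrative sketch only, not the author's actual code: the LLM call is stubbed out, and all function names and sample data are hypothetical.

```python
def llm_first_pass(segments, prompt):
    """Stage 1: stub for an LLM call proposing an initial code per segment.
    In practice this would call a documented model with the exact prompt."""
    return {seg: "proposed-code" for seg in segments}


def human_verify(llm_codes, human_codes):
    """Stage 2: compare LLM codes with independent human codes,
    recording agreement or disagreement per segment."""
    report = {}
    for seg, llm_code in llm_codes.items():
        human_code = human_codes.get(seg)
        report[seg] = {"llm": llm_code, "human": human_code,
                       "agrees": llm_code == human_code}
    return report


def transparency_report(prompt, comparison):
    """Stage 3: bundle the prompt, outputs, and post-review changes
    so others can scrutinise exactly what the LLM contributed."""
    disagreements = {s: r for s, r in comparison.items() if not r["agrees"]}
    return {
        "prompt": prompt,
        "n_segments": len(comparison),
        "n_changed_after_review": len(disagreements),
        "disagreements": disagreements,
    }


segments = ["Interview 1, line 4", "Interview 2, line 9"]
prompt = "Assign one descriptive code to each transcript segment."
llm_codes = llm_first_pass(segments, prompt)
human_codes = {"Interview 1, line 4": "proposed-code",
               "Interview 2, line 9": "revised-code"}
report = transparency_report(prompt, human_verify(llm_codes, human_codes))
# report records that one of two codes was changed after human review.
```

The point of the structure is that the disagreement record is produced automatically as a by-product of verification, so the transparency report costs nothing extra to assemble.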