Awesome-Agent-Skills-for-Empirical-Research psychology-research-guide

Psychological research methods, experimental design, and analysis

install
source · Clone the upstream repo
git clone https://github.com/brycewang-stanford/Awesome-Agent-Skills-for-Empirical-Research
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/brycewang-stanford/Awesome-Agent-Skills-for-Empirical-Research "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/43-wentorai-research-plugins/skills/domains/social-science/psychology-research-guide" ~/.claude/skills/brycewang-stanford-awesome-agent-skills-for-empirical-research-psychology-resear && rm -rf "$T"
manifest: skills/43-wentorai-research-plugins/skills/domains/social-science/psychology-research-guide/SKILL.md
source content

Psychology Research Guide

Overview

Psychology is the scientific study of mind and behavior, spanning cognitive processes, social influence, developmental trajectories, clinical disorders, and neuroscience. The field has undergone a methodological revolution since the replication crisis of the 2010s, with new standards for statistical rigor, pre-registration, transparency, and open science fundamentally reshaping how research is conducted and evaluated.

This guide covers the practical aspects of conducting psychology research in the post-replication-crisis era: experimental design with adequate power, pre-registration, appropriate statistical analysis, effect size reporting, and the tools and platforms that support reproducible psychological science. The focus is on what reviewers and editors at top journals now expect.

Whether you are designing a behavioral experiment, analyzing survey data, conducting a psychometric validation, or reviewing a manuscript, these patterns reflect current best practices in the field.

Experimental Design

Between-Subjects vs. Within-Subjects

| Design | Advantages | Disadvantages | When to Use |
| --- | --- | --- | --- |
| Between-subjects | No carryover effects, simpler | Requires more participants, individual differences | Deception studies, one-shot manipulations |
| Within-subjects | More power, fewer participants | Order effects, demand characteristics | Perception, memory, reaction time |
| Mixed | Combines benefits | Complex analysis | Treatment x individual difference |

Power Analysis Before You Collect Data

from statsmodels.stats.power import TTestIndPower, FTestAnovaPower
import numpy as np

# Two-sample t-test power analysis
analysis = TTestIndPower()

# Question: "How many participants per group for d=0.5, power=0.80?"
n_per_group = analysis.solve_power(
    effect_size=0.5,     # Cohen's d (medium effect)
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)
print(f"Required N per group: {int(np.ceil(n_per_group))}")  # 64

# For small effects (d=0.2), typical of post-replication-crisis estimates
n_small = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.80)
print(f"Required N per group for d=0.2: {int(np.ceil(n_small))}")  # 394

# One-way ANOVA (3 groups)
# Note: FTestAnovaPower.solve_power returns the TOTAL sample size, not N per group
anova_analysis = FTestAnovaPower()
n_anova = anova_analysis.solve_power(
    effect_size=0.25,    # Cohen's f (medium)
    alpha=0.05,
    power=0.80,
    k_groups=3,
)
print(f"Required N per group (ANOVA): {int(np.ceil(n_anova / 3))}")  # 53

Effect Sizes That Reviewers Expect

| Measure | Small | Medium | Large | Use For |
| --- | --- | --- | --- | --- |
| Cohen's d | 0.2 | 0.5 | 0.8 | Group differences |
| Pearson r | 0.1 | 0.3 | 0.5 | Correlations |
| Cohen's f | 0.1 | 0.25 | 0.4 | ANOVA effects |
| eta-squared | 0.01 | 0.06 | 0.14 | ANOVA variance explained |
| Odds ratio | 1.5 | 2.5 | 4.0 | Binary outcomes |
| Cohen's w | 0.1 | 0.3 | 0.5 | Chi-squared tests |

Important: Post-replication-crisis psychology finds that most real effects are small (d = 0.2-0.4). Design for small effects unless you have strong prior evidence for larger ones.
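Reviewers often ask for effect sizes on a different scale than the one you computed. The benchmarks in the table are linked by standard conversion formulas; a minimal sketch (the helper names `d_to_r` and `f_to_eta2` are our own, not from any package):

```python
import math

def d_to_r(d, ratio=1.0):
    """Convert Cohen's d to Pearson r.

    For equal group sizes the correction term a = (n1+n2)^2 / (n1*n2) = 4.
    """
    a = (1 + ratio) ** 2 / ratio
    return d / math.sqrt(d ** 2 + a)

def f_to_eta2(f):
    """Convert Cohen's f to eta-squared: eta^2 = f^2 / (1 + f^2)."""
    return f ** 2 / (1 + f ** 2)

# The conversions reproduce the benchmark table:
print(f"d = 0.5  ->  r = {d_to_r(0.5):.3f}")          # medium d ~ r of .24
print(f"f = 0.25 ->  eta^2 = {f_to_eta2(0.25):.3f}")  # medium f ~ eta^2 of .06
```

Note that a "medium" d of 0.5 maps to an r of about .24, below the conventional "medium" r of .3; the benchmark columns are rules of thumb, not exact equivalents across rows.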

Pre-Registration

What to Pre-Register

Pre-registration template (AsPredicted.org format):

1. HYPOTHESES
   H1: Participants in the gratitude condition will report higher
   life satisfaction (SWLS scores) than those in the control
   condition (d >= 0.3).

2. DESIGN
   - 2 (gratitude vs. control) between-subjects
   - Random assignment via Qualtrics randomizer

3. PLANNED SAMPLE
   - N = 200 per condition (400 total)
   - Power: 0.90 for d = 0.3 at alpha = 0.05
   - Recruitment: Prolific, US residents, 18-65

4. EXCLUSION CRITERIA (stated before data collection)
   - Failed attention check (embedded in survey)
   - Completion time < 3 minutes or > 30 minutes
   - Duplicate IP addresses

5. MEASURED VARIABLES
   - DV: Satisfaction With Life Scale (SWLS; Diener et al., 1985)
   - Manipulation check: "How grateful do you feel right now?" (1-7)
   - Covariates: Age, gender, baseline mood (PANAS)

6. ANALYSIS PLAN
   - Primary: Independent samples t-test on SWLS scores
   - Secondary: ANCOVA controlling for baseline PANAS-PA
   - Exploratory: Moderation by trait gratitude (GQ-6)

7. ANYTHING ELSE
   - All deviations from this plan will be labeled as exploratory
   - We will report all conditions and all measures
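Pre-registered exclusion criteria like those in item 4 translate directly into a reproducible filtering step. A minimal pandas sketch, assuming hypothetical column names (`attention_check`, `duration_sec`, `ip`) that would come from your survey export:

```python
import pandas as pd

# Toy data standing in for a Qualtrics/Prolific export (columns are hypothetical)
df = pd.DataFrame({
    "attention_check": [1, 1, 0, 1, 1],       # 1 = passed embedded check
    "duration_sec":    [420, 95, 600, 1900, 510],
    "ip":              ["a", "b", "c", "a", "d"],
})

n_before = len(df)
passed = df["attention_check"] == 1
in_time = df["duration_sec"].between(3 * 60, 30 * 60)   # 3-30 minutes
unique_ip = ~df["ip"].duplicated(keep="first")           # drop repeat IPs

clean = df[passed & in_time & unique_ip]
print(f"Excluded {n_before - len(clean)} of {n_before} participants")
```

Applying all criteria in one boolean mask (rather than sequentially) makes it easy to report how many participants each criterion alone would exclude, which reviewers increasingly request.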

Pre-Registration Platforms

| Platform | Strengths | Journal Integration |
| --- | --- | --- |
| OSF Registries | Most widely used, free, flexible | Registered Reports at 300+ journals |
| AsPredicted.org | Simple, private until you share | Widely accepted |
| ClinicalTrials.gov | Required for clinical studies | FDA-mandated |
| EGAP | Political science, field experiments | APSR, AJPS |

Statistical Analysis

The Modern Analysis Workflow

import pandas as pd
import pingouin as pg
from scipy import stats

# Load data
df = pd.read_csv("experiment_data.csv")

# Step 1: Descriptive statistics by condition
descriptives = df.groupby("condition").agg(
    n=("dv", "count"),
    mean=("dv", "mean"),
    sd=("dv", "std"),
    median=("dv", "median"),
).round(3)

# Step 2: Check assumptions
# Normality
for condition in df["condition"].unique():
    subset = df[df["condition"] == condition]["dv"]
    stat, p = stats.shapiro(subset)
    print(f"{condition}: Shapiro-Wilk W={stat:.3f}, p={p:.3f}")

# Homogeneity of variance
levene_stat, levene_p = stats.levene(
    df[df["condition"] == "treatment"]["dv"],
    df[df["condition"] == "control"]["dv"],
)
print(f"Levene's test: W={levene_stat:.3f}, p={levene_p:.3f}")

# Step 3: Primary analysis with effect size and CI
result = pg.ttest(
    df[df["condition"] == "treatment"]["dv"],
    df[df["condition"] == "control"]["dv"],
    paired=False,
    alternative="two-sided",
)
print(result[["T", "dof", "p-val", "cohen-d", "CI95%", "BF10"]])

# Step 4: Bayesian analysis (increasingly expected)
bf10 = float(result["BF10"].values[0])
print(f"Bayes Factor BF10 = {bf10:.2f}")
if bf10 > 10:
    print("Strong evidence for H1")
elif bf10 > 3:
    print("Moderate evidence for H1")
elif bf10 > 1:
    print("Anecdotal evidence for H1")
else:
    print("Evidence favors H0")

ANOVA with Post-Hoc Comparisons

# One-way ANOVA
aov = pg.anova(dv="score", between="group", data=df, detailed=True)
print(aov)

# Effect size: pingouin's 'np2' is partial eta-squared,
# which equals eta-squared in a one-way design
print(f"Partial eta-squared: {aov['np2'].values[0]:.3f}")

# Post-hoc pairwise comparisons with correction
posthoc = pg.pairwise_tukey(dv="score", between="group", data=df)
print(posthoc)

# Mixed ANOVA (between + within)
mixed = pg.mixed_anova(
    dv="score", between="group", within="time",
    subject="participant_id", data=df_long
)
print(mixed)

Psychometric Validation

# Scale reliability
from pingouin import cronbach_alpha

items = df[["item1", "item2", "item3", "item4", "item5"]]
alpha, ci = cronbach_alpha(items)
print(f"Cronbach's alpha = {alpha:.3f}, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")

# Confirmatory Factor Analysis (using semopy)
from semopy import Model

model_spec = """
factor1 =~ item1 + item2 + item3
factor2 =~ item4 + item5 + item6
"""
model = Model(model_spec)
model.fit(df)
print(model.inspect())

# Fit indices -- calc_stats returns a one-row DataFrame (row label "Value")
# (semopy does not report SRMR; CFI and RMSEA cover incremental and absolute fit)
from semopy import calc_stats

stats_result = calc_stats(model)
print(f"CFI = {stats_result.loc['Value', 'CFI']:.3f}")
print(f"RMSEA = {stats_result.loc['Value', 'RMSEA']:.3f}")

Reporting Results (APA Format)

Standard reporting patterns:

t-test:
"Participants in the gratitude condition (M = 5.23, SD = 1.12) reported
significantly higher life satisfaction than those in the control condition
(M = 4.67, SD = 1.08), t(398) = 4.89, p < .001, d = 0.49, 95% CI [0.29, 0.69]."

ANOVA:
"There was a significant main effect of group on performance,
F(2, 297) = 8.43, p < .001, partial eta-squared = .054."

Correlation:
"Life satisfaction was positively correlated with gratitude,
r(198) = .42, p < .001, 95% CI [.30, .53]."

Always include: test statistic, df, p-value, effect size, confidence interval.
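Hand-formatting these strings invites transcription errors. A small helper that assembles the APA pattern from the numbers (the function name `apa_ttest` is our own; it follows the APA convention of dropping the leading zero from p-values and using "p < .001" below that threshold):

```python
def apa_ttest(t, df, p, d, ci_low, ci_high):
    """Format an independent-samples t-test result in APA style."""
    # APA: report exact p to three decimals, except "p < .001"
    p_str = "p < .001" if p < 0.001 else f"p = {p:.3f}".replace("0.", ".")
    return (f"t({df:g}) = {t:.2f}, {p_str}, d = {d:.2f}, "
            f"95% CI [{ci_low:.2f}, {ci_high:.2f}]")

print(apa_ttest(4.89, 398, 1e-6, 0.49, 0.29, 0.69))
# t(398) = 4.89, p < .001, d = 0.49, 95% CI [0.29, 0.69]
```

The same inputs are exactly what `pg.ttest` returns (T, dof, p-val, cohen-d, CI95%), so the helper can be fed directly from the analysis output.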

Best Practices

  • Power for small effects. Assume d = 0.2-0.4 unless prior meta-analyses suggest otherwise.
  • Pre-register everything. Even exploratory studies benefit from stating what is confirmatory vs. exploratory.
  • Report all measures and conditions. Selective reporting is the primary source of false positives.
  • Use Bayesian statistics alongside frequentist tests to quantify evidence for the null.
  • Share data and code on OSF. Transparency is now a condition for publication at many journals.
  • Distinguish statistical from practical significance. A p < .001 with d = 0.05 is not meaningful.
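The last point is easy to demonstrate by simulation: with very large samples, a trivially small true effect produces an extreme p-value. A minimal sketch using scipy (group sizes and the true d = 0.05 are illustrative choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 50_000                        # very large groups, e.g. online panel data
a = rng.normal(0.00, 1.0, n)      # control
b = rng.normal(0.05, 1.0, n)      # treatment: true d = 0.05, trivially small

t, p = stats.ttest_ind(b, a)
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
d = (b.mean() - a.mean()) / pooled_sd
print(f"p = {p:.2e}, d = {d:.3f}")  # "highly significant", negligible effect
```

The p-value certifies only that the effect is unlikely to be exactly zero; the effect size and its confidence interval are what tell you whether it matters.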

References

  • Open Science Framework (OSF) -- Pre-registration, data sharing, collaboration
  • Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-Positive Psychology. Psychological Science, 22(11), 1359-1366.
  • Cumming, G. (2014). The New Statistics: Why and How. Psychological Science, 25(1), 7-29.
  • pingouin -- Python statistical package for psychology
  • PsychDS -- Data standard for psychology datasets