Medical-research-skills meta-screening-fulltext
Screen full-text papers against inclusion/exclusion criteria, with optional PubMed metadata check using PMID. Use when the user needs to evaluate a paper for a meta-analysis.
install
source · Clone the upstream repo
git clone https://github.com/aipoch/medical-research-skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/aipoch/medical-research-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/scientific-skills/Data Analysis/meta-screening-fulltext" ~/.claude/skills/aipoch-medical-research-skills-meta-screening-fulltext && rm -rf "$T"
manifest:
scientific-skills/Data Analysis/meta-screening-fulltext/SKILL.md
Paper Screening (Full Text + PubMed)
This skill screens a medical paper to determine if it should be included in a meta-analysis based on PICO criteria. It can optionally fetch metadata (Title/Abstract) from PubMed if a PMID is provided.
When to Use
- Use this skill when you need to screen full-text papers against inclusion/exclusion criteria, with an optional PubMed metadata check using a PMID, and the user needs to evaluate a paper for a meta-analysis in a reproducible workflow.
- Use this skill when a data analytics task needs a packaged method instead of ad-hoc freeform output.
- Use this skill when the user expects a concrete deliverable, validation step, or file-based result.
- Use this skill when `scripts/extract_pdf.py` is the most direct path to complete the request.
- Use this skill when you need the `meta-screening-fulltext` package behavior rather than a generic answer.
Key Features
- Scope-focused workflow aligned to: Screen full-text papers against inclusion/exclusion criteria, with optional PubMed metadata check using PMID. Use when the user needs to evaluate a paper for a meta-analysis.
- Packaged executable path(s): `scripts/extract_pdf.py`.
- Reference material available in `references/` for task-specific guidance.
- Structured execution path designed to keep outputs consistent and reviewable.
Dependencies
- Python: 3.10+. Repository baseline for current packaged skills.
- Third-party packages: not explicitly version-pinned in this skill package. Add pinned versions if this skill needs stricter environment control.
Example Usage
cd "20260316/scientific-skills/Data Analysis/meta-screening-fulltext"
python -m py_compile scripts/extract_pdf.py
python scripts/extract_pdf.py --help
Example run plan:
- Confirm the user input, output path, and any required config values.
- Edit the in-file `CONFIG` block or documented parameters if the script uses fixed settings.
- Run `python scripts/extract_pdf.py` with the validated inputs.
- Review the generated output and return the final artifact with any assumptions called out.
Implementation Details
See the Workflow section below for related details.
- Execution model: validate the request, choose the packaged workflow, and produce a bounded deliverable.
- Input controls: confirm the source files, scope limits, output format, and acceptance criteria before running any script.
- Primary implementation surface: `scripts/extract_pdf.py`.
- Reference guidance: `references/` contains supporting rules, prompts, or checklists.
- Parameters to clarify first: input path, output path, scope filters, thresholds, and any domain-specific constraints.
- Output discipline: keep results reproducible, identify assumptions explicitly, and avoid undocumented side effects.
Workflow
- Analyze Inputs:
  - `input_paper`: Full text of the paper.
  - `inclu_exclu_criterion`: Inclusion/Exclusion criteria.
  - `input_pmid` (Optional): PMID of the paper.
- Check PubMed (Optional):
  - If `input_pmid` is provided, run `scripts/query_pubmed.py` to fetch Title and Abstract.
  - Command: `python scripts/query_pubmed.py "<input_pmid>"`
  - If no metadata is returned, fall back to the full text (Scenario B below).
- Screen Paper:
  - Scenario A: PubMed Hit: If the script returns metadata, compare the criteria against this data (Title + Abstract).
  - Scenario B: No PubMed Data: Compare the criteria against `input_paper` (full text).
  - Use the appropriate prompt from `references/screening_prompts.md`.
- Format Output:
  - Ensure the output is a JSON object with `Result` ("Include" or "Exclude") and `Reason`.
  - If "Exclude", the reason must be one of the standard exclusion categories (Wrong population, etc.).
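The optional PubMed check in step 2 can be sketched without reproducing `scripts/query_pubmed.py`, whose internals are not shown in this package description. The sketch below assumes the public NCBI E-utilities esummary endpoint and its JSON response shape; the function names are illustrative and no network call is made.

```python
from urllib.parse import urlencode

# Build the NCBI E-utilities esummary URL for a PMID (illustrative;
# the packaged query_pubmed.py may use a different endpoint).
def esummary_url(pmid: str) -> str:
    base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"
    return base + "?" + urlencode({"db": "pubmed", "id": pmid, "retmode": "json"})

# Pull the article title out of an esummary-style JSON payload.
def title_from_esummary(payload: dict, pmid: str) -> str:
    return payload["result"][pmid]["title"]

# Offline sample shaped like an esummary response.
sample = {"result": {"12345678": {"title": "A randomized trial of X versus Y"}}}
print(esummary_url("12345678"))
print(title_from_esummary(sample, "12345678"))
```

Note that esummary returns the title only; fetching the abstract requires the efetch endpoint, which is one reason to prefer the packaged script over an ad-hoc call.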
Quality Rules
- Evidence-Based: Decisions must be based strictly on the provided text or retrieved metadata.
- Structured Output: Final output must always be parseable JSON.
- Exclusion Reasons: Must use standard terminology: "Wrong population", "Wrong intervention", "Wrong comparator", "Wrong outcomes", "Wrong study design".
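The output contract above (parseable JSON with `Result` and `Reason`, and the standard exclusion vocabulary) is mechanical enough to check in code. A minimal sketch; `validate_decision` is an illustrative helper name, not part of the skill package.

```python
import json

# Standard exclusion vocabulary from the Quality Rules above.
STANDARD_EXCLUSIONS = {
    "Wrong population", "Wrong intervention", "Wrong comparator",
    "Wrong outcomes", "Wrong study design",
}

# Illustrative check that a screening decision meets the output contract.
def validate_decision(raw: str) -> dict:
    decision = json.loads(raw)  # must be parseable JSON
    if decision["Result"] not in ("Include", "Exclude"):
        raise ValueError("Result must be 'Include' or 'Exclude'")
    if decision["Result"] == "Exclude" and decision["Reason"] not in STANDARD_EXCLUSIONS:
        raise ValueError("Reason must use a standard exclusion term")
    return decision

print(validate_decision('{"Result": "Exclude", "Reason": "Wrong comparator"}'))
```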
Helper Scripts
PDF Text Extraction
When the user provides a PDF file path, use `extract_pdf.py` to extract the text content before assessment:
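Whatever extractor `extract_pdf.py` uses internally, raw PDF text typically needs light cleanup before criteria are applied to it. A stdlib-only sketch of such a cleanup pass; the function name is illustrative and not part of the skill package.

```python
import re

# Illustrative cleanup for raw PDF-extracted text before screening:
# re-join words hyphenated across line breaks, then collapse whitespace runs.
def normalize_extracted_text(raw: str) -> str:
    text = re.sub(r"-\n(?=\w)", "", raw)  # "interven-\ntion" -> "intervention"
    text = re.sub(r"\s+", " ", text)      # collapse newlines and spaces
    return text.strip()

print(normalize_extracted_text("interven-\ntion  group\nresults"))
```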