# Canvas Peer Review Manager

Educator peer review management for Canvas LMS. Tracks completion rates, analyzes comment quality, flags problematic reviews, sends targeted reminders, and generates instructor-ready reports. Trigger phrases include "peer review status", "how are peer reviews going", "who hasn't reviewed", "review quality", or any peer review follow-up task.

Install by cloning the `canvas-mcp` repository:

```shell
git clone https://github.com/vishalsachdev/canvas-mcp
```

Or copy only this skill into `~/.claude/skills`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/vishalsachdev/canvas-mcp "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/canvas-peer-review-manager" ~/.claude/skills/vishalsachdev-canvas-mcp-canvas-peer-review-manager && rm -rf "$T"
```

Source: `skills/canvas-peer-review-manager/SKILL.md`
A complete peer review management workflow for educators using Canvas LMS. Monitor completion, analyze quality, identify students who need follow-up, send reminders, and export data -- all through MCP tool calls against the Canvas API.
## Prerequisites
- Canvas MCP server must be running and connected to the agent's MCP client (e.g., Claude Code, Cursor, Codex, OpenCode).
- The authenticated user must have an educator or instructor role in the target Canvas course.
- The assignment must have peer reviews enabled in Canvas (either manual or automatic assignment).
- FERPA compliance: Set `ENABLE_DATA_ANONYMIZATION=true` in the Canvas MCP server environment to anonymize student names. When enabled, names render as `Student_xxxxxxxx` hashes while preserving functional user IDs for messaging.
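The anonymization idea can be sketched as a deterministic hash from user ID to pseudonym. The server's actual scheme is not documented here, so the salt, hash function, and truncation length below are assumptions; the sketch only illustrates why the same student always maps to the same `Student_xxxxxxxx` alias while the real name never appears.

```python
import hashlib

def anonymize_name(user_id: int, salt: str = "course-salt") -> str:
    """Map a Canvas user ID to a stable Student_xxxxxxxx pseudonym.

    Illustrative only: the real server's hashing scheme may differ. The
    point is a deterministic hash that hides names but stays consistent
    per student, so reminders can still target the right user ID.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return f"Student_{digest[:8]}"

# The same ID always maps to the same pseudonym; different IDs differ.
alias = anonymize_name(4321)
assert alias == anonymize_name(4321)
assert alias.startswith("Student_")
```

Determinism matters: it lets the instructor cross-reference the same pseudonym across completion analytics, quality reports, and exports.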
## Steps
### 1. Identify the Assignment
Ask the user which course and assignment to manage peer reviews for. Accept a course code, Canvas ID, or course name, plus an assignment name or ID.
If the user does not specify, prompt:

> Which course and assignment would you like to check peer reviews for?

Use `list_courses` and `list_assignments` to help the user find the right identifiers.
### 2. Check Peer Review Completion

Call `get_peer_review_completion_analytics` with the course identifier and assignment ID. This returns:
- Overall completion rate (percentage)
- Number of students with all reviews complete, partial, and none complete
- Per-student breakdown showing completed vs. assigned reviews
Key data points to surface:
| Metric | What It Tells You |
|---|---|
| Completion rate | Overall health of the peer review cycle |
| "None complete" count | Students who haven't started -- highest priority for reminders |
| "Partial complete" count | Students who started but didn't finish |
| Per-student breakdown | Exactly who needs follow-up |
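The per-student breakdown drives everything downstream, so it helps to see how it segments. The record shape below (`user_id`, `completed`, `assigned`) is an assumption for illustration; the actual `get_peer_review_completion_analytics` payload may be structured differently.

```python
def bucket_students(per_student: list[dict]) -> dict[str, list[int]]:
    """Sort students into complete / partial / none buckets.

    Assumes records like {"user_id": 123, "completed": 2, "assigned": 3};
    the real analytics payload may use different field names.
    """
    buckets = {"complete": [], "partial": [], "none": []}
    for rec in per_student:
        if rec["completed"] == 0:
            buckets["none"].append(rec["user_id"])       # highest priority
        elif rec["completed"] < rec["assigned"]:
            buckets["partial"].append(rec["user_id"])    # needs a nudge
        else:
            buckets["complete"].append(rec["user_id"])
    return buckets

records = [
    {"user_id": 1, "completed": 3, "assigned": 3},
    {"user_id": 2, "completed": 1, "assigned": 3},
    {"user_id": 3, "completed": 0, "assigned": 3},
]
assert bucket_students(records)["none"] == [3]
```

The "none" bucket feeds the urgent reminder list in step 8; "partial" gets the softer nudge.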
### 3. Review the Assignment Mapping

If the user wants to understand who is reviewing whom, call `get_peer_review_assignments` with:

- `include_names=true` for human-readable output
- `include_submission_details=true` for submission context

This shows the full reviewer-to-reviewee mapping with completion status.
### 4. Extract and Read Comments

Call `get_peer_review_comments` to retrieve actual comment text. Parameters:

- `include_reviewer_info=true` -- who wrote the comment
- `include_reviewee_info=true` -- who received the comment
- `anonymize_students=true` -- recommended when sharing results or working with sensitive data
This reveals what students actually wrote in their reviews.
### 5. Analyze Comment Quality

Call `analyze_peer_review_quality` to generate quality metrics across all reviews. The analysis includes:
- Average quality score (1-5 scale)
- Word count statistics (mean, median, range)
- Constructiveness analysis (constructive feedback vs. generic comments vs. specific suggestions)
- Sentiment distribution (positive, neutral, negative)
- Flagged reviews that fall below quality thresholds
Optionally pass `analysis_criteria` as a JSON string to customize what counts as high or low quality.
### 6. Flag Problematic Reviews

Call `identify_problematic_peer_reviews` to automatically flag reviews needing instructor attention. Flagging criteria include:
- Very short or empty comments
- Generic responses (e.g., "looks good", "nice work")
- Lack of constructive feedback
- Potential copy-paste or identical reviews
Pass custom `criteria` as a JSON string to override the default thresholds.
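Two of the flagging criteria above (short comments and generic phrases) can be sketched as simple heuristics. The word threshold and phrase list are illustrative assumptions, not the tool's actual defaults:

```python
def flag_review(comment: str, min_words: int = 10) -> list[str]:
    """Return reasons a review comment should be flagged, if any.

    Thresholds and the generic-phrase list are made up for illustration;
    identify_problematic_peer_reviews takes its real criteria via JSON.
    """
    generic = {"looks good", "nice work", "good job", "well done"}
    flags = []
    if len(comment.split()) < min_words:
        flags.append("too_short")
    # Normalize case and trailing punctuation before the phrase check.
    if comment.strip().lower().rstrip("!.") in generic:
        flags.append("generic")
    return flags

assert flag_review("Nice work!") == ["too_short", "generic"]
```

A substantive comment like "Your thesis is clear, but section 2 needs a cited source and a stronger transition into the results." passes both checks.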
### 7. Get the Follow-up List

Call `get_peer_review_followup_list` to get a prioritized list of students requiring action:

- `priority_filter="urgent"` -- students with zero reviews completed
- `priority_filter="medium"` -- students with partial completion
- `priority_filter="all"` -- everyone who needs follow-up
- `days_threshold=3` -- adjusts the urgency calculation based on days since assignment
### 8. Send Reminders
Always use a dry run or review step before sending messages.
For targeted reminders, call `send_peer_review_reminders` with:

- `recipient_ids` -- list of Canvas user IDs from the analytics results
- `custom_message` -- optional custom text (a default template is used if omitted)
- `subject_prefix` -- defaults to "Peer Review Reminder"
Example flow:
- Get incomplete reviewers from step 2
- Extract their user IDs
- Review the recipient list with the user
- Send reminders after confirmation
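The extraction step in that flow is a one-line filter over the per-student analytics. As in step 2, the `{"user_id", "completed", "assigned"}` record shape is an assumption about the payload, used here only to show the pattern:

```python
def incomplete_reviewer_ids(per_student: list[dict]) -> list[int]:
    """Collect user IDs with fewer completed reviews than assigned.

    Record shape is assumed for illustration; check the actual
    get_peer_review_completion_analytics output before reusing.
    """
    return [r["user_id"] for r in per_student if r["completed"] < r["assigned"]]

records = [
    {"user_id": 11, "completed": 3, "assigned": 3},
    {"user_id": 12, "completed": 0, "assigned": 3},
]
# Present this list to the instructor for confirmation BEFORE passing it
# as recipient_ids to send_peer_review_reminders.
assert incomplete_reviewer_ids(records) == [12]
```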
For a fully automated pipeline, call `send_peer_review_followup_campaign` with just the course identifier and assignment ID. This tool:
- Runs completion analytics automatically
- Segments students into "urgent" (none complete) and "partial" groups
- Sends appropriately toned reminders to each group
- Returns combined analytics and messaging results
Warning: The campaign tool sends real messages. Always confirm with the instructor before running it.
### 9. Export Data

Call `extract_peer_review_dataset` to export all peer review data for external analysis:

- `output_format="csv"` or `output_format="json"`
- `include_analytics=true` -- appends quality metrics to the export
- `anonymize_data=true` -- recommended for sharing or archival
- `save_locally=true` -- saves to a local file; set to `false` to return data inline
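To give a feel for the CSV shape, here is a minimal serialization sketch. The column names are hypothetical; the real `extract_peer_review_dataset` export may use different fields and will include analytics columns when `include_analytics=true`.

```python
import csv
import io

def reviews_to_csv(rows: list[dict]) -> str:
    """Serialize review records to CSV text (illustrative field names).

    Loosely mirrors what output_format="csv" might produce; the actual
    tool's column names are not documented here.
    """
    fields = ["reviewer", "reviewee", "completed", "word_count"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

csv_text = reviews_to_csv([
    {"reviewer": "Student_a8f7e23", "reviewee": "Student_b2c91d4",
     "completed": True, "word_count": 42},
])
assert csv_text.splitlines()[0] == "reviewer,reviewee,completed,word_count"
```

Note the pseudonymized names in the sample row: with `anonymize_data=true`, the export stays FERPA-safe for archival.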
### 10. Generate Instructor Reports

Call `generate_peer_review_feedback_report` for a formatted, shareable report:

- `report_type="comprehensive"` -- full analysis with samples of low-quality reviews
- `report_type="summary"` -- executive overview only
- `report_type="individual"` -- per-student breakdown
- `include_student_names=false` -- recommended for FERPA compliance
For a completion-focused report (rather than quality-focused), use `generate_peer_review_report`, which offers options for an executive summary, student details, action items, and timeline analysis. The report can be saved to a file with `save_to_file=true`.
## Use Cases
"How are peer reviews going?" Run steps 1-2. Present completion rate, highlight any concerning patterns (e.g., "Only 60% complete, 8 students haven't started").
"Who hasn't done their reviews?" Run steps 1-2, then step 7 with
priority_filter="urgent". List the students who need follow-up.
"Are the reviews any good?" Run steps 4-6. Present quality scores, flag generic or low-effort reviews, and surface recommendations.
"Send reminders to stragglers" Run steps 1-2 to identify incomplete reviewers, then step 8. Always confirm the recipient list before sending.
"Give me a full report" Run steps 2, 5, 6, and 10. Combine completion analytics with quality analysis into a comprehensive instructor report.
"Export everything for my records" Run step 9 with
output_format="csv" and anonymize_data=true for a FERPA-safe dataset.
## MCP Tools Used

| Tool | Purpose |
|---|---|
| `list_courses` | Discover active courses |
| `list_assignments` | Find assignments with peer reviews enabled |
| `get_peer_review_assignments` | Full reviewer-to-reviewee mapping |
| `get_peer_review_completion_analytics` | Completion rates and per-student breakdown |
| `get_peer_review_comments` | Extract actual comment text |
| `analyze_peer_review_quality` | Quality metrics (scores, word counts, constructiveness) |
| `identify_problematic_peer_reviews` | Flag low-quality or empty reviews |
| `get_peer_review_followup_list` | Prioritized list of students needing follow-up |
| `send_peer_review_reminders` | Send targeted reminder messages |
| `send_peer_review_followup_campaign` | Automated analytics-to-messaging pipeline |
| `extract_peer_review_dataset` | Export data as CSV or JSON |
| `generate_peer_review_feedback_report` | Quality-focused instructor report |
| `generate_peer_review_report` | Completion-focused instructor report |
## Example
User: "How are peer reviews going for Assignment 3 in BADM 350?"
Agent: Calls `get_peer_review_completion_analytics` and presents:

```markdown
## Peer Review Status: Assignment 3

- **Completion rate:** 72% (23/32 students fully complete)
- **Partial:** 5 students (started but not finished)
- **Not started:** 4 students

### Students Needing Follow-up

**Not started (urgent):**
- Student_a8f7e23 (0 of 3 reviews done)
- Student_b2c91d4 (0 of 3 reviews done)
- Student_f5e67a1 (0 of 3 reviews done)
- Student_d9c34b2 (0 of 3 reviews done)

**Partial (needs nudge):**
- Student_c1d82e5 (1 of 3 reviews done)
- Student_e4f03a9 (2 of 3 reviews done)
```
User: "Send reminders to the ones who haven't started"
Agent: Confirms the 4 recipients, then calls `send_peer_review_reminders` with their user IDs.
User: "Now check if the completed reviews are any good"
Agent: Calls `analyze_peer_review_quality` and presents quality scores, flags 3 reviews as too short, and recommends the instructor follow up with specific students.
## Safety Guidelines
- Confirm before sending -- Always present the recipient list and message content to the instructor before calling any messaging tool.
- Use dry runs -- When testing workflows, start with a single recipient or confirm the output of analytics tools before acting on the data.
- Anonymize by default -- Use `anonymize_students=true` or `anonymize_data=true` when reviewing data in shared contexts.
- Respect rate limits -- The Canvas API allows roughly 700 requests per 10 minutes. For large courses, the messaging tools send messages sequentially with built-in delays.
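One way to stay under a "700 requests per 10 minutes" ceiling is to space calls evenly across the window. The messaging tools already do their own pacing; this sketch just shows the arithmetic for anyone scripting bulk calls themselves:

```python
import time

def paced_calls(items, fn, per_window: int = 700, window_s: float = 600.0):
    """Call fn on each item, spacing calls to stay under a rate limit.

    Even spacing of window_s / per_window seconds between calls
    (about 0.86 s for 700 per 10 minutes) keeps a large batch below
    the ceiling. Real clients should also back off on 403/429 errors.
    """
    delay = window_s / per_window
    results = []
    for i, item in enumerate(items):
        if i:  # no delay before the first call
            time.sleep(delay)
        results.append(fn(item))
    return results
```

Even spacing is cruder than a token bucket but is simple and sufficient when the batch size is known up front.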
- FERPA compliance -- Never display student names in logs, shared screens, or exported files unless the instructor has explicitly confirmed the context is appropriate.
## Notes
- Peer reviews must be enabled on the assignment in Canvas before any of these tools return data.
- The `send_peer_review_followup_campaign` tool combines analytics and messaging into one call -- powerful, but it sends real messages. Use it only after confirming intent with the instructor.
- Quality analysis uses heuristics (word count, keyword matching, sentiment). It identifies likely low-quality reviews but is not a substitute for instructor judgment.
- This skill pairs well with `canvas-morning-check` for a full course health overview that includes peer review status alongside submission rates and grade distribution.