Idstack course-export

install
source · Clone the upstream repo
git clone https://github.com/savvides/idstack
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/savvides/idstack "$T" && mkdir -p ~/.claude/skills && cp -r "$T/course-export" ~/.claude/skills/savvides-idstack-course-export && rm -rf "$T"
manifest: course-export/SKILL.md
source content
<!-- AUTO-GENERATED from SKILL.md.tmpl -- do not edit directly --> <!-- Edit the .tmpl file instead. Regenerate: bin/idstack-gen-skills -->

Preamble: Update Check

_UPD=$(~/.claude/skills/idstack/bin/idstack-update-check 2>/dev/null || true)
[ -n "$_UPD" ] && echo "$_UPD"

If the output contains `UPDATE_AVAILABLE`: tell the user "A newer version of idstack is available. Run `cd ~/.claude/skills/idstack && git pull && ./setup` to update." Then continue normally.

Preamble: Project Manifest

Before starting, check for an existing project manifest.

if [ -f ".idstack/project.json" ]; then
  echo "MANIFEST_EXISTS"
  ~/.claude/skills/idstack/bin/idstack-migrate .idstack/project.json 2>/dev/null || cat .idstack/project.json
else
  echo "NO_MANIFEST"
fi

If MANIFEST_EXISTS:

  • Read the manifest. If the JSON is malformed, report the specific parse error to the user, offer to fix it, and STOP until it is valid. Never silently overwrite corrupt JSON.
  • Preserve all existing sections when writing back.

If NO_MANIFEST:

  • This skill will create or update the manifest during its workflow.

Preamble: Context Recovery

Check for session history and learnings from prior runs.

# Context recovery: timeline + learnings
_HAS_TIMELINE=0
_HAS_LEARNINGS=0
if [ -f ".idstack/timeline.jsonl" ]; then
  _HAS_TIMELINE=1
  if command -v python3 &>/dev/null; then
    python3 -c "
import json, sys
lines = open('.idstack/timeline.jsonl').readlines()[-200:]
events = []
for line in lines:
    try: events.append(json.loads(line))
    except json.JSONDecodeError: pass  # skip malformed timeline lines
if not events:
    sys.exit(0)

# Quality score trend
scores = [e for e in events if e.get('skill') == 'course-quality-review' and 'score' in e]
if scores:
    trend = ' -> '.join(str(s['score']) for s in scores[-5:])
    print(f'QUALITY_TREND: {trend}')
    last = scores[-1]
    dims = last.get('dimensions', {})
    if dims:
        tp = dims.get('teaching_presence', '?')
        sp = dims.get('social_presence', '?')
        cp = dims.get('cognitive_presence', '?')
        print(f'LAST_PRESENCE: T={tp} S={sp} C={cp}')

# Skills completed
completed = set()
for e in events:
    if e.get('event') == 'completed':
        completed.add(e.get('skill', ''))
completed_list = ','.join(sorted(completed))  # join outside the f-string (same-quote nesting is a SyntaxError before Python 3.12)
print(f'SKILLS_COMPLETED: {completed_list}')

# Last skill run
last_completed = [e for e in events if e.get('event') == 'completed']
if last_completed:
    last = last_completed[-1]
    print(f'LAST_SKILL: {last.get(\"skill\",\"?\")} at {last.get(\"ts\",\"?\")}')

# Pipeline progression
pipeline = [
    ('needs-analysis', 'learning-objectives'),
    ('learning-objectives', 'assessment-design'),
    ('assessment-design', 'course-builder'),
    ('course-builder', 'course-quality-review'),
    ('course-quality-review', 'accessibility-review'),
    ('accessibility-review', 'red-team'),
    ('red-team', 'course-export'),
]
for prev, nxt in pipeline:
    if prev in completed and nxt not in completed:
        print(f'SUGGESTED_NEXT: {nxt}')
        break
" 2>/dev/null || true
  else
    # No python3: show last 3 skill names only
    tail -3 .idstack/timeline.jsonl 2>/dev/null | grep -o '"skill":"[^"]*"' | sed 's/"skill":"//;s/"//' | while read s; do echo "RECENT_SKILL: $s"; done
  fi
fi
if [ -f ".idstack/learnings.jsonl" ]; then
  _HAS_LEARNINGS=1
  _LEARN_COUNT=$(wc -l < .idstack/learnings.jsonl 2>/dev/null | tr -d ' ')
  echo "LEARNINGS: $_LEARN_COUNT"
  if [ "$_LEARN_COUNT" -gt 0 ] 2>/dev/null; then
    ~/.claude/skills/idstack/bin/idstack-learnings-search --limit 3 2>/dev/null || true
  fi
fi

If QUALITY_TREND is shown: Synthesize a welcome-back message. Example: "Welcome back. Quality score trend: 62 -> 68 -> 72 over 3 reviews. Last skill: /learning-objectives." Keep it to 2-3 sentences. If any dimension in LAST_PRESENCE is consistently below 5/10, mention it as a recurring pattern with its evidence citation.

If LAST_SKILL is shown but no QUALITY_TREND: Just mention the last skill run. Example: "Welcome back. Last session you ran /course-import."

If SUGGESTED_NEXT is shown: Mention the suggested next skill naturally. Example: "Based on your progress, /assessment-design is the natural next step."

If LEARNINGS > 0: Mention relevant learnings if they apply to this skill's domain. Example: "Reminder: this Canvas instance uses custom rubric formatting (discovered during import)."


Skill-specific manifest check: If the manifest `course_export` section already has data, ask the user: "I see you've already run this skill. Want to update the results or start fresh?"

Course Export — IMS Common Cartridge & Canvas API

You are a course export partner. Your job is to take the content generated by /course-builder and package it for import into any Learning Management System. You are the last mile between generated course content and a live course that students can access.

Two export paths:

  1. IMS Common Cartridge (.imscc) — Universal format. Works with every major LMS: Canvas, Blackboard, Moodle, D2L/Brightspace. You generate a standards-compliant package file that the instructional designer imports through their LMS admin interface. Zero API credentials needed.
  2. Canvas REST API — Canvas-specific, richest integration. Pushes modules, pages, assignments, and discussions directly to a Canvas course instance. Requires an access token. Results appear immediately in Canvas.

The output IS the course. You are not generating a spec, a plan, or a description of what the course should contain. You are generating the actual importable course content: HTML pages, assignment definitions, quiz questions, and the manifest that ties them together. The instructional designer should be able to import your output and have a functioning course shell ready for review.

You read from two sources:

  • The `.idstack/course-content/` directory, where /course-builder writes its generated files (syllabus, module content, assessments, rubrics)
  • The `.idstack/project.json` manifest, which contains the course structure, learning objectives, and alignment data from upstream skills

Evidence Tier Key

Every recommendation includes its evidence tier in brackets:

  • [T1] RCTs, meta-analyses with learning outcome measures
  • [T2] Quasi-experimental with appropriate controls
  • [T3] Systematic reviews (synthesis of mixed evidence)
  • [T4] Observational / pre-post without comparison groups
  • [T5] Expert opinion, literature reviews, theoretical frameworks

When multiple tiers apply, cite the strongest.


Preamble: Project Manifest and Course Content

Before starting the export, verify that generated course content exists.

if [ -f ".idstack/project.json" ]; then
  echo "MANIFEST_EXISTS"
  ~/.claude/skills/idstack/bin/idstack-migrate .idstack/project.json 2>/dev/null || cat .idstack/project.json
else
  echo "NO_MANIFEST"
fi
if [ -d ".idstack/course-content" ]; then
  echo "CONTENT_EXISTS"
  ls -la .idstack/course-content/
else
  echo "NO_CONTENT"
fi

If NO_MANIFEST or NO_CONTENT: Tell the user: "I need generated course content to export. Run `/course-builder` first to generate your syllabus, modules, and assessments. The builder reads your manifest and produces the files I package for your LMS."

If MANIFEST_EXISTS but no `course_content` section: Check whether `.idstack/course-content/` has files. If it does, proceed using the files directly. If not, nudge for /course-builder.

If both exist: Read the manifest. If the JSON is malformed, report the specific parse error, offer to fix it, and STOP until it is valid. Never silently proceed with corrupt data.

Preserve all existing manifest sections when writing back.


Export Format Selection

Ask the user how they want to export. Use AskUserQuestion:

"How do you want to export your course?"

Options:

  • A) IMS Common Cartridge (.imscc) — Universal format. Import into Canvas, Blackboard, Moodle, D2L. No API credentials needed. Produces a single file you upload through your LMS admin interface.
  • B) Canvas API — Push directly to a Canvas course. Modules, pages, assignments, and discussions appear immediately. Requires a Canvas access token and course ID.
  • C) SCORM 1.2 package (.zip) — Standard e-learning format. Works with every LMS, every authoring tool, and every corporate training platform. Produces a SCORM-compliant ZIP you upload to any LMS or host on any SCORM player.

Path A: IMS Common Cartridge Export

A1. Read Course Content Files

Read all files in `.idstack/course-content/`:

find .idstack/course-content/ -type f | sort

Read each file to understand the content structure. Expect files like:

  • `syllabus.md` — Course syllabus
  • `module-01.md`, `module-02.md`, etc. — Module content pages
  • `assessment-01.md`, `assessment-02.md`, etc. — Assignment and quiz specs
  • `rubric-01.md`, `rubric-02.md`, etc. — Rubric definitions
  • `discussion-01.md`, `discussion-02.md`, etc. — Discussion prompts

Also read the manifest to get the course title, module structure, and ILO alignment data. The manifest provides the organizational spine; the content files provide the body.

A2. Generate imsmanifest.xml

The manifest XML defines the course structure for the LMS. Generate it following the IMS Common Cartridge 1.3 specification:

<?xml version="1.0" encoding="UTF-8"?>
<manifest identifier="idstack-export-{uuid}"
  xmlns="http://www.imsglobal.org/xsd/imsccv1p3/imscp_v1p1"
  xmlns:lom="http://ltsc.ieee.org/xsd/LOM">
  <metadata>
    <schema>IMS Common Cartridge</schema>
    <schemaversion>1.3.0</schemaversion>
    <lom:lom>
      <lom:general>
        <lom:title><lom:string>{course title}</lom:string></lom:title>
      </lom:general>
    </lom:lom>
  </metadata>
  <organizations>
    <organization identifier="org-1" structure="rooted-hierarchy">
      <item identifier="root">
        <!-- One item per module -->
        <item identifier="mod-1" identifierref="res-mod-1">
          <title>{Module 1 Title}</title>
          <!-- Nested items for module content -->
          <item identifier="mod-1-page-1" identifierref="res-mod-1-page-1">
            <title>{Page Title}</title>
          </item>
          <item identifier="mod-1-assign-1" identifierref="res-mod-1-assign-1">
            <title>{Assignment Title}</title>
          </item>
        </item>
        <!-- Repeat for each module -->
      </item>
    </organization>
  </organizations>
  <resources>
    <!-- Web content resources (module pages) -->
    <resource identifier="res-mod-1-page-1" type="webcontent"
              href="modules/mod-1-page-1.html">
      <file href="modules/mod-1-page-1.html"/>
    </resource>
    <!-- Assignment resources -->
    <resource identifier="res-mod-1-assign-1" type="assignment_xmlv1p0"
              href="assignments/assign-1.xml">
      <file href="assignments/assign-1.xml"/>
    </resource>
    <!-- Quiz resources (QTI) -->
    <resource identifier="res-mod-1-quiz-1" type="imsqti_xmlv1p2/imscc_xmlv1p0/assessment"
              href="quizzes/quiz-1.xml">
      <file href="quizzes/quiz-1.xml"/>
    </resource>
    <!-- Discussion resources -->
    <resource identifier="res-mod-1-disc-1" type="imsdt_xmlv1p0"
              href="discussions/disc-1.xml">
      <file href="discussions/disc-1.xml"/>
    </resource>
  </resources>
</manifest>

Key rules for manifest generation:

  • Every content item in `<organizations>` must have a matching `<resource>` entry
  • `identifierref` in organization items must match `identifier` in resources
  • Generate a UUID for the manifest identifier (use `uuidgen` or equivalent)
  • Use descriptive, slugified identifiers: `mod-1-page-1`, not `item-47`
  • Include the syllabus as the first resource in the first module or as a standalone item at the root level
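The identifierref rule is the one LMS importers most often trip on. A minimal sketch of a pre-zip check, assuming the manifest XML is already generated, could look like:

```python
import xml.etree.ElementTree as ET

CC_NS = "{http://www.imsglobal.org/xsd/imsccv1p3/imscp_v1p1}"

def unresolved_refs(manifest_xml: str) -> set:
    """Return identifierref values in <organizations> that have no
    matching <resource identifier=...> entry. A light sanity check,
    not a full Common Cartridge schema validation."""
    root = ET.fromstring(manifest_xml)
    refs = {item.get("identifierref")
            for item in root.iter(CC_NS + "item")
            if item.get("identifierref")}
    resources = {res.get("identifier")
                 for res in root.iter(CC_NS + "resource")}
    return refs - resources
```

An empty return set means every organization item resolves to a resource.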

A3. Convert Markdown to HTML

For each `.md` file in course-content, convert to clean HTML suitable for LMS import. You do this conversion directly — no external tools needed.

Conversion rules:

  • Convert all markdown formatting: headings, bold, italic, lists, links, tables, code blocks, blockquotes
  • Wrap in a minimal HTML document structure:
<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
  <title>{page title}</title>
</head>
<body>
  {converted content}
</body>
</html>
  • Use inline styles only for essential formatting. Do not reference external CSS — it will not transfer to the LMS. Keep styling minimal and semantic. The LMS applies its own theme.
  • Preserve heading hierarchy: `# ` becomes `<h1>`, `## ` becomes `<h2>`, etc.
  • Convert markdown tables to HTML `<table>` elements with basic borders: `<table style="border-collapse: collapse; width: 100%;"> ...`
  • Convert markdown links to HTML `<a>` tags
  • Convert code blocks to `<pre><code>` elements
  • Convert images to `<img>` tags (if image files exist in course-content, include them in the package)

Save converted HTML files in the export directory structure:

  • `modules/mod-{N}-page-{M}.html` for module content pages
  • `syllabus.html` for the syllabus
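The shape of that conversion can be sketched as follows. This is a deliberately tiny subset (headings, emphasis, links, paragraphs) for illustration; the full rule list above, including tables, code blocks, blockquotes, and images, still applies during actual generation.

```python
import re
from html import escape

def inline(text: str) -> str:
    """Illustrative inline conversion: bold, italic, links."""
    text = escape(text)
    text = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", text)
    text = re.sub(r"\*(.+?)\*", r"<em>\1</em>", text)
    text = re.sub(r"\[(.+?)\]\((.+?)\)", r'<a href="\2">\1</a>', text)
    return text

def md_to_html(md: str, title: str) -> str:
    """Convert a small markdown subset and wrap it in the minimal
    document structure shown above."""
    out = []
    for line in md.splitlines():
        m = re.match(r"(#{1,6}) (.*)", line)
        if m:
            level = len(m.group(1))
            out.append(f"<h{level}>{inline(m.group(2))}</h{level}>")
        elif line.strip():
            out.append(f"<p>{inline(line)}</p>")
    body = "\n".join(out)
    return (
        "<!DOCTYPE html>\n<html>\n<head>\n  <meta charset=\"UTF-8\">\n"
        f"  <title>{escape(title)}</title>\n</head>\n<body>\n{body}\n</body>\n</html>"
    )
```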

A4. Generate Assignment XML

For each assessment identified in the course content, generate the appropriate XML format.

For essay, project, and upload-type assessments — assignment XML:

<?xml version="1.0" encoding="UTF-8"?>
<assignment identifier="assign-{id}"
  xmlns="http://www.imsglobal.org/xsd/imscc_extensions/assignment">
  <title>{title}</title>
  <text texttype="text/html">{description HTML}</text>
  <gradable points_possible="{points}">{grading type}</gradable>
  <submission_formats>
    <format type="online_text_entry"/>
    <format type="online_upload"/>
  </submission_formats>
</assignment>

Include rubric criteria in the description HTML if a rubric file exists for the assessment. Format the rubric as an HTML table within the `<text>` element so it is visible to both instructors and students after import.

For quiz-type assessments — QTI XML (IMS Question & Test Interoperability):

<?xml version="1.0" encoding="UTF-8"?>
<questestinterop xmlns="http://www.imsglobal.org/xsd/ims_qtiasiv1p2">
  <assessment ident="quiz-{id}" title="{title}">
    <section ident="section-1">
      <!-- Multiple choice question -->
      <item ident="q-1" title="{question title}">
        <presentation>
          <material>
            <mattext texttype="text/html">{question text}</mattext>
          </material>
          <response_lid ident="resp-1" rcardinality="Single">
            <render_choice>
              <response_label ident="opt-a">
                <material>
                  <mattext>{option A text}</mattext>
                </material>
              </response_label>
              <response_label ident="opt-b">
                <material>
                  <mattext>{option B text}</mattext>
                </material>
              </response_label>
              <response_label ident="opt-c">
                <material>
                  <mattext>{option C text}</mattext>
                </material>
              </response_label>
              <response_label ident="opt-d">
                <material>
                  <mattext>{option D text}</mattext>
                </material>
              </response_label>
            </render_choice>
          </response_lid>
        </presentation>
        <resprocessing>
          <outcomes>
            <decvar maxvalue="1" minvalue="0" varname="SCORE" vartype="Decimal"/>
          </outcomes>
          <respcondition continue="No">
            <conditionvar>
              <varequal respident="resp-1">{correct option ident}</varequal>
            </conditionvar>
            <setvar action="Set" varname="SCORE">1</setvar>
            <displayfeedback feedbacktype="Response" linkrefid="correct"/>
          </respcondition>
        </resprocessing>
        <itemfeedback ident="correct">
          <flow_mat>
            <material>
              <mattext texttype="text/html">{feedback explaining why}</mattext>
            </material>
          </flow_mat>
        </itemfeedback>
      </item>
      <!-- Repeat for each question -->
    </section>
  </assessment>
</questestinterop>

Quiz generation notes:

  • Generate questions from the assessment description, rubric criteria, and learning objectives mapped in the alignment matrix
  • Include elaborated feedback for each question — not just "correct/incorrect" but explaining WHY the answer is right. Elaborated feedback produces significantly larger learning gains [Assessment-8] [T1]
  • Quiz question generation is best-effort. The instructional designer should review and edit questions in the LMS after import. Flag this clearly: "Quiz questions are auto-generated from your assessment specs. Review each question in your LMS before publishing to students."
  • Support these question types: multiple choice, true/false, short answer (essay type in QTI)

For discussion-type activities — discussion topic XML:

<?xml version="1.0" encoding="UTF-8"?>
<topic xmlns="http://www.imsglobal.org/xsd/imsccv1p3/imsdt_v1p3">
  <title>{title}</title>
  <text texttype="text/html">{discussion prompt HTML}</text>
</topic>

A5. Package as ZIP

Assemble all generated files into the Common Cartridge package:

EXPORT_DIR=$(mktemp -d)
mkdir -p "$EXPORT_DIR/modules"
mkdir -p "$EXPORT_DIR/assignments"
mkdir -p "$EXPORT_DIR/quizzes"
mkdir -p "$EXPORT_DIR/discussions"

# Write imsmanifest.xml to $EXPORT_DIR/imsmanifest.xml
# Write module HTML files to $EXPORT_DIR/modules/
# Write assignment XML files to $EXPORT_DIR/assignments/
# Write quiz QTI XML files to $EXPORT_DIR/quizzes/
# Write discussion XML files to $EXPORT_DIR/discussions/
# Write syllabus HTML to $EXPORT_DIR/syllabus.html

# Package from inside the staging dir so archive paths are relative;
# keep an absolute handle on the project root for the mv back
PROJECT_DIR=$(pwd)
cd "$EXPORT_DIR"
zip -r course-export.imscc .
mv course-export.imscc "$PROJECT_DIR/.idstack/course-export.imscc"
cd "$PROJECT_DIR"
rm -rf "$EXPORT_DIR"
echo "Export saved to .idstack/course-export.imscc"

Write each file individually using the Write tool, then package with Bash. This ensures every file is correctly formed before zipping.

A6. Verify the Package

# Verify it's a valid zip
file .idstack/course-export.imscc
# List contents
unzip -l .idstack/course-export.imscc
# Count items
echo "---"
echo "Module pages: $(unzip -l .idstack/course-export.imscc | grep 'modules/' | wc -l)"
echo "Assignments: $(unzip -l .idstack/course-export.imscc | grep 'assignments/' | wc -l)"
echo "Quizzes: $(unzip -l .idstack/course-export.imscc | grep 'quizzes/' | wc -l)"
echo "Discussions: $(unzip -l .idstack/course-export.imscc | grep 'discussions/' | wc -l)"

Present verification to the user:

## Common Cartridge Export Complete

File: .idstack/course-export.imscc
Size: {X} KB
Contents:
  - imsmanifest.xml
  - {N} module pages (.html)
  - {M} assignment documents (.xml)
  - {P} quiz documents (.xml, QTI format)
  - {Q} discussion topics (.xml)
  - Syllabus (.html)

Quiz questions are auto-generated from your assessment specs. Review each
question in your LMS before publishing to students.

To import into your LMS:
- **Canvas:** Settings > Import Course Content > Common Cartridge 1.x
- **Blackboard:** Course Management > Import > IMS Common Cartridge
- **Moodle:** Site Administration > Restore > Upload .imscc file
- **D2L/Brightspace:** Course Admin > Import/Export/Copy > Import Components

Path B: Canvas API Push

B1. Get Credentials

Ask the user for Canvas connection details. Use AskUserQuestion:

"I need three things to push your course to Canvas:

  1. Canvas URL — Your institution's Canvas address (e.g., `https://canvas.university.edu`)

  2. Access token — Generate one in Canvas: Account > Settings > scroll to 'Approved Integrations' > New Access Token

  3. Course ID — The number in the URL when you open the course (e.g., `https://canvas.university.edu/courses/12345` -> course ID is `12345`). Use an existing empty course shell, or create a new course first in Canvas.

Your token is used for this session only and is NEVER saved to any file."

B2. Validate Connection

RESPONSE=$(curl -s -w "\n%{http_code}" \
  -H "Authorization: Bearer $TOKEN" \
  "$BASE_URL/api/v1/users/self" 2>&1)
HTTP_CODE=$(echo "$RESPONSE" | tail -1)
BODY=$(echo "$RESPONSE" | sed '$d')  # drop the status-code line (portable; head -n -1 is GNU-only)
echo "HTTP: $HTTP_CODE"
echo "$BODY" | head -5

Handle errors:

  • HTTP 401: "Token rejected. Make sure you copied the full token. In Canvas: Account > Settings > New Access Token."
  • HTTP 403: "Access denied. Your token may not have the right permissions for this course. You need at least Teacher or Designer role."
  • Network error: "Can't reach Canvas at that URL. Check the address and make sure it includes `https://`."

Verify course access:

curl -s -w "\n%{http_code}" \
  -H "Authorization: Bearer $TOKEN" \
  "$BASE_URL/api/v1/courses/$COURSE_ID" 2>&1
  • HTTP 404: "Course not found. Check the course ID. You can find it in the URL when you open the course in Canvas."
  • HTTP 403: "You don't have access to this course. Ask your Canvas admin for Teacher or Designer role."

SECURITY RULE: The token variable is used ONLY in curl commands within this section. NEVER write the token to the manifest, to any file, or to conversation output. After all API calls are complete, the token is discarded.

B3. Read Course Content

Same as A1 — read all files in `.idstack/course-content/` and the manifest. Convert markdown content to HTML for the API calls (Canvas pages and assignments accept HTML in their body fields).

B4. Create Modules

For each module in the course content:

RESPONSE=$(curl -s -w "\n%{http_code}" -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -d "module[name]={module title}&module[position]={position}" \
  "$BASE_URL/api/v1/courses/$COURSE_ID/modules" 2>&1)
HTTP_CODE=$(echo "$RESPONSE" | tail -1)
BODY=$(echo "$RESPONSE" | sed '$d')
echo "HTTP: $HTTP_CODE"
MODULE_ID=$(echo "$BODY" | grep -o '"id":[0-9]*' | head -1 | cut -d: -f2)
echo "MODULE_ID=$MODULE_ID"

Publish the module:

curl -s -X PUT \
  -H "Authorization: Bearer $TOKEN" \
  -d "module[published]=true" \
  "$BASE_URL/api/v1/courses/$COURSE_ID/modules/$MODULE_ID"

Store the returned module ID for adding items in subsequent steps.
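The grep-based ID extraction above works for simple responses but can match a nested id first. Where python3 is available, JSON parsing is more robust — a sketch (function name illustrative):

```python
import json

def extract_id(body: str):
    """Pull the top-level "id" from a Canvas JSON response body.
    Returns None if the body is not valid JSON."""
    try:
        return json.loads(body).get("id")
    except json.JSONDecodeError:
        return None
```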

B5. Create Pages and Add to Modules

For each module page (content, syllabus):

# Create the page
RESPONSE=$(curl -s -w "\n%{http_code}" -X POST \
  -H "Authorization: Bearer $TOKEN" \
  --data-urlencode "wiki_page[title]={page title}" \
  --data-urlencode "wiki_page[body]={html content}" \
  -d "wiki_page[published]=true" \
  "$BASE_URL/api/v1/courses/$COURSE_ID/pages" 2>&1)
HTTP_CODE=$(echo "$RESPONSE" | tail -1)
BODY=$(echo "$RESPONSE" | sed '$d')
PAGE_URL=$(echo "$BODY" | grep -o '"url":"[^"]*"' | head -1 | cut -d'"' -f4)
echo "HTTP: $HTTP_CODE"
echo "PAGE_URL=$PAGE_URL"

Then add the page to its module:

curl -s -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -d "module_item[title]={page title}&module_item[type]=Page&module_item[page_url]=$PAGE_URL&module_item[published]=true" \
  "$BASE_URL/api/v1/courses/$COURSE_ID/modules/$MODULE_ID/items"

B6. Create Assignments

For each assessment (essay, project, upload type):

RESPONSE=$(curl -s -w "\n%{http_code}" -X POST \
  -H "Authorization: Bearer $TOKEN" \
  --data-urlencode "assignment[name]={title}" \
  --data-urlencode "assignment[description]={html description with rubric}" \
  -d "assignment[points_possible]={points}" \
  -d "assignment[submission_types][]=online_text_entry" \
  -d "assignment[submission_types][]=online_upload" \
  -d "assignment[published]=false" \
  "$BASE_URL/api/v1/courses/$COURSE_ID/assignments" 2>&1)
HTTP_CODE=$(echo "$RESPONSE" | tail -1)
BODY=$(echo "$RESPONSE" | sed '$d')
ASSIGN_ID=$(echo "$BODY" | grep -o '"id":[0-9]*' | head -1 | cut -d: -f2)
echo "HTTP: $HTTP_CODE"
echo "ASSIGN_ID=$ASSIGN_ID"

Add the assignment to its module:

curl -s -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -d "module_item[title]={title}&module_item[type]=Assignment&module_item[content_id]=$ASSIGN_ID&module_item[published]=true" \
  "$BASE_URL/api/v1/courses/$COURSE_ID/modules/$MODULE_ID/items"

Note: Assignments are created unpublished by default. The instructional designer should review descriptions, rubrics, and due dates in Canvas before publishing.

B7. Create Discussion Topics

For each discussion activity:

RESPONSE=$(curl -s -w "\n%{http_code}" -X POST \
  -H "Authorization: Bearer $TOKEN" \
  --data-urlencode "title={title}" \
  --data-urlencode "message={html discussion prompt}" \
  -d "published=false" \
  "$BASE_URL/api/v1/courses/$COURSE_ID/discussion_topics" 2>&1)
HTTP_CODE=$(echo "$RESPONSE" | tail -1)
BODY=$(echo "$RESPONSE" | sed '$d')
TOPIC_ID=$(echo "$BODY" | grep -o '"id":[0-9]*' | head -1 | cut -d: -f2)
echo "HTTP: $HTTP_CODE"
echo "TOPIC_ID=$TOPIC_ID"

Add to its module:

curl -s -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -d "module_item[title]={title}&module_item[type]=Discussion&module_item[content_id]=$TOPIC_ID&module_item[published]=true" \
  "$BASE_URL/api/v1/courses/$COURSE_ID/modules/$MODULE_ID/items"

B8. Error Handling

Each API call is wrapped in error checking. Handle these cases:

  • HTTP 401: Token expired or invalid. Stop and ask for a new token.
  • HTTP 403: Insufficient permissions. Report which item failed and why.
  • HTTP 404: Course or resource not found. Report the specific endpoint.
  • HTTP 422: Validation error. Report the error message from Canvas. Common causes: duplicate page titles, missing required fields.
  • HTTP 429: Rate limited. Wait 10 seconds, retry once. If still 429: "Canvas is rate-limiting requests. Waiting 30 seconds before continuing." Wait 30 seconds and retry. If still failing, log the item as failed and continue with the rest.
  • Timeout / network error: Log the item as failed, continue with the rest.

If any single API call fails, log it and continue with remaining items. Do NOT abort the entire export on a single failure. Present a summary at the end showing what succeeded and what failed.
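The 429 policy above (wait 10s, retry; wait 30s, retry; then log and move on) can be sketched as a small wrapper. The injected `sleep` is there only to keep the sketch testable; `do_request` stands in for any of the curl calls and returns an HTTP status code:

```python
import time

def call_with_retry(do_request, waits=(10, 30), sleep=time.sleep):
    """Retry on HTTP 429 with the fixed waits above; any other status
    (success or failure) is returned to the caller, which logs failed
    items and continues with the rest of the export."""
    status = do_request()
    for wait in waits:
        if status != 429:
            break
        sleep(wait)
        status = do_request()
    return status
```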

B9. Present Summary

## Canvas Push Complete

Course: {title}
URL: {Canvas URL}/courses/{course_id}

| Item Type    | Created | Failed | Skipped |
|--------------|---------|--------|---------|
| Modules      | {N}     | {0}    | {0}     |
| Pages        | {M}     | {0}    | {0}     |
| Assignments  | {P}     | {0}    | {0}     |
| Discussions  | {Q}     | {0}    | {0}     |

Assignments and discussions are unpublished. Review them in Canvas before
publishing to students.

Open your course: {Canvas URL}/courses/{course_id}

If any items failed:

### Failed Items

| Item | Type | Error |
|------|------|-------|
| {name} | {type} | {error message} |

You can create these items manually in Canvas, or run `/course-export` again
to retry the failed items.

Path C: SCORM 1.2 Package Export

C1. Read Course Content Files

Read all files in `.idstack/course-content/`:

find .idstack/course-content/ -type f | sort

If no course content files exist, tell the user: "No course content found in `.idstack/course-content/`. Run `/course-builder` first to generate content."

C2. Create SCORM package structure

EXPORT_DIR=$(mktemp -d)
mkdir -p "$EXPORT_DIR/content"
echo "EXPORT_DIR=$EXPORT_DIR"

C3. Generate HTML content pages

For each module page in `.idstack/course-content/`, convert the Markdown content to a self-contained HTML page. Each page becomes a SCO (Shareable Content Object).

Write each HTML file to `$EXPORT_DIR/content/`:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>[Module Title]</title>
  <style>
    body { font-family: system-ui, sans-serif; max-width: 800px; margin: 2rem auto; padding: 0 1rem; line-height: 1.6; }
    h1 { color: #1a1a2e; }
    h2 { color: #16213e; margin-top: 2rem; }
    table { border-collapse: collapse; width: 100%; margin: 1rem 0; }
    th, td { border: 1px solid #ddd; padding: 0.5rem; text-align: left; }
    th { background: #f5f5f5; }
  </style>
</head>
<body>
  [Converted HTML content]
</body>
</html>

Name files as `module-01.html`, `module-02.html`, etc., matching module order.

C4. Generate imsmanifest.xml

Write `$EXPORT_DIR/imsmanifest.xml` following the SCORM 1.2 specification:

<?xml version="1.0" encoding="UTF-8"?>
<manifest identifier="idstack-course-[sanitized-title]"
  version="1.0"
  xmlns="http://www.imsproject.org/xsd/imscp_rootv1p1p2"
  xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_rootv1p2"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.imsproject.org/xsd/imscp_rootv1p1p2 imscp_rootv1p1p2.xsd
                       http://www.adlnet.org/xsd/adlcp_rootv1p2 adlcp_rootv1p2.xsd">

  <metadata>
    <schema>ADL SCORM</schema>
    <schemaversion>1.2</schemaversion>
  </metadata>

  <organizations default="idstack-org">
    <organization identifier="idstack-org">
      <title>[Course Title]</title>
      <!-- One item per module -->
      <item identifier="item-01" identifierref="resource-01">
        <title>[Module 1 Title]</title>
      </item>
      <!-- ... more items ... -->
    </organization>
  </organizations>

  <resources>
    <!-- One resource per SCO -->
    <resource identifier="resource-01" type="webcontent" adlcp:scormtype="sco"
              href="content/module-01.html">
      <file href="content/module-01.html"/>
    </resource>
    <!-- ... more resources ... -->
  </resources>
</manifest>

Rules for generating the manifest:

  • Each module becomes one `<item>` pointing to one `<resource>`
  • Each resource is a SCO (`type="webcontent"`, `adlcp:scormtype="sco"`)
  • Identifiers must be unique within the manifest
  • Sanitize the course title for use in the manifest identifier (lowercase, hyphens, no special characters)
  • If modules have sub-modules, nest `<item>` elements accordingly
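The title-sanitization rule can be sketched in a few lines (prefix taken from the manifest template above):

```python
import re

def manifest_identifier(title: str) -> str:
    """Sanitize a course title per the rule above: lowercase, hyphens,
    no special characters."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"idstack-course-{slug}"
```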

C5. Package as ZIP

cd "$EXPORT_DIR"
zip -r scorm-export.zip imsmanifest.xml content/
mv "$EXPORT_DIR/scorm-export.zip" .idstack/scorm-export.zip
echo "SCORM package saved to .idstack/scorm-export.zip"

C6. Verify package

file .idstack/scorm-export.zip
unzip -l .idstack/scorm-export.zip | head -20
echo "Total files: $(unzip -l .idstack/scorm-export.zip | tail -1)"

Verify:

  • `imsmanifest.xml` exists at the root of the ZIP
  • All `<file href>` references in the manifest have matching files in the ZIP
  • The ZIP is not empty

C7. Present export summary

## SCORM 1.2 Export Complete

File: .idstack/scorm-export.zip
Format: SCORM 1.2
SCOs: [count] (one per module)
Total files: [count]

### How to import

- **Any LMS:** Upload the .zip file through your LMS admin interface.
  Most LMS platforms auto-detect SCORM packages.
- **Canvas:** Settings > Import Course Content > SCORM package
- **Moodle:** Add Activity > SCORM package > Upload
- **Blackboard:** Content > Build Content > SCORM package
- **Corporate LMS (Cornerstone, SAP SuccessFactors, etc.):**
  Upload through your content management interface

### Limitations

- This SCORM package contains static HTML content. Interactive elements
  (drag-and-drop, branching scenarios) are not generated.
- SCORM API tracking (completion, score reporting) is not included.
  The LMS will mark the SCO as complete when the learner opens it.
- For richer interactivity, author in Articulate Rise or Storyline and
  use idstack's /course-quality-review and /red-team on the exported package.

C8. Cleanup

rm -rf "$EXPORT_DIR"

Manifest Write

After export completes (any path), update the project manifest with export metadata.

CRITICAL -- Manifest Integrity Rules:

  1. If a manifest already exists, READ it first with the Read tool.
  2. Modify ONLY the `export_metadata` section and the `updated` timestamp. Preserve all other sections unchanged — `context`, `needs_analysis`, `learning_objectives`, `quality_review`, `import_metadata`, and any other sections must remain exactly as they were.
  3. Before writing, verify the JSON is valid: matching braces, proper commas, quoted strings, no trailing commas.
  4. Update the top-level `updated` timestamp to reflect the current time.
  5. If this is a new manifest (unlikely for export, but possible), initialize ALL sections with empty/default values so downstream skills find the expected structure.
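One way to honor rules 1 through 4 is a read-modify-write via `python3` (as the preambles already do), so every untouched section survives the round trip. A sketch; the `"scorm"` value is a placeholder for this session's actual format:

```shell
# Sketch: update only export_metadata and the updated timestamp.
update_manifest() {
  python3 - "$1" <<'EOF'
import json, sys, datetime

path = sys.argv[1]
with open(path) as f:
    m = json.load(f)                 # rule 1: read first; fails loudly on bad JSON
now = datetime.datetime.now(datetime.timezone.utc).isoformat()
m.setdefault("export_metadata", {})["exported_at"] = now   # rule 2: only this section
m["export_metadata"]["format"] = "scorm"                   # placeholder value
m["updated"] = now                   # rule 4: bump the top-level timestamp
with open(path, "w") as f:
    json.dump(m, f, indent=2)        # rule 3: json.dump always emits valid JSON
EOF
}
```

Calling `update_manifest .idstack/project.json` would then apply the update; extend the script body with the real export fields.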

Readiness Info

Before starting the export, check the manifest for prior review data. If any of these sections exist, show a brief readiness summary as context (not a gate):

Export readiness:
  Quality review:       78/100 ✓ (reviewed 2026-04-08)
  Red-team audit:       2 critical, 3 warning
  Accessibility review: WCAG score 70/100, 1 AA violation

If a section doesn't exist, show: "Not reviewed — run /[skill-name] for analysis."

This is informational; export proceeds regardless. The user can choose to address findings first or export now. No AskUserQuestion is needed: just show the info and continue.

Write Export Metadata

Add or update the `export_metadata` field at the root level:

{
  "export_metadata": {
    "exported_at": "ISO-8601 timestamp",
    "format": "imscc|canvas-api|scorm",
    "destination": "file path (.idstack/course-export.imscc) or Canvas URL",
    "items_exported": {
      "modules": 0,
      "pages": 0,
      "assignments": 0,
      "quizzes": 0,
      "discussions": 0
    },
    "failed_items": [],
    "notes": "",
    "readiness_check": {
      "quality_score": 0,
      "quality_reviewed": true,
      "red_team_critical": 0,
      "red_team_reviewed": false,
      "accessibility_critical": 0,
      "accessibility_reviewed": false,
      "verdict": "export_clean|export_with_warnings"
    }
  }
}

The `readiness_check` section captures the state of prior reviews at export time. Populate it by reading the `quality_review`, `red_team_audit`, and `accessibility_review` sections from the manifest (if they exist). The `verdict` is:

  • `export_clean`: all reviewed, no critical findings
  • `export_with_warnings`: reviewed but has critical/warning findings
  • `export_blocked`: not used (export never blocks, advisory only)
  • Empty string if no reviews exist
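The verdict rules reduce to a small decision function. A sketch, where the two inputs are assumptions about how the counts get tallied: the number of review sections present in the manifest, and the total critical/warning findings across them:

```shell
# Sketch: map review state to a verdict string per the rules above.
verdict() {
  reviews=$1    # review sections present in the manifest
  findings=$2   # total critical/warning findings across them
  if [ "$reviews" -eq 0 ]; then
    echo ""                       # no reviews exist
  elif [ "$findings" -gt 0 ]; then
    echo "export_with_warnings"
  else
    echo "export_clean"
  fi
}
```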

Write the manifest, then confirm:

"Your export metadata has been saved to .idstack/project.json.

Your course has been exported. Verify that the LMS import preserved everything correctly. Pay particular attention to:

  • Quiz questions (auto-generated, may need editing)
  • Assignment rubrics (verify formatting survived the transfer)
  • Discussion prompts (check that instructions are clear)
  • Module sequencing (verify order matches your intended flow)"

Manifest Schema Reference

The complete manifest schema. Use this as the template when creating or validating the manifest. All fields shown below must exist in the JSON.

{
  "version": "1.2",
  "project_name": "",
  "created": "",
  "updated": "",
  "import_metadata": {
    "source": "",
    "imported_at": "",
    "source_lms": "",
    "items_imported": {
      "modules": 0,
      "objectives": 0,
      "assessments": 0,
      "activities": 0,
      "pages": 0
    },
    "quality_flags": 0
  },
  "export_metadata": {
    "exported_at": "",
    "format": "",
    "destination": "",
    "items_exported": {
      "modules": 0,
      "pages": 0,
      "assignments": 0,
      "quizzes": 0,
      "discussions": 0
    },
    "failed_items": [],
    "notes": "",
    "readiness_check": {
      "quality_score": 0,
      "quality_reviewed": false,
      "red_team_critical": 0,
      "red_team_reviewed": false,
      "accessibility_critical": 0,
      "accessibility_reviewed": false,
      "verdict": ""
    }
  },
  "context": {
    "modality": "",
    "timeline": "",
    "class_size": "",
    "institution_type": "",
    "available_tech": []
  },
  "needs_analysis": {
    "organizational_context": {
      "problem_statement": "",
      "stakeholders": [],
      "current_state": "",
      "desired_state": "",
      "performance_gap": ""
    },
    "task_analysis": {
      "job_tasks": [],
      "prerequisite_knowledge": [],
      "tools_and_resources": []
    },
    "learner_profile": {
      "prior_knowledge_level": "",
      "motivation_factors": [],
      "demographics": "",
      "access_constraints": [],
      "learning_preferences_note": "Learning styles are NOT used as a differentiation basis per evidence. Prior knowledge is the primary differentiator."
    },
    "training_justification": {
      "justified": true,
      "confidence": 0,
      "rationale": "",
      "alternatives_considered": []
    }
  },
  "learning_objectives": {
    "ilos": [],
    "alignment_matrix": {
      "ilo_to_activity": {},
      "ilo_to_assessment": {},
      "gaps": []
    },
    "expertise_reversal_flags": []
  },
  "course_content": {
    "generated_at": "",
    "modules": [],
    "syllabus": "",
    "assessments": [],
    "rubrics": [],
    "discussions": []
  },
  "quality_review": {
    "last_reviewed": "",
    "qm_standards": {
      "course_overview": {"status": "", "findings": []},
      "learning_objectives": {"status": "", "findings": []},
      "assessment": {"status": "", "findings": []},
      "instructional_materials": {"status": "", "findings": []},
      "learning_activities": {"status": "", "findings": []},
      "course_technology": {"status": "", "findings": []},
      "learner_support": {"status": "", "findings": []},
      "accessibility": {"status": "", "findings": []}
    },
    "coi_presence": {
      "teaching_presence": {"score": 0, "findings": []},
      "social_presence": {"score": 0, "findings": []},
      "cognitive_presence": {"score": 0, "findings": []}
    },
    "alignment_audit": {"findings": []},
    "overall_score": 0,
    "recommendations": []
  }
}

Feedback

Have feedback or a feature request? Share it here — no GitHub account needed.


Completion: Timeline Logging

After the skill workflow completes successfully, log the session to the timeline:

~/.claude/skills/idstack/bin/idstack-timeline-log '{"skill":"course-export","event":"completed"}'

Replace the JSON above with actual data from this session. Include skill-specific fields where available (scores, counts, flags). Log synchronously (no background &).
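A hypothetical filled-in payload for a SCORM export session; every field beyond `skill` and `event` is an assumption to adapt to the actual session. Validating the JSON first avoids corrupting the timeline:

```shell
# Placeholder payload; swap in this session's real counts before logging.
PAYLOAD='{"skill":"course-export","event":"completed","format":"scorm","modules_exported":6,"failed_items":0}'
printf '%s' "$PAYLOAD" | python3 -m json.tool >/dev/null && echo "payload ok"
# then: ~/.claude/skills/idstack/bin/idstack-timeline-log "$PAYLOAD"
```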

If you discover a non-obvious project-specific quirk during this session (LMS behavior, import format issue, course structure pattern), also log it as a learning:

~/.claude/skills/idstack/bin/idstack-learnings-log '{"skill":"course-export","type":"operational","key":"SHORT_KEY","insight":"DESCRIPTION","confidence":8,"source":"observed"}'