Marketplace notebooklm
Automate Google NotebookLM - create notebooks, add sources, generate podcasts/videos/quizzes, download artifacts. Activates on explicit /notebooklm or intent like "create a podcast about X"
```
git clone https://github.com/aiskillstore/marketplace

T=$(mktemp -d) && git clone --depth=1 https://github.com/aiskillstore/marketplace "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/teng-lin/notebooklm" ~/.claude/skills/aiskillstore-marketplace-notebooklm-f1c866 && rm -rf "$T"
```
skills/teng-lin/notebooklm/SKILL.md

NotebookLM Automation
Automate Google NotebookLM: create notebooks, add sources, chat with content, generate artifacts (podcasts, videos, quizzes), and download results.
Prerequisites
IMPORTANT: Before using any command, you MUST authenticate:
```
notebooklm login   # Opens browser for Google OAuth
notebooklm list    # Verify authentication works
```
If commands fail with authentication errors, re-run `notebooklm login`.
CI/CD, Multiple Accounts, and Parallel Agents
For automated environments, multiple accounts, or parallel agent workflows:
| Variable | Purpose |
|---|---|
| `NOTEBOOKLM_HOME` | Custom config directory (default: `~/.notebooklm`) |
| `NOTEBOOKLM_AUTH_JSON` | Inline auth JSON - no file writes needed |
CI/CD setup: Set `NOTEBOOKLM_AUTH_JSON` from a secret containing your `storage_state.json` contents.
Multiple accounts: Use a different `NOTEBOOKLM_HOME` directory per account.
Parallel agents: The CLI stores notebook context in a shared file (`~/.notebooklm/context.json`). Multiple concurrent agents using `notebooklm use` can overwrite each other's context.
Solutions for parallel workflows:
- Always use explicit notebook IDs (recommended): pass `-n <notebook_id>` (for `wait`/`download` commands) or `--notebook <notebook_id>` (for others) instead of relying on `use`
- Per-agent isolation: set a unique `NOTEBOOKLM_HOME` per agent: `export NOTEBOOKLM_HOME=/tmp/agent-$ID`
- Use full UUIDs: avoid partial IDs in automation (they can become ambiguous)
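The per-agent isolation above can be sketched in Python. The helper name `agent_env` and the `/tmp` base path are illustrative, not part of the CLI; the resulting dict would be passed as `env=` when invoking `notebooklm` via `subprocess`:

```python
import os

def agent_env(agent_id: str, base: str = "/tmp") -> dict:
    """Build an environment with an isolated NOTEBOOKLM_HOME for one agent.

    Pass the result as env= when invoking the notebooklm CLI via subprocess,
    so concurrent agents never share context.json.
    """
    env = dict(os.environ)
    env["NOTEBOOKLM_HOME"] = f"{base}/agent-{agent_id}"
    return env
```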
Agent Setup Verification
Before starting workflows, verify the CLI is ready:
- `notebooklm status` → should show "Authenticated as: email@..."
- `notebooklm list --json` → should return valid JSON (even if the notebooks list is empty)
- If either fails → run `notebooklm login`
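The two checks above can be wrapped in a small readiness helper. This is a sketch: `cli_ready` is a hypothetical name, and it assumes only that the CLI returns a non-zero exit code on failure; the `run` parameter is injectable so the logic can be tested without the CLI installed:

```python
import subprocess

def cli_ready(run=subprocess.run) -> bool:
    """Return True only if both verification commands exit successfully."""
    for cmd in (["notebooklm", "status"], ["notebooklm", "list", "--json"]):
        if run(cmd, capture_output=True).returncode != 0:
            return False
    return True
```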
When This Skill Activates
Explicit: User says "/notebooklm", "use notebooklm", or mentions the tool by name
Intent detection: Recognize requests like:
- "Create a podcast about [topic]"
- "Summarize these URLs/documents"
- "Generate a quiz from my research"
- "Turn this into an audio overview"
- "Add these sources to NotebookLM"
Autonomy Rules
Run automatically (no confirmation):
- `notebooklm status` - check context
- `notebooklm list` - list notebooks
- `notebooklm source list` - list sources
- `notebooklm artifact list` - list artifacts
- `notebooklm artifact wait` - wait for artifact completion (in subagent context)
- `notebooklm source wait` - wait for source processing (in subagent context)
- `notebooklm research status` - check research status
- `notebooklm research wait` - wait for research (in subagent context)
- `notebooklm use <id>` - set context (⚠️ SINGLE-AGENT ONLY - use the `-n` flag in parallel workflows)
- `notebooklm create` - create notebook
- `notebooklm ask "..."` - chat queries
- `notebooklm source add` - add sources
Ask before running:
- `notebooklm delete` - destructive
- `notebooklm generate *` - long-running, may fail
- `notebooklm download *` - writes to filesystem
- `notebooklm artifact wait` - long-running (when in main conversation)
- `notebooklm source wait` - long-running (when in main conversation)
- `notebooklm research wait` - long-running (when in main conversation)
Quick Reference
| Task | Command |
|---|---|
| Authenticate | `notebooklm login` |
| List notebooks | `notebooklm list` |
| Create notebook | `notebooklm create "Title"` |
| Set context | `notebooklm use <notebook_id>` |
| Show context | `notebooklm status` |
| Add URL source | `notebooklm source add "https://example.com"` |
| Add file | `notebooklm source add ./doc.pdf` |
| Add YouTube | `notebooklm source add "https://youtube.com/watch?v=..."` |
| List sources | `notebooklm source list` |
| Wait for source processing | `notebooklm source wait <source_id>` |
| Web research (fast) | `notebooklm source add-research "query" --mode fast` |
| Web research (deep) | `notebooklm source add-research "query" --mode deep` |
| Check research status | `notebooklm research status` |
| Wait for research | `notebooklm research wait` |
| Chat | `notebooklm ask "question"` |
| Chat (new conversation) | `notebooklm ask "question" --new` |
| Chat (specific sources) | `notebooklm ask "question" -s <source_id>` |
| Chat (with references) | `notebooklm ask "question" --json` |
| Get source fulltext | `notebooklm source fulltext <source_id>` |
| Get source guide | `notebooklm source guide <source_id>` |
| Generate podcast | `notebooklm generate audio "Focus prompt"` |
| Generate podcast (JSON) | `notebooklm generate audio "..." --json` |
| Generate podcast (specific sources) | `notebooklm generate audio "..." -s <source_id>` |
| Generate video | `notebooklm generate video` |
| Generate quiz | `notebooklm generate quiz` |
| Check artifact status | `notebooklm artifact list` |
| Wait for completion | `notebooklm artifact wait <artifact_id>` |
| Download audio | `notebooklm download audio ./podcast.mp3` |
| Download video | `notebooklm download video ./video.mp4` |
| Delete notebook | `notebooklm delete <notebook_id>` |
Parallel safety: Use explicit notebook IDs in parallel workflows. Commands supporting the `-n` shorthand: `artifact wait`, `source wait`, `research wait`/`status`, `download *`. Download commands also support `-a`/`--artifact`. Other commands use `--notebook`. For chat, use `--new` to start fresh conversations (avoids conversation ID conflicts).
Partial IDs: Use the first 6+ characters of a UUID. The prefix must be unique (the command fails if it is ambiguous). Works for `use`, `delete`, and `wait` commands. For automation, prefer full UUIDs to avoid ambiguity.
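The prefix-matching behavior described above can be illustrated with a small resolver (a hypothetical helper, not part of the CLI): a prefix is accepted only when exactly one known ID starts with it.

```python
def resolve_partial_id(prefix: str, known_ids: list[str]) -> str:
    """Resolve a partial UUID prefix; fail if it matches zero or multiple IDs."""
    matches = [i for i in known_ids if i.startswith(prefix)]
    if len(matches) == 0:
        raise LookupError(f"no ID starts with {prefix!r}")
    if len(matches) > 1:
        raise LookupError(f"ambiguous prefix {prefix!r}: {matches}")
    return matches[0]
```

This is why automation should prefer full UUIDs: as notebooks accumulate, a prefix that was unique yesterday can become ambiguous today.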
Command Output Formats
Commands with `--json` return structured data for parsing:
Create notebook:
```
$ notebooklm create "Research" --json
{"id": "abc123de-...", "title": "Research"}
```
Add source:
```
$ notebooklm source add "https://example.com" --json
{"source_id": "def456...", "title": "Example", "status": "processing"}
```
Generate artifact:
```
$ notebooklm generate audio "Focus on key points" --json
{"task_id": "xyz789...", "status": "pending"}
```
Chat with references:
```
$ notebooklm ask "What is X?" --json
{
  "answer": "X is... [1] [2]",
  "conversation_id": "...",
  "turn_number": 1,
  "is_follow_up": false,
  "references": [
    {"source_id": "abc123...", "citation_number": 1, "cited_text": "Relevant passage from source..."},
    {"source_id": "def456...", "citation_number": 2, "cited_text": "Another passage..."}
  ]
}
```
Source fulltext (get indexed content):
```
$ notebooklm source fulltext <source_id> --json
{"source_id": "...", "title": "...", "char_count": 12345, "content": "Full indexed text..."}
```
Understanding citations: The `cited_text` in references is often a snippet or section header, not the full quoted passage. The `start_char`/`end_char` positions reference NotebookLM's internal chunked index, not the raw fulltext. Use `SourceFulltext.find_citation_context()` to locate citations:
```python
fulltext = await client.sources.get_fulltext(notebook_id, ref.source_id)
matches = fulltext.find_citation_context(ref.cited_text)  # Returns list[(context, position)]
if matches:
    context, pos = matches[0]  # First match; check len(matches) > 1 for duplicates
```
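For intuition, here is a rough standalone reimplementation of such a lookup: find each literal occurrence of the cited snippet and return it with surrounding context. The real method may normalize whitespace or match more loosely, so treat this only as a sketch of the idea:

```python
def find_citation_context(content: str, cited_text: str, window: int = 120):
    """Return (context, position) pairs for each literal occurrence of cited_text.

    `position` is an offset into `content`; `context` is the match plus up to
    `window` characters on each side.
    """
    needle = cited_text.strip()
    results, start = [], 0
    while True:
        pos = content.find(needle, start)
        if pos == -1:
            return results
        lo = max(0, pos - window)
        hi = min(len(content), pos + len(needle) + window)
        results.append((content[lo:hi], pos))
        start = pos + 1
```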
Extract IDs: Parse the `id`, `source_id`, or `task_id` field from the JSON output.
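Extracting the relevant ID from any of the outputs above fits in one small helper (the name `extract_id` is illustrative):

```python
import json

def extract_id(raw: str) -> str:
    """Return the first ID-like field found in a --json output line."""
    data = json.loads(raw)
    for key in ("id", "source_id", "task_id"):
        if key in data:
            return data[key]
    raise KeyError("no id/source_id/task_id field in output")
```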
Generation Types
All generate commands support:
- `-s, --source` to use specific source(s) instead of all sources
- `--json` for machine-readable output (returns `task_id` and `status`)
| Type | Command | Downloadable |
|---|---|---|
| Podcast | `notebooklm generate audio` | Yes (.mp3) |
| Video | `notebooklm generate video` | Yes (.mp4) |
| Slides | `notebooklm generate slides` | Yes (.pdf) |
| Infographic | `notebooklm generate infographic` | Yes (.png) |
| Quiz | `notebooklm generate quiz` | No (view in UI) |
| Flashcards | `notebooklm generate flashcards` | No (view in UI) |
| Mind Map | `notebooklm generate mind-map` | No (view in UI) |
| Data Table | `notebooklm generate data-table` | No (export to Sheets) |
| Report | `notebooklm generate report` | No (export to Docs) |
Common Workflows
Research to Podcast (Interactive)
Time: 5-10 minutes total
- `notebooklm create "Research: [topic]"` — if it fails: check auth with `notebooklm login`
- `notebooklm source add` for each URL/document — if one fails: log a warning, continue with the others
- Wait for sources: `notebooklm source list --json` until all status=READY — required before generation
- `notebooklm generate audio "Focus on [specific angle]"` (confirm when asked) — if rate limited: wait 5 min, retry once
- Note the artifact ID returned
- Check `notebooklm artifact list` later for status
- `notebooklm download audio ./podcast.mp3` when complete (confirm when asked)
Research to Podcast (Automated with Subagent)
Time: 5-10 minutes, but continues in background
When user wants full automation (generate and download when ready):
- Create notebook and add sources as usual
- Wait for sources to be ready (use `source wait` or check `source list --json`)
- Run `notebooklm generate audio "..." --json` → parse `artifact_id` from the output
- Spawn a background agent using the Task tool:

```
Task(
    prompt="Wait for artifact {artifact_id} in notebook {notebook_id} to complete, then download.
        Use: notebooklm artifact wait {artifact_id} -n {notebook_id} --timeout 600
        Then: notebooklm download audio ./podcast.mp3 -a {artifact_id} -n {notebook_id}",
    subagent_type="general-purpose"
)
```

- Main conversation continues while the agent waits
Error handling in subagent:
- If `artifact wait` returns exit code 2 (timeout): report the timeout and suggest checking `artifact list`
- If the download fails: check that the artifact status is COMPLETED first
Benefits: Non-blocking, user can do other work, automatic download on completion
Document Analysis
Time: 1-2 minutes
- `notebooklm create "Analysis: [project]"`
- `notebooklm source add ./doc.pdf` (or URLs)
- `notebooklm ask "Summarize the key points"`
- `notebooklm ask "What are the main arguments?"`
- Continue chatting as needed
Bulk Import
Time: Varies by source count
- `notebooklm create "Collection: [name]"`
- Add multiple sources:

```
notebooklm source add "https://url1.com"
notebooklm source add "https://url2.com"
notebooklm source add ./local-file.pdf
```

- `notebooklm source list` to verify
Source limits: Max 50 sources per notebook.
Supported types: PDFs, YouTube URLs, web URLs, Google Docs, text files.
Bulk Import with Source Waiting (Subagent Pattern)
Time: Varies by source count
When adding multiple sources and needing to wait for processing before chat/generation:
- Add sources with `--json` to capture IDs:

```
notebooklm source add "https://url1.com" --json   # → {"source_id": "abc..."}
notebooklm source add "https://url2.com" --json   # → {"source_id": "def..."}
```

- Spawn a background agent to wait for all sources:

```
Task(
    prompt="Wait for sources {source_ids} in notebook {notebook_id} to be ready.
        For each: notebooklm source wait {id} -n {notebook_id} --timeout 120
        Report when all ready or if any fail.",
    subagent_type="general-purpose"
)
```

- Main conversation continues while the agent waits
- Once sources are ready, proceed with chat or generation
Why wait for sources? Sources must be indexed before chat or generation. Takes 10-60 seconds per source.
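The "all sources ready" check can be expressed as a small helper over the `source list --json` output, assuming each source carries a `status` field of `processing`/`ready`/`error` (the helper name is illustrative):

```python
import json

def pending_sources(source_list_json: str) -> list[str]:
    """Return IDs of sources still processing; an empty list means generation can start."""
    sources = json.loads(source_list_json)["sources"]
    return [s["id"] for s in sources if s["status"] == "processing"]
```

A wait loop would re-run `notebooklm source list --json` until this returns an empty list, then check separately for any `error` statuses.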
Deep Web Research (Subagent Pattern)
Time: 2-5 minutes, runs in background
Deep research finds and analyzes web sources on a topic:
- Create notebook: `notebooklm create "Research: [topic]"`
- Start deep research (non-blocking): `notebooklm source add-research "topic query" --mode deep --no-wait`
- Spawn a background agent to wait and import:

```
Task(
    prompt="Wait for research in notebook {notebook_id} to complete and import sources.
        Use: notebooklm research wait -n {notebook_id} --import-all --timeout 300
        Report how many sources were imported.",
    subagent_type="general-purpose"
)
```

- Main conversation continues while the agent waits
- When agent completes, sources are imported automatically
Alternative (blocking): For simple cases, omit `--no-wait`:

```
notebooklm source add-research "topic" --mode deep --import-all
# Blocks for up to 5 minutes
```
When to use each mode:
- `--mode fast`: specific topic, quick overview needed (5-10 sources, seconds)
- `--mode deep`: broad topic, comprehensive analysis needed (20+ sources, 2-5 min)

Research sources:
- `--from web`: search the web (default)
- `--from drive`: search Google Drive
Output Style
Progress updates: Brief status for each step
- "Creating notebook 'Research: AI'..."
- "Adding source: https://example.com..."
- "Starting audio generation... (task ID: abc123)"
Fire-and-forget for long operations:
- Start generation, return artifact ID immediately
- Do NOT poll or wait in main conversation - generation takes 5-45 minutes (see timing table)
- User checks status manually, OR use a subagent with `artifact wait`
JSON output: Use the `--json` flag for machine-readable output:

```
notebooklm list --json
notebooklm source list --json
notebooklm artifact list --json
```
JSON schemas (key fields):
`notebooklm list --json`:

```
{"notebooks": [{"id": "...", "title": "...", "created_at": "..."}]}
```

`notebooklm source list --json`:

```
{"sources": [{"id": "...", "title": "...", "status": "ready|processing|error"}]}
```

`notebooklm artifact list --json`:

```
{"artifacts": [{"id": "...", "title": "...", "type": "Audio Overview", "status": "in_progress|pending|completed|unknown"}]}
```
Status values:
- Sources: `processing` → `ready` (or `error`)
- Artifacts: `pending` or `in_progress` → `completed` (or `unknown`)
Error Handling
On failure, offer the user a choice:
- Retry the operation
- Skip and continue with something else
- Investigate the error
Error decision tree:
| Error | Cause | Action |
|---|---|---|
| Auth/cookie error | Session expired | Run `notebooklm login` |
| "No notebook context" | Context not set | Use the `-n` or `--notebook` flag (parallel), or `notebooklm use <id>` (single-agent) |
| "No result found for RPC ID" | Rate limiting | Wait 5-10 min, retry |
| Generation fails repeatedly | Google rate limit | Wait and retry later |
| Download fails | Generation incomplete | Check `notebooklm artifact list` for status |
| Invalid notebook/source ID | Wrong ID | Run `notebooklm list` to verify |
| RPC protocol error | Google changed APIs | May need CLI update |
Exit Codes
All commands use consistent exit codes:
| Code | Meaning | Action |
|---|---|---|
| 0 | Success | Continue |
| 1 | Error (not found, processing failed) | Check stderr, see Error Handling |
| 2 | Timeout (wait commands only) | Extend timeout or check status manually |
Examples:
- `source wait` returns 1 if the source is not found or processing failed
- `artifact wait` returns 2 if the timeout is reached before completion
- `generate` returns 1 if rate limited (check stderr for details)
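The exit-code table maps directly to a dispatch helper; this is a sketch with illustrative names and the action strings taken from the table above:

```python
def next_action(exit_code: int) -> str:
    """Map a notebooklm exit code to the suggested follow-up action."""
    if exit_code == 0:
        return "continue"
    if exit_code == 2:
        return "extend timeout or check status manually"
    return "check stderr; see error handling"
```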
Known Limitations
Rate limiting: Audio, video, quiz, flashcards, infographic, and slides generation may fail due to Google's rate limits. This is an API limitation, not a bug.
Reliable operations: These always work:
- Notebooks (list, create, delete, rename)
- Sources (add, list, delete)
- Chat/queries
- Mind-map, study-guide, FAQ, data-table generation
Unreliable operations: These may fail with rate limiting:
- Audio (podcast) generation
- Video generation
- Quiz and flashcard generation
- Infographic and slides generation
Workaround: If generation fails:
- Check status:
notebooklm artifact list - Retry after 5-10 minutes
- Use the NotebookLM web UI as fallback
Processing times vary significantly. Use the subagent pattern for long operations:
| Operation | Typical time | Suggested timeout |
|---|---|---|
| Source processing | 30s - 10 min | 600s |
| Research (fast) | 30s - 2 min | 180s |
| Research (deep) | 15 - 30+ min | 1800s |
| Notes | instant | n/a |
| Mind-map | instant (sync) | n/a |
| Quiz, flashcards | 5 - 15 min | 900s |
| Report, data-table | 5 - 15 min | 900s |
| Audio generation | 10 - 20 min | 1200s |
| Video generation | 15 - 45 min | 2700s |
Polling intervals: When checking status manually, poll every 15-30 seconds to avoid excessive API calls.
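A manual polling loop honoring those intervals might look like the sketch below. Here `check` would run something like `notebooklm artifact list --json` and return the artifact once its status is `completed`; `sleep` and `clock` are injectable (hypothetical design, chosen so the loop is testable without real waiting):

```python
import time

def poll_until(check, interval: float = 20.0, timeout: float = 900.0,
               sleep=time.sleep, clock=time.monotonic):
    """Call check() every `interval` seconds until it returns a truthy value.

    Raises TimeoutError once `timeout` seconds have elapsed without success.
    """
    deadline = clock() + timeout
    while True:
        result = check()
        if result:
            return result
        if clock() >= deadline:
            raise TimeoutError("operation did not complete within timeout")
        sleep(interval)
```

The 15-30 second default interval keeps API calls well below one per artifact per minute.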
Troubleshooting
```
notebooklm --help           # Main commands
notebooklm notebook --help  # Notebook management
notebooklm source --help    # Source management
notebooklm research --help  # Research status/wait
notebooklm generate --help  # Content generation
notebooklm artifact --help  # Artifact management
notebooklm download --help  # Download content
```
Re-authenticate: `notebooklm login`
Check version: `notebooklm --version`
Update skill: `notebooklm skill install`