notebooklm

Complete API for Google NotebookLM - full programmatic access including features not in the web UI. Create notebooks, add sources, generate all artifact types, download in multiple formats. Activates on explicit /notebooklm or intent like "create a podcast about X"

```shell
git clone https://github.com/teng-lin/notebooklm-py
git clone --depth=1 https://github.com/teng-lin/notebooklm-py ~/.claude/skills/teng-lin-notebooklm-py-notebooklm
```

NotebookLM Automation
Complete programmatic access to Google NotebookLM—including capabilities not exposed in the web UI. Create notebooks, add sources (URLs, YouTube, PDFs, audio, video, images), chat with content, generate all artifact types, and download results in multiple formats.
Installation
From PyPI (Recommended):
pip install notebooklm-py
From GitHub (use latest release tag, NOT main branch):
```shell
# Get the latest release tag (using curl)
LATEST_TAG=$(curl -s https://api.github.com/repos/teng-lin/notebooklm-py/releases/latest | grep '"tag_name"' | cut -d'"' -f4)
pip install "git+https://github.com/teng-lin/notebooklm-py@${LATEST_TAG}"
```
⚠️ DO NOT install from the main branch (`pip install git+https://github.com/teng-lin/notebooklm-py`). The main branch may contain unreleased/unstable changes. Always use PyPI or a specific release tag, unless you are testing unreleased features.
Skill install methods:
- `notebooklm skill install` installs this skill into the supported local agent directories managed by the CLI.
- `npx skills add teng-lin/notebooklm-py` installs this skill from the GitHub repository into compatible agent skill directories.
- If you are already reading this file inside an agent skill directory, the skill is already installed. You only need the Python package and authentication below.

CLI-managed install:

```shell
notebooklm skill install
```
Prerequisites
IMPORTANT: Before using any command, you MUST authenticate:
```shell
notebooklm login   # Opens browser for Google OAuth
notebooklm list    # Verify authentication works
```
If commands fail with authentication errors, re-run `notebooklm login`.
CI/CD, Multiple Accounts, and Parallel Agents
For automated environments, multiple accounts, or parallel agent workflows:
| Variable | Purpose |
|---|---|
| `NOTEBOOKLM_HOME` | Custom config directory (default: `~/.notebooklm`) |
| `NOTEBOOKLM_PROFILE` | Active profile name |
| `NOTEBOOKLM_AUTH_JSON` | Inline auth JSON - no file writes needed |
CI/CD setup: Set `NOTEBOOKLM_AUTH_JSON` from a secret containing your storage_state.json contents.
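A minimal CI sketch, assuming your CI system exposes the secret as an environment variable (the `CI_SECRET_STORAGE_STATE` name below is illustrative, not part of the CLI):

```shell
# Illustrative CI step: supply auth inline instead of running `notebooklm login`.
# CI_SECRET_STORAGE_STATE is a placeholder for a secret injected by your CI system;
# it must contain the full contents of storage_state.json.
CI_SECRET_STORAGE_STATE='{"cookies": [{"name": "SID", "value": "example"}]}'
export NOTEBOOKLM_AUTH_JSON="$CI_SECRET_STORAGE_STATE"

# Sanity-check that the secret is valid JSON before running any CLI commands.
printf '%s' "$NOTEBOOKLM_AUTH_JSON" | python3 -c 'import json, sys; json.load(sys.stdin)' \
  && echo "auth JSON looks valid"
```

With the variable exported, subsequent `notebooklm` commands in the job can authenticate without any file writes.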
Multiple accounts: Use named profiles (`notebooklm profile create work`, then `notebooklm -p work login`). Alternatively, use a different `NOTEBOOKLM_HOME` directory per account.
Parallel agents: The CLI stores notebook context in a shared file (`~/.notebooklm/context.json`). Multiple concurrent agents using `notebooklm use` can overwrite each other's context.
Solutions for parallel workflows:
- Always use explicit notebook ID (recommended): Pass `-n <notebook_id>` (for `wait` and `download` commands) or `--notebook <notebook_id>` (for others) instead of relying on `use`
- Per-agent isolation via profiles: `export NOTEBOOKLM_PROFILE=agent-$ID` (each profile gets its own context file)
- Per-agent isolation via home: Set a unique `NOTEBOOKLM_HOME` per agent: `export NOTEBOOKLM_HOME=/tmp/agent-$ID`
- Use full UUIDs: Avoid partial IDs in automation (they can become ambiguous)
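The isolation options above can be sketched as follows; the `/tmp/agent-$ID` path and the `ID` value are illustrative:

```shell
# Illustrative per-agent isolation: give each agent its own config home so
# concurrent agents never share ~/.notebooklm/context.json.
ID=7                                     # placeholder agent identifier
export NOTEBOOKLM_HOME="/tmp/agent-$ID"  # this agent's private config directory
mkdir -p "$NOTEBOOKLM_HOME"

# Every notebooklm invocation in this shell now uses the isolated home;
# other agents set their own ID and never collide.
echo "config home: $NOTEBOOKLM_HOME"
```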
Agent Setup Verification
Before starting workflows, verify the CLI is ready:
- `notebooklm status` → should show "Authenticated as: email@..."
- `notebooklm list --json` → should return valid JSON (even if the notebooks list is empty)
- If either fails → run `notebooklm login`
When This Skill Activates
Explicit: User says "/notebooklm", "use notebooklm", or mentions the tool by name
Intent detection: Recognize requests like:
- "Create a podcast about [topic]"
- "Summarize these URLs/documents"
- "Generate a quiz from my research"
- "Turn this into an audio overview"
- "Create flashcards for studying"
- "Generate a video explainer"
- "Make an infographic"
- "Create a mind map of the concepts"
- "Download the quiz as markdown"
- "Add these sources to NotebookLM"
Autonomy Rules
Run automatically (no confirmation):
- `notebooklm status` - check context
- `notebooklm auth check` - diagnose auth issues
- `notebooklm list` - list notebooks
- `notebooklm source list` - list sources
- `notebooklm artifact list` - list artifacts
- `notebooklm language list` - list supported languages
- `notebooklm language get` - get current language
- `notebooklm language set` - set language (global setting)
- `notebooklm artifact wait` - wait for artifact completion (in subagent context)
- `notebooklm source wait` - wait for source processing (in subagent context)
- `notebooklm research status` - check research status
- `notebooklm research wait` - wait for research (in subagent context)
- `notebooklm use <id>` - set context (⚠️ SINGLE-AGENT ONLY - use the `-n` flag in parallel workflows)
- `notebooklm create` - create notebook
- `notebooklm ask "..."` - chat queries (without `--save-as-note`)
- `notebooklm history` - display conversation history (read-only)
- `notebooklm source add` - add sources
- `notebooklm profile list` - list profiles
- `notebooklm profile create` - create profile
- `notebooklm profile switch` - switch active profile
- `notebooklm doctor` - check environment health
Ask before running:
- `notebooklm delete` - destructive
- `notebooklm generate *` - long-running, may fail
- `notebooklm download *` - writes to filesystem
- `notebooklm artifact wait` - long-running (when in main conversation)
- `notebooklm source wait` - long-running (when in main conversation)
- `notebooklm research wait` - long-running (when in main conversation)
- `notebooklm ask "..." --save-as-note` - writes a note
- `notebooklm history --save` - writes a note
Quick Reference
| Task | Command |
|---|---|
| Authenticate | `notebooklm login` |
| Diagnose auth issues | `notebooklm auth check` |
| Diagnose auth (full) | `notebooklm auth check --test` |
| List notebooks | `notebooklm list` |
| Create notebook | `notebooklm create "Title"` |
| Set context | `notebooklm use <notebook_id>` |
| Show context | `notebooklm status` |
| Add URL source | `notebooklm source add "https://example.com"` |
| Add file | `notebooklm source add ./doc.pdf` |
| Add YouTube | `notebooklm source add "<youtube-url>"` |
| List sources | `notebooklm source list` |
| Delete source by ID | `notebooklm source delete <source_id>` |
| Delete source by exact title | `notebooklm source delete-by-title "Title"` |
| Wait for source processing | `notebooklm source wait <source_id>` |
| Web research (fast) | `notebooklm source add-research "query" --mode fast` |
| Web research (deep) | `notebooklm source add-research "query" --mode deep` |
| Check research status | `notebooklm research status` |
| Wait for research | `notebooklm research wait` |
| Chat | `notebooklm ask "question"` |
| Chat (specific sources) | |
| Chat (with references) | `notebooklm ask "question" --json` |
| Chat (save answer as note) | `notebooklm ask "question" --save-as-note` |
| Chat (save with title) | |
| Show conversation history | `notebooklm history` |
| Save all history as note | `notebooklm history --save` |
| Continue specific conversation | `notebooklm ask "question" -c <conversation_id>` |
| Save history with title | |
| Get source fulltext | `notebooklm source fulltext <source_id>` |
| Get source guide | |
| Generate podcast | `notebooklm generate audio "prompt"` |
| Generate podcast (JSON) | `notebooklm generate audio "prompt" --json` |
| Generate podcast (specific sources) | `notebooklm generate audio "prompt" -s <source_id>` |
| Generate video | `notebooklm generate video` |
| Generate report | |
| Generate report (append instructions) | |
| Generate quiz | |
| Revise a slide | |
| Check artifact status | `notebooklm artifact list` |
| Wait for completion | `notebooklm artifact wait <artifact_id>` |
| Download audio | `notebooklm download audio ./podcast.mp3` |
| Download video | |
| Download slide deck (PDF) | |
| Download slide deck (PPTX) | |
| Download report | |
| Download mind map | |
| Download data table | |
| Download quiz | |
| Download quiz (markdown) | |
| Download flashcards | |
| Download flashcards (markdown) | |
| Delete notebook | `notebooklm delete <notebook_id>` |
| List languages | `notebooklm language list` |
| Get language | `notebooklm language get` |
| Set language | `notebooklm language set <code>` |
| List profiles | `notebooklm profile list` |
| Create profile | `notebooklm profile create <name>` |
| Switch profile | `notebooklm profile switch <name>` |
| Delete profile | |
| Rename profile | |
| Use profile (one-off) | `notebooklm -p <name> <command>` |
| Health check | `notebooklm doctor` |
| Health check (auto-fix) | |
Parallel safety: Use explicit notebook IDs in parallel workflows. Commands supporting
-n shorthand: artifact wait, source wait, research wait/status, download *. Download commands also support -a/--artifact. Other commands use --notebook. For chat, use -c <conversation_id> to target a specific conversation.
Partial IDs: Use first 6+ characters of UUIDs. Must be unique prefix (fails if ambiguous). Works for ID-based commands such as
use, source delete, and wait. For exact source-title deletion, use source delete-by-title "Title". For automation, prefer full UUIDs to avoid ambiguity.
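A small sketch of why short prefixes fail in automation; the UUIDs below are made up:

```shell
# Two hypothetical notebook UUIDs that happen to share a 6-character prefix.
ids='abc123de-1111-2222-3333-444444444444
abc123ff-5555-6666-7777-888888888888'

# A partial ID like "abc123" matches both, so the CLI would reject it as ambiguous.
matches=$(printf '%s\n' "$ids" | grep -c '^abc123')
echo "prefix abc123 matches $matches IDs"
```

Passing the full UUID sidesteps the problem entirely, which is why automation should never rely on prefixes.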
Command Output Formats
Commands with
--json return structured data for parsing:
Create notebook:
```shell
$ notebooklm create "Research" --json
{"id": "abc123de-...", "title": "Research"}
```
Add source:
```shell
$ notebooklm source add "https://example.com" --json
{"source_id": "def456...", "title": "Example", "status": "processing"}
```
Generate artifact:
```shell
$ notebooklm generate audio "Focus on key points" --json
{"task_id": "xyz789...", "status": "pending"}
```
Chat with references:
```shell
$ notebooklm ask "What is X?" --json
{"answer": "X is... [1] [2]", "conversation_id": "...", "turn_number": 1, "is_follow_up": false, "references": [{"source_id": "abc123...", "citation_number": 1, "cited_text": "Relevant passage from source..."}, {"source_id": "def456...", "citation_number": 2, "cited_text": "Another passage..."}]}
```
Source fulltext (get indexed content):
```shell
$ notebooklm source fulltext <source_id> --json
{"source_id": "...", "title": "...", "char_count": 12345, "content": "Full indexed text..."}
```
Understanding citations: The `cited_text` in references is often a snippet or section header, not the full quoted passage. The `start_char`/`end_char` positions reference NotebookLM's internal chunked index, not the raw fulltext. Use `SourceFulltext.find_citation_context()` to locate citations:

```python
fulltext = await client.sources.get_fulltext(notebook_id, ref.source_id)
matches = fulltext.find_citation_context(ref.cited_text)  # Returns list[(context, position)]
if matches:
    context, pos = matches[0]  # First match; check len(matches) > 1 for duplicates
```
Extract IDs: Parse the `id`, `source_id`, or `task_id` field from the JSON output.
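One way to extract those fields in a shell pipeline, sketched against a captured sample payload; in practice you would pipe the real `--json` output, and `python3` is assumed to be available:

```shell
# Sample payload in the shape shown above; in practice substitute real output, e.g.
#   notebooklm create "Research" --json
payload='{"id": "abc123de-0000", "title": "Research"}'

# Pull out the "id" field with stdlib-only JSON parsing.
notebook_id=$(printf '%s' "$payload" | python3 -c 'import json, sys; print(json.load(sys.stdin)["id"])')
echo "$notebook_id"
```

The same pipeline works for `source_id` and `task_id` by changing the key.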
Generation Types
All generate commands support:
- `-s, --source` to use specific source(s) instead of all sources
- `--language` to set the output language (defaults to the configured language or 'en')
- `--json` for machine-readable output (returns `task_id` and `status`)
- `--retry N` to automatically retry on rate limits with exponential backoff
| Type | Command | Options | Download |
|---|---|---|---|
| Podcast | `generate audio` | | .mp3 |
| Video | `generate video` | | .mp4 |
| Slide Deck | | | .pdf / .pptx |
| Slide Revision | | | (re-downloads parent deck) |
| Infographic | | | .png |
| Report | | | .md |
| Mind Map | | (sync, instant) | .json |
| Data Table | | description required | .csv |
| Quiz | | | .json/.md/.html |
| Flashcards | | | .json/.md/.html |
Features Beyond the Web UI
These capabilities are available via CLI but not in NotebookLM's web interface:
| Feature | Command | Description |
|---|---|---|
| Batch downloads | | Download all artifacts of a type at once |
| Quiz/Flashcard export | | Export as JSON, Markdown, or HTML (web UI only shows interactive view) |
| Mind map extraction | | Export hierarchical JSON for visualization tools |
| Data table export | | Download structured tables as CSV |
| Slide deck as PPTX | | Download slide deck as editable .pptx (web UI only offers PDF) |
| Slide revision | | Modify individual slides with a natural-language prompt |
| Report template append | | Append custom instructions to built-in format templates without losing the format type |
| Source fulltext | `source fulltext` | Retrieve the indexed text content of any source |
| Save chat to note | `ask --save-as-note` / `history --save` | Save Q&A answers or conversation history as notebook notes |
| Programmatic sharing | sharing commands | Manage sharing permissions without the UI |
Common Workflows
Research to Podcast (Interactive)
Time: 5-10 minutes total
1. `notebooklm create "Research: [topic]"` - if it fails: check auth with `notebooklm login`
2. `notebooklm source add` for each URL/document - if one fails: log a warning, continue with the others
3. Wait for sources: `notebooklm source list --json` until all status=READY - required before generation
4. `notebooklm generate audio "Focus on [specific angle]"` (confirm when asked) - if rate limited: wait 5 min, retry once
5. Note the artifact ID returned
6. Check `notebooklm artifact list` later for status
7. `notebooklm download audio ./podcast.mp3` when complete (confirm when asked)
Research to Podcast (Automated with Subagent)
Time: 5-10 minutes, but continues in background
When user wants full automation (generate and download when ready):
1. Create notebook and add sources as usual
2. Wait for sources to be ready (use `source wait` or check `source list --json`)
3. Run `notebooklm generate audio "..." --json` → parse `artifact_id` from the output
4. Spawn a background agent using the Task tool:

```
Task(
    prompt="Wait for artifact {artifact_id} in notebook {notebook_id} to complete, then download. "
           "Use: notebooklm artifact wait {artifact_id} -n {notebook_id} --timeout 600 "
           "Then: notebooklm download audio ./podcast.mp3 -a {artifact_id} -n {notebook_id}",
    subagent_type="general-purpose",
)
```

5. Main conversation continues while the agent waits
Error handling in subagent:
- If `artifact wait` returns exit code 2 (timeout): report the timeout and suggest checking `artifact list`
- If a download fails: check that the artifact status is COMPLETED first
Benefits: Non-blocking, user can do other work, automatic download on completion
Document Analysis
Time: 1-2 minutes
1. `notebooklm create "Analysis: [project]"`
2. `notebooklm source add ./doc.pdf` (or URLs)
3. `notebooklm ask "Summarize the key points"`
4. `notebooklm ask "What are the main arguments?"`
5. Continue chatting as needed
Bulk Import
Time: Varies by source count
1. `notebooklm create "Collection: [name]"`
2. Add multiple sources:

```shell
notebooklm source add "https://url1.com"
notebooklm source add "https://url2.com"
notebooklm source add ./local-file.pdf
```

3. `notebooklm source list` to verify
Source limits: Varies by plan—Standard: 50, Plus: 100, Pro: 300, Ultra: 600 sources per notebook. See NotebookLM plans for details. The CLI does not enforce these limits; they are applied by your NotebookLM account. Supported types: PDFs, YouTube URLs, web URLs, Google Docs, text files, Markdown, Word docs, audio files, video files, images
Bulk Import with Source Waiting (Subagent Pattern)
Time: Varies by source count
When adding multiple sources and needing to wait for processing before chat/generation:
1. Add sources with `--json` to capture IDs:

```shell
notebooklm source add "https://url1.com" --json   # → {"source_id": "abc..."}
notebooklm source add "https://url2.com" --json   # → {"source_id": "def..."}
```

2. Spawn a background agent to wait for all sources:

```
Task(
    prompt="Wait for sources {source_ids} in notebook {notebook_id} to be ready. "
           "For each: notebooklm source wait {id} -n {notebook_id} --timeout 120 "
           "Report when all ready or if any fail.",
    subagent_type="general-purpose",
)
```

3. Main conversation continues while the agent waits
4. Once sources are ready, proceed with chat or generation
Why wait for sources? Sources must be indexed before chat or generation. Takes 10-60 seconds per source.
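The readiness check the waiting agent performs can be sketched like this; `sample_source_list` is a stand-in for `notebooklm source list --json` so the logic is self-contained, and you would swap in the real call:

```shell
# Stand-in for `notebooklm source list --json` (payload shape from this doc's schema).
sample_source_list() {
  printf '%s' '{"sources": [{"id": "abc", "status": "ready"}, {"id": "def", "status": "processing"}]}'
}

# Count sources that are not yet ready; generation should wait while this is > 0.
pending=$(sample_source_list | python3 -c '
import json, sys
sources = json.load(sys.stdin)["sources"]
print(sum(1 for s in sources if s["status"] != "ready"))
')
echo "pending sources: $pending"
```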
Deep Web Research (Subagent Pattern)
Time: 2-5 minutes, runs in background
Deep research finds and analyzes web sources on a topic:
1. Create notebook: `notebooklm create "Research: [topic]"`
2. Start deep research (non-blocking): `notebooklm source add-research "topic query" --mode deep --no-wait`
3. Spawn a background agent to wait and import:

```
Task(
    prompt="Wait for research in notebook {notebook_id} to complete and import sources. "
           "Use: notebooklm research wait -n {notebook_id} --import-all --timeout 300 "
           "Report how many sources were imported.",
    subagent_type="general-purpose",
)
```

4. Main conversation continues while the agent waits
5. When the agent completes, sources are imported automatically
Alternative (blocking): For simple cases, omit `--no-wait`:

```shell
notebooklm source add-research "topic" --mode deep --import-all   # Blocks for up to 5 minutes
```
When to use each mode:
- `--mode fast`: Specific topic, quick overview needed (5-10 sources, seconds)
- `--mode deep`: Broad topic, comprehensive analysis needed (20+ sources, 2-5 min)

Research sources:
- `--from web`: Search the web (default)
- `--from drive`: Search Google Drive
Output Style
Progress updates: Brief status for each step
- "Creating notebook 'Research: AI'..."
- "Adding source: https://example.com..."
- "Starting audio generation... (task ID: abc123)"
Fire-and-forget for long operations:
- Start generation, return the artifact ID immediately
- Do NOT poll or wait in the main conversation - generation takes 5-45 minutes (see timing table)
- User checks status manually, OR use a subagent with `artifact wait`
JSON output: Use the `--json` flag for machine-readable output:

```shell
notebooklm list --json
notebooklm auth check --json
notebooklm source list --json
notebooklm artifact list --json
```
JSON schemas (key fields):
`notebooklm list --json`:

```json
{"notebooks": [{"id": "...", "title": "...", "created_at": "..."}]}
```

`notebooklm auth check --json`:

```json
{"checks": {"storage_exists": true, "json_valid": true, "cookies_present": true, "sid_cookie": true, "token_fetch": true}, "details": {"storage_path": "...", "auth_source": "file", "cookies_found": ["SID", "HSID", "..."], "cookie_domains": [".google.com"]}}
```

`notebooklm source list --json`:

```json
{"sources": [{"id": "...", "title": "...", "status": "ready|processing|error"}]}
```

`notebooklm artifact list --json`:

```json
{"artifacts": [{"id": "...", "title": "...", "type": "Audio Overview", "status": "in_progress|pending|completed|unknown"}]}
```
Status values:
- Sources: `processing` → `ready` (or `error`)
- Artifacts: `pending` or `in_progress` → `completed` (or `unknown`)
Error Handling
On failure, offer the user a choice:
- Retry the operation
- Skip and continue with something else
- Investigate the error
Error decision tree:
| Error | Cause | Action |
|---|---|---|
| Auth/cookie error | Session expired | Run `notebooklm login`, then retry |
| "No notebook context" | Context not set | Use the `-n` or `--notebook` flag (parallel), or `notebooklm use <id>` (single-agent) |
| "No result found for RPC ID" | Rate limiting | Wait 5-10 min, retry |
| | Google rate limit | Wait and retry later |
| Download fails | Generation incomplete | Check `notebooklm artifact list` for status |
| Invalid notebook/source ID | Wrong ID | Run `notebooklm list` or `notebooklm source list` to verify |
| RPC protocol error | Google changed APIs | May need CLI update |
Exit Codes
All commands use consistent exit codes:
| Code | Meaning | Action |
|---|---|---|
| 0 | Success | Continue |
| 1 | Error (not found, processing failed) | Check stderr, see Error Handling |
| 2 | Timeout (wait commands only) | Extend timeout or check status manually |
Examples:
- `source wait` returns 1 if the source is not found or processing failed
- `artifact wait` returns 2 if the timeout is reached before completion
- `generate` returns 1 if rate limited (check stderr for details)
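A sketch of branching on these exit codes in a wrapper script; `fake_wait` is a stand-in for a real `notebooklm artifact wait <id>` call and simulates a timeout:

```shell
# Stand-in for `notebooklm artifact wait <id>`; here it simulates a timeout (exit 2).
fake_wait() { return 2; }

code=0
fake_wait || code=$?   # capture the exit code without aborting under `set -e`
case "$code" in
  0) result="completed" ;;
  1) result="failed - check stderr" ;;
  2) result="timed out - extend --timeout or check artifact list" ;;
esac
echo "$result"
```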
Known Limitations
Rate limiting: Audio, video, quiz, flashcards, infographic, and slide deck generation may fail due to Google's rate limits. This is an API limitation, not a bug.
Reliable operations: These always work:
- Notebooks (list, create, delete, rename)
- Sources (add, list, delete)
- Chat/queries
- Mind-map, study-guide, report, data-table generation
Unreliable operations: These may fail with rate limiting:
- Audio (podcast) generation
- Video generation
- Quiz and flashcard generation
- Infographic and slide deck generation
Workaround: If generation fails:
1. Check status: `notebooklm artifact list`
2. Retry after 5-10 minutes
3. Use the NotebookLM web UI as a fallback
Processing times vary significantly. Use the subagent pattern for long operations:
| Operation | Typical time | Suggested timeout |
|---|---|---|
| Source processing | 30s - 10 min | 600s |
| Research (fast) | 30s - 2 min | 180s |
| Research (deep) | 15 - 30+ min | 1800s |
| Notes | instant | n/a |
| Mind-map | instant (sync) | n/a |
| Quiz, flashcards | 5 - 15 min | 900s |
| Report, data-table | 5 - 15 min | 900s |
| Audio generation | 10 - 20 min | 1200s |
| Video generation | 15 - 45 min | 2700s |
Polling intervals: When checking status manually, poll every 15-30 seconds to avoid excessive API calls.
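A manual polling loop at that interval might look like the sketch below; `check_status` is a stub for the real status query (`notebooklm artifact list --json` plus parsing) so the loop logic is self-contained:

```shell
# Stand-in for "query the artifact status"; returns completed immediately here.
check_status() { printf 'completed'; }

interval=15      # seconds between polls (15-30s recommended)
timeout=1200     # give up after this many seconds (see the timing table)
elapsed=0
status=unknown
while [ "$elapsed" -lt "$timeout" ]; do
  status=$(check_status)
  [ "$status" = "completed" ] && break
  sleep "$interval"
  elapsed=$((elapsed + interval))
done
echo "final status: $status"
```

In practice, prefer `notebooklm artifact wait` over hand-rolled loops; this only illustrates the manual-polling path.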
Language Configuration
Language setting controls the output language for generated artifacts (audio, video, etc.).
Important: Language is a GLOBAL setting that affects all notebooks in your account.
```shell
# List all 80+ supported languages with native names
notebooklm language list

# Show current language setting
notebooklm language get

# Set language for artifact generation
notebooklm language set zh_Hans   # Simplified Chinese
notebooklm language set ja        # Japanese
notebooklm language set en        # English (default)
```
Common language codes:
| Code | Language |
|---|---|
| `en` | English |
| `zh_Hans` | 中文(简体) - Simplified Chinese |
| `zh_Hant` | 中文(繁體) - Traditional Chinese |
| `ja` | 日本語 - Japanese |
| `ko` | 한국어 - Korean |
| `es` | Español - Spanish |
| `fr` | Français - French |
| `de` | Deutsch - German |
| `pt_BR` | Português (Brasil) |
Override per command: Use
--language flag on generate commands:
```shell
notebooklm generate audio --language ja        # Japanese podcast
notebooklm generate video --language zh_Hans   # Chinese video
```
Offline mode: Use
--local flag to skip server sync:
```shell
notebooklm language set zh_Hans --local   # Save locally only
notebooklm language get --local           # Read local config only
```
Troubleshooting
```shell
notebooklm --help             # Main commands
notebooklm auth check         # Diagnose auth issues
notebooklm auth check --test  # Full auth validation with network test
notebooklm notebook --help    # Notebook management
notebooklm source --help      # Source management
notebooklm research --help    # Research status/wait
notebooklm generate --help    # Content generation
notebooklm artifact --help    # Artifact management
notebooklm download --help    # Download content
notebooklm language --help    # Language settings
```
- Diagnose auth: `notebooklm auth check` - shows cookie domains, storage path, validation status
- Re-authenticate: `notebooklm login`
- Check version: `notebooklm --version`
- Refresh a CLI-managed install: `notebooklm skill install`