notebooklm

Complete API for Google NotebookLM - full programmatic access, including features not in the web UI. Create notebooks, add sources, generate all artifact types, and download in multiple formats. Activates on an explicit /notebooklm or on intent like "create a podcast about X".

install
source · Clone the upstream repo
git clone https://github.com/teng-lin/notebooklm-py
Claude Code · Install into ~/.claude/skills/
git clone --depth=1 https://github.com/teng-lin/notebooklm-py ~/.claude/skills/teng-lin-notebooklm-py-notebooklm
manifest: SKILL.md
source content

NotebookLM Automation

Complete programmatic access to Google NotebookLM—including capabilities not exposed in the web UI. Create notebooks, add sources (URLs, YouTube, PDFs, audio, video, images), chat with content, generate all artifact types, and download results in multiple formats.

Installation

From PyPI (Recommended):

pip install notebooklm-py

From GitHub (use latest release tag, NOT main branch):

# Get the latest release tag (using curl)
LATEST_TAG=$(curl -s https://api.github.com/repos/teng-lin/notebooklm-py/releases/latest | grep '"tag_name"' | cut -d'"' -f4)
pip install "git+https://github.com/teng-lin/notebooklm-py@${LATEST_TAG}"

⚠️ DO NOT install from the main branch (`pip install git+https://github.com/teng-lin/notebooklm-py`). The main branch may contain unreleased/unstable changes. Always use PyPI or a specific release tag, unless you are testing unreleased features.

Skill install methods:

  • `notebooklm skill install` - installs this skill into the supported local agent directories managed by the CLI.
  • `npx skills add teng-lin/notebooklm-py` - installs this skill from the GitHub repository into compatible agent skill directories.
  • If you are already reading this file inside an agent skill directory, the skill is already installed. You only need the Python package and authentication below.

CLI-managed install:

notebooklm skill install

Prerequisites

IMPORTANT: Before using any command, you MUST authenticate:

notebooklm login          # Opens browser for Google OAuth
notebooklm list           # Verify authentication works

If commands fail with authentication errors, re-run `notebooklm login`.

CI/CD, Multiple Accounts, and Parallel Agents

For automated environments, multiple accounts, or parallel agent workflows:

| Variable | Purpose |
|---|---|
| `NOTEBOOKLM_HOME` | Custom config directory (default: `~/.notebooklm`) |
| `NOTEBOOKLM_PROFILE` | Active profile name (default: `default`) |
| `NOTEBOOKLM_AUTH_JSON` | Inline auth JSON; no file writes needed |

CI/CD setup: Set `NOTEBOOKLM_AUTH_JSON` from a secret containing your `storage_state.json` contents.

Multiple accounts: Use named profiles (`notebooklm profile create work`, then `notebooklm -p work login`). Alternatively, use a different `NOTEBOOKLM_HOME` directory per account.

Parallel agents: The CLI stores notebook context in a shared file (`~/.notebooklm/context.json`). Multiple concurrent agents using `notebooklm use` can overwrite each other's context.

Solutions for parallel workflows:

  1. Always use an explicit notebook ID (recommended): pass `-n <notebook_id>` (for `wait`/`download` commands) or `--notebook <notebook_id>` (for others) instead of relying on `use`
  2. Per-agent isolation via profiles: `export NOTEBOOKLM_PROFILE=agent-$ID` (each profile gets its own context file)
  3. Per-agent isolation via home: set a unique `NOTEBOOKLM_HOME` per agent: `export NOTEBOOKLM_HOME=/tmp/agent-$ID`
  4. Use full UUIDs: avoid partial IDs in automation (they can become ambiguous)
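When a coordinator process launches parallel agents, option 3 above can be applied from Python. This is a minimal sketch, assuming `isolated_env` is our own helper (not part of the CLI) and that each agent shells out to `notebooklm` under its own environment:

```python
import os

def isolated_env(agent_id: str, base: str = "/tmp") -> dict:
    """Give one agent its own NOTEBOOKLM_HOME so concurrent agents
    never overwrite each other's context.json."""
    env = dict(os.environ)
    env["NOTEBOOKLM_HOME"] = f"{base}/agent-{agent_id}"
    return env

# Each agent then runs CLI commands under its own environment, e.g.:
# subprocess.run(["notebooklm", "list", "--json"], env=isolated_env("7"))
```

The same shape works for option 2 by setting `NOTEBOOKLM_PROFILE` instead of `NOTEBOOKLM_HOME`.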

Agent Setup Verification

Before starting workflows, verify the CLI is ready:

  1. `notebooklm status` → should show "Authenticated as: email@..."
  2. `notebooklm list --json` → should return valid JSON (even if the notebooks list is empty)
  3. If either fails → run `notebooklm login`

When This Skill Activates

Explicit: User says "/notebooklm", "use notebooklm", or mentions the tool by name

Intent detection: Recognize requests like:

  • "Create a podcast about [topic]"
  • "Summarize these URLs/documents"
  • "Generate a quiz from my research"
  • "Turn this into an audio overview"
  • "Create flashcards for studying"
  • "Generate a video explainer"
  • "Make an infographic"
  • "Create a mind map of the concepts"
  • "Download the quiz as markdown"
  • "Add these sources to NotebookLM"

Autonomy Rules

Run automatically (no confirmation):

  • `notebooklm status` - check context
  • `notebooklm auth check` - diagnose auth issues
  • `notebooklm list` - list notebooks
  • `notebooklm source list` - list sources
  • `notebooklm artifact list` - list artifacts
  • `notebooklm language list` - list supported languages
  • `notebooklm language get` - get current language
  • `notebooklm language set` - set language (global setting)
  • `notebooklm artifact wait` - wait for artifact completion (in subagent context)
  • `notebooklm source wait` - wait for source processing (in subagent context)
  • `notebooklm research status` - check research status
  • `notebooklm research wait` - wait for research (in subagent context)
  • `notebooklm use <id>` - set context (⚠️ SINGLE-AGENT ONLY; use the `-n` flag in parallel workflows)
  • `notebooklm create` - create notebook
  • `notebooklm ask "..."` - chat queries (without `--save-as-note`)
  • `notebooklm history` - display conversation history (read-only)
  • `notebooklm source add` - add sources
  • `notebooklm profile list` - list profiles
  • `notebooklm profile create` - create profile
  • `notebooklm profile switch` - switch active profile
  • `notebooklm doctor` - check environment health

Ask before running:

  • `notebooklm delete` - destructive
  • `notebooklm generate *` - long-running, may fail
  • `notebooklm download *` - writes to the filesystem
  • `notebooklm artifact wait` - long-running (when in the main conversation)
  • `notebooklm source wait` - long-running (when in the main conversation)
  • `notebooklm research wait` - long-running (when in the main conversation)
  • `notebooklm ask "..." --save-as-note` - writes a note
  • `notebooklm history --save` - writes a note

Quick Reference

| Task | Command |
|---|---|
| Authenticate | `notebooklm login` |
| Diagnose auth issues | `notebooklm auth check` |
| Diagnose auth (full) | `notebooklm auth check --test` |
| List notebooks | `notebooklm list` |
| Create notebook | `notebooklm create "Title"` |
| Set context | `notebooklm use <notebook_id>` |
| Show context | `notebooklm status` |
| Add URL source | `notebooklm source add "https://..."` |
| Add file | `notebooklm source add ./file.pdf` |
| Add YouTube | `notebooklm source add "https://youtube.com/..."` |
| List sources | `notebooklm source list` |
| Delete source by ID | `notebooklm source delete <source_id>` |
| Delete source by exact title | `notebooklm source delete-by-title "Exact Title"` |
| Wait for source processing | `notebooklm source wait <source_id>` |
| Web research (fast) | `notebooklm source add-research "query"` |
| Web research (deep) | `notebooklm source add-research "query" --mode deep --no-wait` |
| Check research status | `notebooklm research status` |
| Wait for research | `notebooklm research wait --import-all` |
| Chat | `notebooklm ask "question"` |
| Chat (specific sources) | `notebooklm ask "question" -s src_id1 -s src_id2` |
| Chat (with references) | `notebooklm ask "question" --json` |
| Chat (save answer as note) | `notebooklm ask "question" --save-as-note` |
| Chat (save with title) | `notebooklm ask "question" --save-as-note --note-title "Title"` |
| Show conversation history | `notebooklm history` |
| Save all history as note | `notebooklm history --save` |
| Continue specific conversation | `notebooklm ask "question" -c <conversation_id>` |
| Save history with title | `notebooklm history --save --note-title "My Research"` |
| Get source fulltext | `notebooklm source fulltext <source_id>` |
| Get source guide | `notebooklm source guide <source_id>` |
| Generate podcast | `notebooklm generate audio "instructions"` |
| Generate podcast (JSON) | `notebooklm generate audio --json` |
| Generate podcast (specific sources) | `notebooklm generate audio -s src_id1 -s src_id2` |
| Generate video | `notebooklm generate video "instructions"` |
| Generate report | `notebooklm generate report --format briefing-doc` |
| Generate report (append instructions) | `notebooklm generate report --format study-guide --append "Target audience: beginners"` |
| Generate quiz | `notebooklm generate quiz` |
| Revise a slide | `notebooklm generate revise-slide "prompt" --artifact <id> --slide 0` |
| Check artifact status | `notebooklm artifact list` |
| Wait for completion | `notebooklm artifact wait <artifact_id>` |
| Download audio | `notebooklm download audio ./output.mp3` |
| Download video | `notebooklm download video ./output.mp4` |
| Download slide deck (PDF) | `notebooklm download slide-deck ./slides.pdf` |
| Download slide deck (PPTX) | `notebooklm download slide-deck ./slides.pptx --format pptx` |
| Download report | `notebooklm download report ./report.md` |
| Download mind map | `notebooklm download mind-map ./map.json` |
| Download data table | `notebooklm download data-table ./data.csv` |
| Download quiz | `notebooklm download quiz quiz.json` |
| Download quiz (markdown) | `notebooklm download quiz --format markdown quiz.md` |
| Download flashcards | `notebooklm download flashcards cards.json` |
| Download flashcards (markdown) | `notebooklm download flashcards --format markdown cards.md` |
| Delete notebook | `notebooklm notebook delete <id>` |
| List languages | `notebooklm language list` |
| Get language | `notebooklm language get` |
| Set language | `notebooklm language set zh_Hans` |
| List profiles | `notebooklm profile list` |
| Create profile | `notebooklm profile create work` |
| Switch profile | `notebooklm profile switch work` |
| Delete profile | `notebooklm profile delete old` |
| Rename profile | `notebooklm profile rename old new` |
| Use profile (one-off) | `notebooklm -p work list` |
| Health check | `notebooklm doctor` |
| Health check (auto-fix) | `notebooklm doctor --fix` |

Parallel safety: Use explicit notebook IDs in parallel workflows. Commands supporting the `-n` shorthand: `artifact wait`, `source wait`, `research wait`/`research status`, `download *`. Download commands also support `-a/--artifact`. Other commands use `--notebook`. For chat, use `-c <conversation_id>` to target a specific conversation.

Partial IDs: Use the first 6+ characters of a UUID. The prefix must be unique (the command fails if it is ambiguous). This works for ID-based commands such as `use`, `source delete`, and `wait`. For exact source-title deletion, use `source delete-by-title "Title"`. In automation, prefer full UUIDs to avoid ambiguity.

Command Output Formats

Commands with `--json` return structured data for parsing:

Create notebook:

$ notebooklm create "Research" --json
{"id": "abc123de-...", "title": "Research"}

Add source:

$ notebooklm source add "https://example.com" --json
{"source_id": "def456...", "title": "Example", "status": "processing"}

Generate artifact:

$ notebooklm generate audio "Focus on key points" --json
{"task_id": "xyz789...", "status": "pending"}

Chat with references:

$ notebooklm ask "What is X?" --json
{"answer": "X is... [1] [2]", "conversation_id": "...", "turn_number": 1, "is_follow_up": false, "references": [{"source_id": "abc123...", "citation_number": 1, "cited_text": "Relevant passage from source..."}, {"source_id": "def456...", "citation_number": 2, "cited_text": "Another passage..."}]}

Source fulltext (get indexed content):

$ notebooklm source fulltext <source_id> --json
{"source_id": "...", "title": "...", "char_count": 12345, "content": "Full indexed text..."}

Understanding citations: The `cited_text` in references is often a snippet or section header, not the full quoted passage. The `start_char`/`end_char` positions reference NotebookLM's internal chunked index, not the raw fulltext. Use `SourceFulltext.find_citation_context()` to locate citations:

# Inside an async function, with an authenticated client and a chat reference `ref`
fulltext = await client.sources.get_fulltext(notebook_id, ref.source_id)
matches = fulltext.find_citation_context(ref.cited_text)  # Returns list[(context, position)]
if matches:
    context, pos = matches[0]  # First match; check len(matches) > 1 for duplicates

Extract IDs: Parse the `id`, `source_id`, or `task_id` field from the JSON output.
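A small wrapper makes this pattern reusable. This is an illustrative sketch (the `run_json` helper is not part of the CLI); it assumes only that commands run with `--json` print a single JSON object to stdout, as shown above:

```python
import json
import subprocess

def run_json(args: list[str]) -> dict:
    """Run a CLI command with --json appended and parse its stdout.
    Raises CalledProcessError on a nonzero exit code."""
    result = subprocess.run(args + ["--json"], capture_output=True,
                            text=True, check=True)
    return json.loads(result.stdout)

# With the documented output shapes this would look like:
# notebook_id = run_json(["notebooklm", "create", "Research"])["id"]
# source_id = run_json(["notebooklm", "source", "add", url])["source_id"]
```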

Generation Types

All generate commands support:

  • `-s, --source` to use specific source(s) instead of all sources
  • `--language` to set the output language (defaults to the configured language or 'en')
  • `--json` for machine-readable output (returns `task_id` and `status`)
  • `--retry N` to automatically retry on rate limits with exponential backoff
| Type | Command | Options | Download |
|---|---|---|---|
| Podcast | `generate audio` | `--format [deep-dive\|brief\|critique\|debate]`, `--length [short\|default\|long]` | .mp3 |
| Video | `generate video` | `--format [explainer\|brief]`, `--style [auto\|classic\|whiteboard\|kawaii\|anime\|watercolor\|retro-print\|heritage\|paper-craft]` | .mp4 |
| Slide Deck | `generate slide-deck` | `--format [detailed\|presenter]`, `--length [default\|short]` | .pdf / .pptx |
| Slide Revision | `generate revise-slide "prompt" --artifact <id> --slide N` | `--wait`, `--notebook` | (re-downloads parent deck) |
| Infographic | `generate infographic` | `--orientation [landscape\|portrait\|square]`, `--detail [concise\|standard\|detailed]`, `--style [auto\|sketch-note\|professional\|bento-grid\|editorial\|instructional\|bricks\|clay\|anime\|kawaii\|scientific]` | .png |
| Report | `generate report` | `--format [briefing-doc\|study-guide\|blog-post\|custom]`, `--append "extra instructions"` | .md |
| Mind Map | `generate mind-map` | (sync, instant) | .json |
| Data Table | `generate data-table` | description required | .csv |
| Quiz | `generate quiz` | `--difficulty [easy\|medium\|hard]`, `--quantity [fewer\|standard\|more]` | .json/.md/.html |
| Flashcards | `generate flashcards` | `--difficulty [easy\|medium\|hard]`, `--quantity [fewer\|standard\|more]` | .json/.md/.html |

Features Beyond the Web UI

These capabilities are available via CLI but not in NotebookLM's web interface:

| Feature | Command | Description |
|---|---|---|
| Batch downloads | `download <type> --all` | Download all artifacts of a type at once |
| Quiz/Flashcard export | `download quiz --format json` | Export as JSON, Markdown, or HTML (the web UI only shows an interactive view) |
| Mind map extraction | `download mind-map` | Export hierarchical JSON for visualization tools |
| Data table export | `download data-table` | Download structured tables as CSV |
| Slide deck as PPTX | `download slide-deck --format pptx` | Download the slide deck as an editable .pptx (the web UI only offers PDF) |
| Slide revision | `generate revise-slide "prompt" --artifact <id> --slide N` | Modify individual slides with a natural-language prompt |
| Report template append | `generate report --format study-guide --append "..."` | Append custom instructions to built-in format templates without losing the format type |
| Source fulltext | `source fulltext <id>` | Retrieve the indexed text content of any source |
| Save chat to note | `ask "..." --save-as-note` / `history --save` | Save Q&A answers or conversation history as notebook notes |
| Programmatic sharing | `share` commands | Manage sharing permissions without the UI |

Common Workflows

Research to Podcast (Interactive)

Time: 5-10 minutes total

  1. `notebooklm create "Research: [topic]"` - if it fails, check auth with `notebooklm login`
  2. `notebooklm source add` for each URL/document - if one fails, log a warning and continue with the others
  3. Wait for sources: poll `notebooklm source list --json` until all show status=READY - required before generation
  4. `notebooklm generate audio "Focus on [specific angle]"` (confirm when asked) - if rate limited, wait 5 min and retry once
  5. Note the artifact ID returned
  6. Check `notebooklm artifact list` later for status
  7. `notebooklm download audio ./podcast.mp3` when complete (confirm when asked)

Research to Podcast (Automated with Subagent)

Time: 5-10 minutes, but continues in background

When user wants full automation (generate and download when ready):

  1. Create notebook and add sources as usual
  2. Wait for sources to be ready (use `source wait` or check `source list --json`)
  3. Run `notebooklm generate audio "..." --json` → parse `artifact_id` from the output
  4. Spawn a background agent using Task tool:
    Task(
      prompt="Wait for artifact {artifact_id} in notebook {notebook_id} to complete, then download.
              Use: notebooklm artifact wait {artifact_id} -n {notebook_id} --timeout 600
              Then: notebooklm download audio ./podcast.mp3 -a {artifact_id} -n {notebook_id}",
      subagent_type="general-purpose"
    )
    
  5. Main conversation continues while agent waits

Error handling in subagent:

  • If `artifact wait` returns exit code 2 (timeout): report the timeout and suggest checking `artifact list`
  • If the download fails: check that the artifact status is COMPLETED first

Benefits: Non-blocking, user can do other work, automatic download on completion
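To keep the background-agent prompt consistent across runs, it can be generated from the IDs. A sketch, assuming `wait_and_download_prompt` is our own helper (not part of any tool) and reusing the 600 s timeout from the example above:

```python
def wait_and_download_prompt(artifact_id: str, notebook_id: str,
                             out_path: str = "./podcast.mp3",
                             timeout: int = 600) -> str:
    """Build the background-agent prompt for the wait-then-download pattern."""
    return (
        f"Wait for artifact {artifact_id} in notebook {notebook_id} "
        f"to complete, then download.\n"
        f"Use: notebooklm artifact wait {artifact_id} -n {notebook_id} "
        f"--timeout {timeout}\n"
        f"Then: notebooklm download audio {out_path} "
        f"-a {artifact_id} -n {notebook_id}"
    )
```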

Document Analysis

Time: 1-2 minutes

  1. notebooklm create "Analysis: [project]"
  2. notebooklm source add ./doc.pdf
    (or URLs)
  3. notebooklm ask "Summarize the key points"
  4. notebooklm ask "What are the main arguments?"
  5. Continue chatting as needed

Bulk Import

Time: Varies by source count

  1. notebooklm create "Collection: [name]"
  2. Add multiple sources:
    notebooklm source add "https://url1.com"
    notebooklm source add "https://url2.com"
    notebooklm source add ./local-file.pdf
    
  3. notebooklm source list
    to verify

Source limits: Varies by plan - Standard: 50, Plus: 100, Pro: 300, Ultra: 600 sources per notebook. See NotebookLM plans for details. The CLI does not enforce these limits; they are applied by your NotebookLM account. Supported types: PDFs, YouTube URLs, web URLs, Google Docs, text files, Markdown, Word docs, audio files, video files, and images.
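The log-and-continue behavior described above can be scripted. A sketch; `add_sources` and its `cmd` parameter are illustrative only (the parameter exists so the loop can be exercised without the real CLI):

```python
import subprocess

def add_sources(sources: list[str],
                cmd: tuple = ("notebooklm", "source", "add")) -> tuple[list, list]:
    """Add each source in turn; on an individual failure, record it
    and continue with the rest instead of aborting the whole import."""
    ok, failed = [], []
    for src in sources:
        result = subprocess.run([*cmd, src], capture_output=True, text=True)
        (ok if result.returncode == 0 else failed).append(src)
    return ok, failed
```

Afterwards, `notebooklm source list` confirms what actually landed in the notebook.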

Bulk Import with Source Waiting (Subagent Pattern)

Time: Varies by source count

When adding multiple sources and needing to wait for processing before chat/generation:

  1. Add sources with `--json` to capture IDs:
    notebooklm source add "https://url1.com" --json  # → {"source_id": "abc..."}
    notebooklm source add "https://url2.com" --json  # → {"source_id": "def..."}
    
  2. Spawn a background agent to wait for all sources:
    Task(
      prompt="Wait for sources {source_ids} in notebook {notebook_id} to be ready.
              For each: notebooklm source wait {id} -n {notebook_id} --timeout 120
              Report when all ready or if any fail.",
      subagent_type="general-purpose"
    )
    
  3. Main conversation continues while agent waits
  4. Once sources are ready, proceed with chat or generation

Why wait for sources? Sources must be indexed before chat or generation. Takes 10-60 seconds per source.

Deep Web Research (Subagent Pattern)

Time: 2-5 minutes, runs in background

Deep research finds and analyzes web sources on a topic:

  1. Create notebook:
    notebooklm create "Research: [topic]"
  2. Start deep research (non-blocking):
    notebooklm source add-research "topic query" --mode deep --no-wait
    
  3. Spawn a background agent to wait and import:
    Task(
      prompt="Wait for research in notebook {notebook_id} to complete and import sources.
              Use: notebooklm research wait -n {notebook_id} --import-all --timeout 300
              Report how many sources were imported.",
      subagent_type="general-purpose"
    )
    
  4. Main conversation continues while agent waits
  5. When agent completes, sources are imported automatically

Alternative (blocking): For simple cases, omit `--no-wait`:

notebooklm source add-research "topic" --mode deep --import-all
# Blocks for up to 5 minutes

When to use each mode:

  • `--mode fast`: specific topic, quick overview needed (5-10 sources, seconds)
  • `--mode deep`: broad topic, comprehensive analysis needed (20+ sources, 2-5 min)

Research sources:

  • `--from web`: search the web (default)
  • `--from drive`: search Google Drive

Output Style

Progress updates: Brief status for each step

  • "Creating notebook 'Research: AI'..."
  • "Adding source: https://example.com..."
  • "Starting audio generation... (task ID: abc123)"

Fire-and-forget for long operations:

  • Start generation, return artifact ID immediately
  • Do NOT poll or wait in main conversation - generation takes 5-45 minutes (see timing table)
  • User checks status manually, OR use a subagent with `artifact wait`

JSON output: Use the `--json` flag for machine-readable output:

notebooklm list --json
notebooklm auth check --json
notebooklm source list --json
notebooklm artifact list --json

JSON schemas (key fields):

`notebooklm list --json`:

{"notebooks": [{"id": "...", "title": "...", "created_at": "..."}]}

`notebooklm auth check --json`:

{"checks": {"storage_exists": true, "json_valid": true, "cookies_present": true, "sid_cookie": true, "token_fetch": true}, "details": {"storage_path": "...", "auth_source": "file", "cookies_found": ["SID", "HSID", "..."], "cookie_domains": [".google.com"]}}

`notebooklm source list --json`:

{"sources": [{"id": "...", "title": "...", "status": "ready|processing|error"}]}

`notebooklm artifact list --json`:

{"artifacts": [{"id": "...", "title": "...", "type": "Audio Overview", "status": "in_progress|pending|completed|unknown"}]}
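The `auth check` schema lends itself to a simple gate before workflows start. A sketch (the helper name and the "all checks must pass" policy are our own, derived from the keys listed above):

```python
REQUIRED_CHECKS = ("storage_exists", "json_valid", "cookies_present",
                   "sid_cookie", "token_fetch")

def needs_login(auth_report: dict) -> bool:
    """Return True if any auth check failed or is missing,
    meaning `notebooklm login` should be re-run."""
    checks = auth_report.get("checks", {})
    return not all(checks.get(key) for key in REQUIRED_CHECKS)
```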

Status values:

  • Sources: `processing` → `ready` (or `error`)
  • Artifacts: `pending` or `in_progress` → `completed` (or `unknown`)

Error Handling

On failure, offer the user a choice:

  1. Retry the operation
  2. Skip and continue with something else
  3. Investigate the error

Error decision tree:

| Error | Cause | Action |
|---|---|---|
| Auth/cookie error | Session expired | Run `notebooklm auth check`, then `notebooklm login` |
| "No notebook context" | Context not set | Use the `-n <id>` or `--notebook <id>` flag (parallel), or `notebooklm use <id>` (single-agent) |
| "No result found for RPC ID" | Rate limiting | Wait 5-10 min, retry |
| GENERATION_FAILED | Google rate limit | Wait and retry later |
| Download fails | Generation incomplete | Check `artifact list` for status |
| Invalid notebook/source ID | Wrong ID | Run `notebooklm list` to verify |
| RPC protocol error | Google changed APIs | May need a CLI update |

Exit Codes

All commands use consistent exit codes:

| Code | Meaning | Action |
|---|---|---|
| 0 | Success | Continue |
| 1 | Error (not found, processing failed) | Check stderr; see Error Handling |
| 2 | Timeout (`wait` commands only) | Extend the timeout or check status manually |

Examples:

  • `source wait` returns 1 if the source is not found or processing failed
  • `artifact wait` returns 2 if the timeout is reached before completion
  • `generate` returns 1 if rate limited (check stderr for details)
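The distinction between exit code 1 (hard error) and 2 (timeout) matters for automation: only timeouts are worth retrying blindly. A sketch; `wait_with_retry` and its injectable `runner` are our own (the parameter exists so the logic can be exercised without the real CLI):

```python
import subprocess

def wait_with_retry(args: list[str], retries: int = 1,
                    runner=subprocess.run) -> int:
    """Run a wait-style command; on exit code 2 (timeout), retry up to
    `retries` extra times. Codes 0 (success) and 1 (error) stop immediately."""
    code = 2
    for _ in range(retries + 1):
        code = runner(args).returncode
        if code != 2:
            return code
    return code
```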

Known Limitations

Rate limiting: Audio, video, quiz, flashcards, infographic, and slide deck generation may fail due to Google's rate limits. This is an API limitation, not a bug.

Reliable operations: These always work:

  • Notebooks (list, create, delete, rename)
  • Sources (add, list, delete)
  • Chat/queries
  • Mind-map, study-guide, report, data-table generation

Unreliable operations: These may fail with rate limiting:

  • Audio (podcast) generation
  • Video generation
  • Quiz and flashcard generation
  • Infographic and slide deck generation

Workaround: If generation fails:

  1. Check status:
    notebooklm artifact list
  2. Retry after 5-10 minutes
  3. Use the NotebookLM web UI as fallback

Processing times vary significantly. Use the subagent pattern for long operations:

| Operation | Typical time | Suggested timeout |
|---|---|---|
| Source processing | 30 s - 10 min | 600s |
| Research (fast) | 30 s - 2 min | 180s |
| Research (deep) | 15 - 30+ min | 1800s |
| Notes | instant | n/a |
| Mind-map | instant (sync) | n/a |
| Quiz, flashcards | 5 - 15 min | 900s |
| Report, data-table | 5 - 15 min | 900s |
| Audio generation | 10 - 20 min | 1200s |
| Video generation | 15 - 45 min | 2700s |

Polling intervals: When checking status manually, poll every 15-30 seconds to avoid excessive API calls.
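For manual polling, the interval and the per-operation timeout from the table combine into a simple loop. A sketch; `poll_until` is our own helper and `get_status` stands in for any status check (for example, parsing `artifact list --json`):

```python
import time

def poll_until(get_status, interval: float = 20.0, timeout: float = 900.0,
               terminal=("completed", "error")) -> str:
    """Call `get_status()` every `interval` seconds (15-30 s recommended)
    until it returns a terminal state or the time budget is spent."""
    deadline = time.monotonic() + timeout
    while True:
        status = get_status()
        if status in terminal:
            return status
        if time.monotonic() >= deadline:
            return "timeout"
        time.sleep(interval)
```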

Language Configuration

Language setting controls the output language for generated artifacts (audio, video, etc.).

Important: Language is a GLOBAL setting that affects all notebooks in your account.

# List all 80+ supported languages with native names
notebooklm language list

# Show current language setting
notebooklm language get

# Set language for artifact generation
notebooklm language set zh_Hans  # Simplified Chinese
notebooklm language set ja       # Japanese
notebooklm language set en       # English (default)

Common language codes:

| Code | Language |
|---|---|
| `en` | English |
| `zh_Hans` | 中文(简体) - Simplified Chinese |
| `zh_Hant` | 中文(繁體) - Traditional Chinese |
| `ja` | 日本語 - Japanese |
| `ko` | 한국어 - Korean |
| `es` | Español - Spanish |
| `fr` | Français - French |
| `de` | Deutsch - German |
| `pt_BR` | Português (Brasil) |

Override per command: Use the `--language` flag on generate commands:

notebooklm generate audio --language ja   # Japanese podcast
notebooklm generate video --language zh_Hans  # Chinese video

Offline mode: Use the `--local` flag to skip server sync:

notebooklm language set zh_Hans --local  # Save locally only
notebooklm language get --local  # Read local config only

Troubleshooting

notebooklm --help              # Main commands
notebooklm auth check          # Diagnose auth issues
notebooklm auth check --test   # Full auth validation with network test
notebooklm notebook --help     # Notebook management
notebooklm source --help       # Source management
notebooklm research --help     # Research status/wait
notebooklm generate --help     # Content generation
notebooklm artifact --help     # Artifact management
notebooklm download --help     # Download content
notebooklm language --help     # Language settings

Diagnose auth: `notebooklm auth check` - shows cookie domains, storage path, validation status
Re-authenticate: `notebooklm login`
Check version: `notebooklm --version`
Refresh a CLI-managed install: `notebooklm skill install`