Claude-code-plugins-plus granola-observability

install
source · Clone the upstream repo
git clone https://github.com/jeremylongshore/claude-code-plugins-plus-skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/jeremylongshore/claude-code-plugins-plus-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/saas-packs/granola-pack/skills/granola-observability" ~/.claude/skills/jeremylongshore-claude-code-plugins-plus-granola-observability && rm -rf "$T"
manifest: plugins/saas-packs/granola-pack/skills/granola-observability/SKILL.md
source content

Granola Observability

Overview

Monitor Granola usage, track meeting patterns, and build analytics dashboards. Granola Enterprise includes a usage analytics dashboard. For deeper insights, build custom pipelines using Zapier to stream meeting metadata to BigQuery, Metabase, or other analytics platforms.

Prerequisites

  • Granola Business or Enterprise plan
  • Admin access for organization-level analytics
  • Optional: BigQuery/Metabase for custom dashboards, Zapier for data pipeline

Instructions

Step 1 — Built-in Analytics (Enterprise)

Access the analytics dashboard at Settings > Analytics (Enterprise plan):

| Metric | What It Shows |
| --- | --- |
| Total meetings captured | Meeting volume over time |
| Active users | Users who recorded meetings this period |
| Hours captured | Total meeting hours transcribed |
| Notes shared | How often notes are distributed |
| Action items created | Extracted action items across the org |
| Adoption rate | Active users / total licensed seats |

Step 2 — Define Key Metrics

Track these metrics to measure Granola's impact:

| Category | Metric | Target | Formula |
| --- | --- | --- | --- |
| Adoption | Activation rate | >80% | Users with 1+ meeting / total seats |
| Adoption | Weekly active users | >70% | Users recording this week / total seats |
| Quality | Capture rate | >70% | Meetings captured / total calendar meetings |
| Quality | Share rate | >50% | Notes shared / notes created |
| Efficiency | Time saved | >10 min/meeting | Survey: manual notes time - Granola time |
| Efficiency | Action completion | >80% | Actions completed / actions created |
| Health | Processing success | >99% | Successful enhancements / total attempts |
| Health | Integration uptime | >99% | Successful syncs / total sync attempts |
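Each ratio metric above reduces to the same percentage calculation; a minimal Python sketch (the count field names are illustrative, not a Granola API):

```python
def rate(numerator, denominator):
    """Percentage, guarded against a zero denominator (e.g., no seats provisioned yet)."""
    return round(100.0 * numerator / denominator, 1) if denominator else 0.0

def adoption_snapshot(counts):
    """counts: raw weekly tallies; keys are illustrative, not a Granola API."""
    return {
        "activation_rate_pct": rate(counts["users_with_meeting"], counts["total_seats"]),
        "weekly_active_pct": rate(counts["weekly_recorders"], counts["total_seats"]),
        "capture_rate_pct": rate(counts["meetings_captured"], counts["calendar_meetings"]),
        "share_rate_pct": rate(counts["notes_shared"], counts["notes_created"]),
    }

snapshot = adoption_snapshot({
    "users_with_meeting": 42, "total_seats": 50,
    "weekly_recorders": 38,
    "meetings_captured": 120, "calendar_meetings": 150,
    "notes_shared": 70, "notes_created": 120,
})
```

Comparing each value against the target column gives a quick red/green readout per category.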

Step 3 — Build a Custom Analytics Pipeline

Stream meeting metadata from Granola to a data warehouse via Zapier:

# Zapier: Granola → BigQuery pipeline
Trigger: Granola — Note Added to Folder ("All Meetings")

Step 1 — Code by Zapier (extract metadata):
  // Guard against missing fields so one malformed event doesn't halt the Zap
  const attendees = JSON.parse(inputData.attendees || '[]');
  const note = inputData.note_content || '';
  const data = {
    meeting_id: inputData.title + '_' + inputData.calendar_event_datetime,
    title: inputData.title,
    date: inputData.calendar_event_datetime,
    creator: inputData.creator_email,
    attendee_count: attendees.length,
    has_action_items: note.includes('- [ ]'),
    action_item_count: (note.match(/- \[ \]/g) || []).length,
    has_decisions: note.includes('## Decision') ||
                   note.includes('## Key Decision'),
    word_count: note.split(/\s+/).filter(Boolean).length,
    is_external: attendees.some(a => !a.email?.endsWith('@company.com')),
    workspace: inputData.folder || 'unknown',
    captured_at: new Date().toISOString(),
  };
  output = [data];

Step 2 — BigQuery: Insert Row
  Dataset: meeting_analytics
  Table: granola_meetings
  Row: {{metadata from step 1}}

BigQuery schema:

CREATE TABLE meeting_analytics.granola_meetings (
  meeting_id STRING NOT NULL,
  title STRING,
  date TIMESTAMP,
  creator STRING,
  attendee_count INT64,
  has_action_items BOOL,
  action_item_count INT64,
  has_decisions BOOL,
  word_count INT64,
  is_external BOOL,
  workspace STRING,
  captured_at TIMESTAMP
);
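One caveat with the schema above: a `meeting_id` built from title + timestamp changes if a meeting is renamed and can collide across identically titled events. A content hash is one hedged alternative (a sketch only; the input field names are assumptions):

```python
import hashlib

def stable_meeting_id(title, event_datetime, creator):
    """Deterministic key: the same inputs always map to the same id,
    so pipeline retries land on the same row instead of duplicating it."""
    raw = f"{title}|{event_datetime}|{creator}".encode("utf-8")
    return hashlib.sha256(raw).hexdigest()[:16]

mid = stable_meeting_id("Weekly sync", "2025-01-10T15:00:00Z", "ana@company.com")
```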

Step 4 — Analytics Queries

-- Weekly meeting volume by workspace
-- (date is a TIMESTAMP; cast to DATE for DATE_TRUNC and the DATE_SUB comparison)
SELECT
  workspace,
  DATE_TRUNC(DATE(date), WEEK) AS week,
  COUNT(*) AS meeting_count,
  SUM(action_item_count) AS total_actions,
  AVG(attendee_count) AS avg_attendees
FROM meeting_analytics.granola_meetings
WHERE DATE(date) >= DATE_SUB(CURRENT_DATE(), INTERVAL 12 WEEK)
GROUP BY workspace, week
ORDER BY week DESC, workspace;

-- Adoption: active users per week
SELECT
  DATE_TRUNC(DATE(date), WEEK) AS week,
  COUNT(DISTINCT creator) AS active_users
FROM meeting_analytics.granola_meetings
WHERE DATE(date) >= DATE_SUB(CURRENT_DATE(), INTERVAL 8 WEEK)
GROUP BY week
ORDER BY week DESC;

-- Meeting efficiency score (action items + decisions + 8 or fewer attendees)
SELECT
  title,
  date,
  CASE
    WHEN has_action_items AND has_decisions AND attendee_count <= 8 THEN 'Efficient'
    WHEN has_action_items OR has_decisions THEN 'Partially Efficient'
    ELSE 'Low Efficiency'
  END AS efficiency_rating
FROM meeting_analytics.granola_meetings
ORDER BY date DESC
LIMIT 50;

-- External vs internal meeting ratio
SELECT
  DATE_TRUNC(DATE(date), MONTH) AS month,
  COUNTIF(is_external) AS external_meetings,
  COUNTIF(NOT is_external) AS internal_meetings,
  ROUND(COUNTIF(is_external) * 100.0 / COUNT(*), 1) AS external_pct
FROM meeting_analytics.granola_meetings
GROUP BY month
ORDER BY month DESC;

Step 5 — Automated Reporting

Weekly Slack digest (via Zapier Schedule):

Trigger: Schedule by Zapier — Every Friday at 5 PM

Step 1 — BigQuery: Run Query
  Query: "SELECT COUNT(*) AS meetings, SUM(action_item_count) AS actions,
          COUNT(DISTINCT creator) AS active_users
          FROM meeting_analytics.granola_meetings
          WHERE DATE(date) >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)"

Step 2 — Slack: Send Message to #leadership
  Message: |
    :bar_chart: *Weekly Granola Report*

    *This Week:*
    - Meetings captured: {{meetings}}
    - Action items created: {{actions}}
    - Active users: {{active_users}}

    [View full dashboard →]
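If the digest ever moves out of Zapier (e.g., to a cron job posting through a Slack webhook), the same message body can be assembled in a few lines; a sketch, assuming the row shape returned by the query above:

```python
def format_weekly_digest(row):
    """row: dict with meetings, actions, active_users, matching the query's aliases."""
    return (
        ":bar_chart: *Weekly Granola Report*\n\n"
        "*This Week:*\n"
        f"- Meetings captured: {row['meetings']}\n"
        f"- Action items created: {row['actions']}\n"
        f"- Active users: {row['active_users']}"
    )

message = format_weekly_digest({"meetings": 34, "actions": 58, "active_users": 12})
```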

Step 6 — Health Monitoring and Alerts

Set up alerts for operational issues:

| Alert | Condition | Channel |
| --- | --- | --- |
| Low adoption | Active users <50% of seats (weekly) | Slack #it-alerts |
| Processing failures | >5% enhancement failures (daily) | PagerDuty |
| Integration outage | Slack/Notion/CRM sync failures >3 (hourly) | Slack #it-alerts |
| Zero meetings captured | No meetings for any workspace (daily) | Email to workspace admin |
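The low-adoption check reduces to a threshold comparison; a minimal sketch (the 50% threshold and message format are assumptions matching the table above):

```python
def low_adoption_alert(active_users, total_seats, threshold_pct=50.0):
    """Return an alert string when weekly active users drop below the threshold, else None."""
    if total_seats == 0:
        return None  # no licensed seats yet; nothing meaningful to alert on
    pct = 100.0 * active_users / total_seats
    if pct < threshold_pct:
        return f"Low adoption: {active_users}/{total_seats} seats active ({pct:.0f}%)"
    return None
```

A `None` result means no message is posted; anything else is sent to the alert channel.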

Status monitoring:

# Check Granola service status
curl -s https://status.granola.ai/api/v2/status.json | python3 -c "
import json, sys
data = json.load(sys.stdin)
status = data.get('status', {}).get('description', 'Unknown')
print(f'Granola Status: {status}')
"

Output

  • Built-in analytics reviewed and baselines established
  • Custom analytics pipeline streaming to data warehouse
  • Dashboard visualizing adoption, efficiency, and meeting patterns
  • Automated weekly/monthly reports delivered to stakeholders
  • Health monitoring alerts configured for operational issues

Error Handling

| Error | Cause | Fix |
| --- | --- | --- |
| Missing data in pipeline | Zapier trigger failed | Check Zap history, reconnect if needed |
| Duplicate entries in BigQuery | Zapier retry on timeout | Add deduplication (e.g., a MERGE keyed on meeting_id) |
| Dashboard shows stale data | Pipeline paused | Monitor Zapier health, restart paused Zaps |
| Low adoption alert false positive | New seats just added | Adjust alert threshold; use a percentage, not an absolute count |
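For the duplicate-entries fix, deduplication can also happen before rows reach the warehouse; a minimal in-process sketch that keeps the latest row per meeting_id (ISO-8601 `captured_at` strings compare correctly as plain strings):

```python
def dedupe_rows(rows):
    """Keep only the most recent row per meeting_id."""
    latest = {}
    for row in rows:
        key = row["meeting_id"]
        if key not in latest or row["captured_at"] > latest[key]["captured_at"]:
            latest[key] = row
    return list(latest.values())

rows = [
    {"meeting_id": "a1", "captured_at": "2025-01-10T15:00:00Z", "title": "Sync"},
    {"meeting_id": "a1", "captured_at": "2025-01-10T15:05:00Z", "title": "Sync"},  # retry
    {"meeting_id": "b2", "captured_at": "2025-01-10T16:00:00Z", "title": "1:1"},
]
deduped = dedupe_rows(rows)
```

The same keep-latest rule is what a warehouse-side MERGE would express declaratively.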

Next Steps

Proceed to granola-incident-runbook for incident response procedures.