claude-skill-registry · bq-query
Design and execute BigQuery queries. Use for schema exploration, writing SQL from requirements, running queries, checking costs, or validating syntax.
install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/bq-query" ~/.claude/skills/majiayu000-claude-skill-registry-bq-query && rm -rf "$T"
manifest:
skills/data/bq-query/SKILL.md
BigQuery Query
Rules: Follow coding-standards for SQL naming and readability.
Prerequisites
Check gcloud configuration before running queries:
gcloud config get-value project
- If authentication error: prompt user to run gcloud auth login, then resume
- If project unset: prompt user to run gcloud config set project <PROJECT_ID>
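The checks above can be combined into a single preflight step. A minimal sketch, assuming gcloud is installed; the function name check_gcloud is illustrative, not part of the skill:

```shell
# Illustrative preflight helper (not part of the skill itself).
check_gcloud() {
  local project
  # Suppress the "(unset)" warning gcloud may print to stderr
  project=$(gcloud config get-value project 2>/dev/null)
  if [ -z "$project" ] || [ "$project" = "(unset)" ]; then
    echo "Project unset: run 'gcloud config set project <PROJECT_ID>'" >&2
    return 1
  fi
  echo "Using project: $project"
}
```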
Workflow
- Clarify requirements: Understand what data is needed and why. If requirements are already in context, proceed to the next step.
- Understand schema: Explore available datasets. If schema is already in context, proceed to the next step.
  bq ls project:dataset                   # List tables
  bq show --schema project:dataset.table  # Show table schema
- Design query: Write SQL based on requirements and schema.
- Use CTEs for readability
- Use fully-qualified table names: project.dataset.table
- Specify exact date ranges to limit scanned data
- Filter partitioned tables by partition key
- Avoid correlated subqueries (use JOINs/CTEs)
- Filter early with CTEs before joining large tables
- Use LIMIT for exploration queries
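The design guidelines above can be illustrated with one query. The project, dataset, table, and column names here are hypothetical; the query shows a CTE that filters a partitioned table early, fully-qualified names, an exact date range, and a LIMIT guardrail:

```shell
# Hypothetical names throughout; illustrates the SQL design guidelines.
SQL=$(cat <<'EOF'
WITH recent_orders AS (
  SELECT order_id, customer_id, amount
  FROM `my-project.sales.orders`                          -- fully-qualified
  WHERE order_date BETWEEN '2024-01-01' AND '2024-01-31'  -- partition filter
)
SELECT c.name, SUM(o.amount) AS total_amount
FROM recent_orders AS o
JOIN `my-project.sales.customers` AS c USING (customer_id)
GROUP BY c.name
ORDER BY total_amount DESC
LIMIT 100                                                 -- exploration guardrail
EOF
)
# To run: bq query --use_legacy_sql=false "$SQL"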
- Dry run: Validate syntax and estimate cost
  bq query --use_legacy_sql=false --dry_run "SELECT ..."
  Cost: ~$5/TB. If >2GB, ask user before executing.
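At ~$5/TB, the 2GB confirmation threshold corresponds to roughly one cent per query. A sketch of the arithmetic as shell helpers (the helper names are illustrative):

```shell
# Illustrative helpers: convert a dry-run byte estimate into dollars,
# and gate on the 2GB confirmation threshold. Assumes ~$5 per TB scanned.
bytes_to_usd() {
  # 1 TB = 1099511627776 bytes (1024^4)
  awk -v b="$1" 'BEGIN { printf "%.4f", b * 5 / 1099511627776 }'
}
needs_confirmation() {
  # True (exit 0) when the estimated scan exceeds 2 GB
  [ "$1" -gt 2147483648 ]
}
```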
- Execute: Run after a successful dry run, once any required user confirmation has been given
  bq query --use_legacy_sql=false --format=csv "SELECT ..."
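The dry-run-then-execute workflow can be wrapped in one function. A sketch under stated assumptions: the function name is illustrative, and parsing the byte count out of bq's dry-run message is an assumption about its output format:

```shell
# Illustrative wrapper: dry-run first, execute only under the 2GB threshold.
# Parsing "... will process N bytes ..." from bq's output is an assumption.
run_query() {
  local sql="$1" est bytes
  est=$(bq query --use_legacy_sql=false --dry_run "$sql" 2>&1)
  bytes=$(printf '%s\n' "$est" | grep -Eo '[0-9]+ bytes' | head -n1 | grep -Eo '[0-9]+')
  if [ "${bytes:-0}" -gt 2147483648 ]; then
    echo "Estimated scan is ${bytes} bytes (>2GB): confirm before executing" >&2
    return 1
  fi
  bq query --use_legacy_sql=false --format=csv "$sql"
}
```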