Awesome-omni-skills supabase-automation
Supabase Automation via Rube MCP workflow skill. Use this skill when the user needs to automate Supabase database queries, table management, project administration, storage, edge functions, and SQL execution via Rube MCP (Composio). Always search tools first for current schemas, and the operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
git clone https://github.com/diegosouzapw/awesome-omni-skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/supabase-automation" ~/.claude/skills/diegosouzapw-awesome-omni-skills-supabase-automation && rm -rf "$T"
skills/supabase-automation/SKILL.md -- Supabase Automation via Rube MCP
Overview
This public intake copy packages
plugins/antigravity-awesome-skills-claude/skills/supabase-automation from https://github.com/sickn33/antigravity-awesome-skills into the native Omni Skills editorial shape without hiding its origin.
Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.
This intake keeps the copied upstream files intact and uses
metadata.json plus ORIGIN.md as the provenance anchor for review.
Supabase Automation via Rube MCP: Automate Supabase operations including database queries, table schema inspection, SQL execution, project and organization management, storage buckets, edge functions, and service health monitoring through Composio's Supabase toolkit.
Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: Prerequisites, Common Patterns, Known Pitfalls, Limitations.
When to Use This Skill
Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.
- Use this skill to execute the workflow or actions described in the overview.
- Use when the request clearly matches the imported source intent: Automate Supabase database queries, table management, project administration, storage, edge functions, and SQL execution via Rube MCP (Composio). Always search tools first for current schemas.
- Use when the operator should preserve upstream workflow detail instead of rewriting the process from scratch.
- Use when provenance needs to stay visible in the answer, PR, or review packet.
- Use when copied upstream references, examples, or scripts materially improve the answer.
- Use when the workflow should remain reviewable in the public intake repo before the private enhancer takes over.
Operating Table
| Situation | Start here | Why it matters |
|---|---|---|
| First-time use | | Confirms repository, branch, commit, and imported path before touching the copied workflow |
| Provenance review | | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | | Starts with the smallest copied file that materially changes execution |
| Supporting context | | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | | Helps the operator switch to a stronger native skill when the task drifts |
Workflow
This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.
- Verify Rube MCP is available by confirming RUBE_SEARCH_TOOLS responds
- Call RUBE_MANAGE_CONNECTIONS with toolkit supabase
- If connection is not ACTIVE, follow the returned auth link to complete Supabase authentication
- Confirm connection status shows ACTIVE before running any workflows
- SUPABASE_LIST_ALL_PROJECTS -- List projects to find the target project_ref [Prerequisite]
- SUPABASE_LIST_TABLES -- List all tables and views in the database [Prerequisite]
- SUPABASE_GET_TABLE_SCHEMAS -- Get detailed column types, constraints, and relationships [Prerequisite for writes]
Imported Workflow Notes
Imported: Setup
Get Rube MCP: Add
https://rube.app/mcp as an MCP server in your client configuration. No API keys needed — just add the endpoint and it works.
- Verify Rube MCP is available by confirming RUBE_SEARCH_TOOLS responds
- Call RUBE_MANAGE_CONNECTIONS with toolkit supabase
- If connection is not ACTIVE, follow the returned auth link to complete Supabase authentication
- Confirm connection status shows ACTIVE before running any workflows
Imported: Core Workflows
1. Query and Manage Database Tables
When to use: User wants to read data from tables, inspect schemas, or perform CRUD operations
Tool sequence:
- SUPABASE_LIST_ALL_PROJECTS -- List projects to find the target project_ref [Prerequisite]
- SUPABASE_LIST_TABLES -- List all tables and views in the database [Prerequisite]
- SUPABASE_GET_TABLE_SCHEMAS -- Get detailed column types, constraints, and relationships [Prerequisite for writes]
- SUPABASE_SELECT_FROM_TABLE -- Query rows with filtering, sorting, and pagination [Required for reads]
- SUPABASE_BETA_RUN_SQL_QUERY -- Execute arbitrary SQL for complex queries, inserts, updates, or deletes [Required for writes]
Key parameters for SELECT_FROM_TABLE:
- project_ref: 20-character lowercase project reference
- table: Table or view name to query
- select: Comma-separated column list (supports nested selections and JSON paths like profile->avatar_url)
- filters: Array of filter objects with column, operator, value
- order: Sort expression like created_at.desc
- limit: Max rows to return (minimum 1)
- offset: Rows to skip for pagination
PostgREST filter operators:
- eq, neq: Equal / not equal
- gt, gte, lt, lte: Comparison operators
- like, ilike: Pattern matching (case-sensitive / insensitive)
- is: IS check (for null, true, false)
- in: In a list of values
- cs, cd: Contains / contained by (arrays)
- fts, plfts, phfts, wfts: Full-text search variants
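To make the parameter shape concrete, here is a hedged sketch of a SUPABASE_SELECT_FROM_TABLE payload using the documented parameters and PostgREST operators. The project reference, table name, and columns are hypothetical placeholders, and the exact envelope Rube MCP expects may differ:

```python
# Hypothetical payload sketch for SUPABASE_SELECT_FROM_TABLE.
# "abcdefghijklmnopqrst" is a placeholder 20-letter project reference;
# the "orders" table and its columns are made-up examples.
payload = {
    "project_ref": "abcdefghijklmnopqrst",
    "table": "orders",
    "select": "id,total,customer->name",   # nested JSON path selection
    "filters": [
        {"column": "status", "operator": "eq", "value": "shipped"},
        {"column": "total", "operator": "gte", "value": 100},
    ],
    "order": "created_at.desc",
    "limit": 20,
    "offset": 0,
}

# Sanity checks mirroring the documented constraints.
assert len(payload["project_ref"]) == 20 and payload["project_ref"].islower()
assert payload["limit"] >= 1
```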
Key parameters for RUN_SQL_QUERY:
- ref: Project reference (20 lowercase letters, pattern ^[a-z]{20}$)
- query: Valid PostgreSQL SQL statement
- read_only: Boolean to force a read-only transaction (safer for SELECTs)
Pitfalls:
- project_ref must be exactly 20 lowercase letters (a-z only, no numbers or hyphens)
- SELECT_FROM_TABLE is read-only; use RUN_SQL_QUERY for INSERT, UPDATE, DELETE operations
- For PostgreSQL array columns (text[], integer[]), use ARRAY['item1', 'item2'] or '{"item1", "item2"}' syntax, NOT JSON array syntax '["item1", "item2"]'
- SQL identifiers that are case-sensitive must be double-quoted in queries
- Complex DDL operations may timeout (~60 second limit); break into smaller queries
- ERROR 42P01 "relation does not exist" usually means unquoted case-sensitive identifiers
- ERROR 42883 "function does not exist" means you are calling non-standard helpers; prefer information_schema queries
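The quoting and array-syntax pitfalls can be illustrated with a small sketch. The table and column names here are hypothetical examples, not real schema objects:

```python
# Hypothetical SQL strings illustrating the pitfalls above.
# "MyTable" and its "tags" column are made-up examples.
good_insert = (
    'INSERT INTO "MyTable" (tags) '          # quoted case-sensitive name
    "VALUES (ARRAY['item1', 'item2'])"        # PostgreSQL array syntax
)
bad_insert = (
    "INSERT INTO MyTable (tags) "             # unquoted -> risks ERROR 42P01
    "VALUES ('[\"item1\", \"item2\"]')"       # JSON syntax, not a Postgres array
)

# Sketch of a read-only RUN_SQL_QUERY payload, per the read_only parameter.
run_sql_payload = {
    "ref": "abcdefghijklmnopqrst",            # placeholder project reference
    "query": 'SELECT * FROM "MyTable" LIMIT 10',
    "read_only": True,                        # safer default for SELECTs
}
```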
2. Manage Projects and Organizations
When to use: User wants to list projects, inspect configurations, or manage organizations
Tool sequence:
- SUPABASE_LIST_ALL_ORGANIZATIONS -- List all organizations (IDs and names) [Required]
- SUPABASE_GETS_INFORMATION_ABOUT_THE_ORGANIZATION -- Get detailed org info by slug [Optional]
- SUPABASE_LIST_MEMBERS_OF_AN_ORGANIZATION -- List org members with roles and MFA status [Optional]
- SUPABASE_LIST_ALL_PROJECTS -- List all projects with metadata [Required]
- SUPABASE_GETS_PROJECT_S_POSTGRES_CONFIG -- Get database configuration [Optional]
- SUPABASE_GETS_PROJECT_S_AUTH_CONFIG -- Get authentication configuration [Optional]
- SUPABASE_GET_PROJECT_API_KEYS -- Get API keys (sensitive -- handle carefully) [Optional]
- SUPABASE_GETS_PROJECT_S_SERVICE_HEALTH_STATUS -- Check service health [Optional]
Key parameters:
- ref: Project reference for project-specific tools
- slug: Organization slug (URL-friendly identifier) for org tools
- services: Array of services for health check: auth, db, db_postgres_user, pg_bouncer, pooler, realtime, rest, storage
Pitfalls:
- LIST_ALL_ORGANIZATIONS returns both id and slug; LIST_MEMBERS_OF_AN_ORGANIZATION expects slug, not id
- GET_PROJECT_API_KEYS returns live secrets -- NEVER log, display, or persist full key values
- GETS_PROJECT_S_SERVICE_HEALTH_STATUS requires a non-empty services array; an empty array causes an invalid_request error
- Config tools may return 401/403 if the token lacks the required scope; handle gracefully rather than failing the whole workflow
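One way to honor the "handle gracefully" guidance is to treat 401/403 from enrichment tools as skippable rather than fatal. In this sketch, the fetcher functions and ScopeError are hypothetical stand-ins for real RUBE tool invocations and their error surface:

```python
# Sketch: skip enrichment tools that fail on missing scopes instead of
# aborting the whole workflow. ScopeError and the fetchers are
# hypothetical stand-ins for real tool calls.
class ScopeError(Exception):
    def __init__(self, status):
        self.status = status

def gather_configs(fetchers):
    results, skipped = {}, []
    for name, fetch in fetchers.items():
        try:
            results[name] = fetch()
        except ScopeError as err:
            if err.status in (401, 403):
                skipped.append(name)   # degrade gracefully on scope errors
            else:
                raise                  # real failures still surface
    return results, skipped

def postgres_config():
    return {"setting": "value"}        # placeholder config payload

def auth_config():
    raise ScopeError(403)              # simulate a missing scope

results, skipped = gather_configs(
    {"postgres": postgres_config, "auth": auth_config}
)
```

Recording the skipped tool names keeps the gap visible to reviewers instead of silently dropping it.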
3. Inspect Database Schema
When to use: User wants to understand table structure, columns, constraints, or generate types
Tool sequence:
- SUPABASE_LIST_ALL_PROJECTS -- Find the target project [Prerequisite]
- SUPABASE_LIST_TABLES -- Enumerate all tables and views with metadata [Required]
- SUPABASE_GET_TABLE_SCHEMAS -- Get detailed schema for specific tables [Required]
- SUPABASE_GENERATE_TYPE_SCRIPT_TYPES -- Generate TypeScript types from schema [Optional]
Key parameters for LIST_TABLES:
- project_ref: Project reference
- schemas: Array of schema names to search (e.g., ["public"]); omit for all non-system schemas
- include_views: Include views alongside tables (default true)
- include_metadata: Include row count estimates and sizes (default true)
- include_system_schemas: Include pg_catalog, information_schema, etc. (default false)
Key parameters for GET_TABLE_SCHEMAS:
- project_ref: Project reference
- table_names: Array of table names (max 20 per request); supports schema prefixes like public.users, auth.users
- include_relationships: Include foreign key info (default true)
- include_indexes: Include index info (default true)
- exclude_null_values: Cleaner output by hiding null fields (default true)
Key parameters for GENERATE_TYPE_SCRIPT_TYPES:
- ref: Project reference
- included_schemas: Comma-separated schema names (default "public")
Pitfalls:
- Table names without a schema prefix assume the public schema
- row_count and size_bytes from LIST_TABLES may be null for views or recently created tables; treat as unknown, not zero
- GET_TABLE_SCHEMAS has a max of 20 tables per request; batch if needed
- TypeScript types include all tables in specified schemas; cannot filter individual tables
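The 20-table cap can be handled by chunking the table_names array before issuing GET_TABLE_SCHEMAS requests, as in this sketch (the table names are placeholders):

```python
# Sketch: batch table names into chunks of at most 20 per
# GET_TABLE_SCHEMAS request, per the documented limit.
MAX_TABLES_PER_REQUEST = 20

def chunk_table_names(table_names, size=MAX_TABLES_PER_REQUEST):
    """Split a list of table names into request-sized batches."""
    return [table_names[i:i + size] for i in range(0, len(table_names), size)]

# 45 placeholder tables -> three batches of 20, 20, and 5.
batches = chunk_table_names([f"public.t{i}" for i in range(45)])
```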
4. Manage Edge Functions
When to use: User wants to list, inspect, or work with Supabase Edge Functions
Tool sequence:
- SUPABASE_LIST_ALL_PROJECTS -- Find the project reference [Prerequisite]
- SUPABASE_LIST_ALL_FUNCTIONS -- List all edge functions with metadata [Required]
- SUPABASE_RETRIEVE_A_FUNCTION -- Get detailed info for a specific function [Optional]
Key parameters:
- ref: Project reference
- Function slug for RETRIEVE_A_FUNCTION
Pitfalls:
- LIST_ALL_FUNCTIONS returns metadata only, not function code or logs
- created_at and updated_at may be epoch milliseconds; convert to human-readable timestamps
- These tools cannot create or deploy edge functions; they are read-only inspection tools
- Permission errors may occur without org/project admin rights
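If created_at or updated_at arrive as epoch milliseconds, a conversion like this sketch makes them readable. The threshold heuristic (values above ~1e12 are milliseconds) is an assumption for illustration, not part of any Supabase API contract:

```python
# Sketch: convert a possibly epoch-millisecond timestamp to ISO 8601.
# The >1e12 heuristic for detecting milliseconds is an assumption.
from datetime import datetime, timezone

def to_iso(ts):
    if isinstance(ts, (int, float)):
        if ts > 1e12:          # assume milliseconds, not seconds
            ts = ts / 1000.0
        return datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()
    return ts                  # already a string timestamp; pass through

iso = to_iso(1700000000000)    # epoch ms -> "2023-11-14T22:13:20+00:00"
```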
5. Manage Storage Buckets
When to use: User wants to list storage buckets or manage file storage
Tool sequence:
- SUPABASE_LIST_ALL_PROJECTS -- Find the project reference [Prerequisite]
- SUPABASE_LISTS_ALL_BUCKETS -- List all storage buckets [Required]
Key parameters:
- ref: Project reference
Pitfalls:
- LISTS_ALL_BUCKETS returns the bucket list only, not bucket contents or access policies
- For file uploads, SUPABASE_RESUMABLE_UPLOAD_SIGN_OPTIONS_WITH_ID handles CORS preflight for TUS resumable uploads only
- Direct file operations may require using proxy_execute with the Supabase storage API
Imported: Prerequisites
- Rube MCP must be connected (RUBE_SEARCH_TOOLS available)
- Active Supabase connection via RUBE_MANAGE_CONNECTIONS with toolkit supabase
- Always call RUBE_SEARCH_TOOLS first to get current tool schemas
Examples
Example 1: Ask for the upstream workflow directly
Use @supabase-automation to handle <task>. Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.
Explanation: This is the safest starting point when the operator needs the imported workflow, but not the entire repository.
Example 2: Ask for a provenance-grounded review
Review @supabase-automation against metadata.json and ORIGIN.md, then explain which copied upstream files you would load first and why.
Explanation: Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.
Example 3: Narrow the copied support files before execution
Use @supabase-automation for <task>. Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.
Explanation: This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.
Example 4: Build a reviewer packet
Review @supabase-automation using the copied upstream files plus provenance, then summarize any gaps before merge.
Explanation: This is useful when the PR is waiting for human review and you want a repeatable audit packet.
Best Practices
Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution.
- Keep the imported skill grounded in the upstream repository; do not invent steps that the source material cannot support.
- Prefer the smallest useful set of support files so the workflow stays auditable and fast to review.
- Keep provenance, source commit, and imported file paths visible in notes and PR descriptions.
- Point directly at the copied upstream files that justify the workflow instead of relying on generic review boilerplate.
- Treat generated examples as scaffolding; adapt them to the concrete task before execution.
- Route to a stronger native skill when architecture, debugging, design, or security concerns become dominant.
Troubleshooting
Problem: The operator skipped the imported context and answered too generically
Symptoms: The result ignores the upstream workflow in
plugins/antigravity-awesome-skills-claude/skills/supabase-automation, fails to mention provenance, or does not use any copied source files at all.
Solution: Re-open metadata.json, ORIGIN.md, and the most relevant copied upstream files. Load only the files that materially change the answer, then restate the provenance before continuing.
Problem: The imported workflow feels incomplete during review
Symptoms: Reviewers can see the generated
SKILL.md, but they cannot quickly tell which references, examples, or scripts matter for the current task.
Solution: Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.
Problem: The task drifted into a different specialization
Symptoms: The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better.
Solution: Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.
Related Skills
- @server-management -- Use when the work is better handled by that native specialization after this imported skill establishes context.
- @service-mesh-expert -- Use when the work is better handled by that native specialization after this imported skill establishes context.
- @service-mesh-observability -- Use when the work is better handled by that native specialization after this imported skill establishes context.
- @sexual-health-analyzer -- Use when the work is better handled by that native specialization after this imported skill establishes context.
Additional Resources
Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.
| Resource family | What it gives the reviewer | Example path |
|---|---|---|
| copied reference notes, guides, or background material from upstream | | |
| worked examples or reusable prompts copied from upstream | | |
| upstream helper scripts that change execution or validation | | |
| routing or delegation notes that are genuinely part of the imported package | | |
| supporting assets or schemas copied from the source package | | |
Imported Reference Notes
Imported: Quick Reference
| Task | Tool Slug | Key Params |
|---|---|---|
| List organizations | SUPABASE_LIST_ALL_ORGANIZATIONS | (none) |
| Get org info | SUPABASE_GETS_INFORMATION_ABOUT_THE_ORGANIZATION | slug |
| List org members | SUPABASE_LIST_MEMBERS_OF_AN_ORGANIZATION | slug |
| List projects | SUPABASE_LIST_ALL_PROJECTS | (none) |
| List tables | SUPABASE_LIST_TABLES | project_ref, schemas |
| Get table schemas | SUPABASE_GET_TABLE_SCHEMAS | project_ref, table_names |
| Query table | SUPABASE_SELECT_FROM_TABLE | project_ref, table, filters, limit |
| Run SQL | SUPABASE_BETA_RUN_SQL_QUERY | ref, query, read_only |
| Generate TS types | SUPABASE_GENERATE_TYPE_SCRIPT_TYPES | ref, included_schemas |
| Postgres config | SUPABASE_GETS_PROJECT_S_POSTGRES_CONFIG | ref |
| Auth config | SUPABASE_GETS_PROJECT_S_AUTH_CONFIG | ref |
| Get API keys | SUPABASE_GET_PROJECT_API_KEYS | ref |
| Service health | SUPABASE_GETS_PROJECT_S_SERVICE_HEALTH_STATUS | ref, services |
| List edge functions | SUPABASE_LIST_ALL_FUNCTIONS | ref |
| Get edge function | SUPABASE_RETRIEVE_A_FUNCTION | ref, function slug |
| List storage buckets | SUPABASE_LISTS_ALL_BUCKETS | ref |
| List DB branches | | ref |
Imported: Common Patterns
ID Resolution
- Project reference: SUPABASE_LIST_ALL_PROJECTS -- extract the ref field (20 lowercase letters)
- Organization slug: SUPABASE_LIST_ALL_ORGANIZATIONS -- use slug (not id) for downstream org tools
- Table names: SUPABASE_LIST_TABLES -- enumerate available tables before querying
- Schema discovery: SUPABASE_GET_TABLE_SCHEMAS -- inspect columns and constraints before writes
Pagination
- SUPABASE_SELECT_FROM_TABLE: Uses offset + limit pagination. Increment offset by limit until fewer rows than limit are returned.
- SUPABASE_LIST_ALL_PROJECTS: May paginate for large accounts; follow cursors/pages until exhausted.
- SUPABASE_LIST_TABLES: May paginate for large databases.
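The offset + limit pattern can be sketched as a simple loop. Here fetch_page is a hypothetical stand-in for a SELECT_FROM_TABLE call; only the pagination logic is the point:

```python
# Sketch: offset + limit pagination as described above.
# fetch_page stands in for a SUPABASE_SELECT_FROM_TABLE invocation.
def paginate(fetch_page, limit=20):
    rows, offset = [], 0
    while True:
        page = fetch_page(limit=limit, offset=offset)
        rows.extend(page)
        if len(page) < limit:   # a short page means the table is exhausted
            break
        offset += limit
    return rows

# Simulated 45-row table served in slices.
data = list(range(45))
rows = paginate(lambda limit, offset: data[offset:offset + limit], limit=20)
```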
SQL Best Practices
- Always use SUPABASE_GET_TABLE_SCHEMAS or SUPABASE_LIST_TABLES before writing SQL
- Use read_only: true for SELECT queries to prevent accidental mutations
- Quote case-sensitive identifiers: SELECT * FROM "MyTable", not SELECT * FROM MyTable
- Use PostgreSQL array syntax for array columns: ARRAY['a', 'b'], not ['a', 'b']
- Break complex DDL into smaller statements to avoid timeouts
Imported: Known Pitfalls
ID Formats
- Project references are exactly 20 lowercase letters (a-z): pattern ^[a-z]{20}$
- Organization identifiers come as both id (UUID) and slug (URL-friendly string); tools vary in which they accept
- LIST_MEMBERS_OF_AN_ORGANIZATION requires slug, not id
SQL Execution
- BETA_RUN_SQL_QUERY has a ~60 second timeout for complex operations
- PostgreSQL array syntax required: ARRAY['item'] or '{"item"}', NOT JSON syntax '["item"]'
- Case-sensitive identifiers must be double-quoted in SQL
- ERROR 42P01: relation does not exist (check quoting and schema prefix)
- ERROR 42883: function does not exist (use information_schema instead of custom helpers)
Sensitive Data
- GET_PROJECT_API_KEYS returns service-role keys -- NEVER expose full values
- Auth config tools exclude secrets but may still contain sensitive configuration
- Always mask or truncate API keys in output
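Masking before any output can follow a sketch like this. The key value below is fabricated for illustration and makes no claim about real Supabase key formats:

```python
# Sketch: mask an API key so only a short prefix survives in logs.
# The example key string is fabricated.
def mask_key(key, visible=6):
    """Keep a short identifying prefix; hide the rest."""
    if len(key) <= visible:
        return "*" * len(key)
    return key[:visible] + "..." + "*" * 4

masked = mask_key("sbp_0123456789abcdef0123456789abcdef")
```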
Schema Metadata
- row_count and size_bytes from LIST_TABLES can be null; do not treat as zero
- System schemas are excluded by default; set include_system_schemas: true to see them
- Views appear alongside tables unless include_views: false
Rate Limits and Permissions
- Enrichment tools (API keys, configs) may return 401/403 without proper scopes; skip gracefully
- Large table listings may require pagination
- GETS_PROJECT_S_SERVICE_HEALTH_STATUS fails with an empty services array -- always specify at least one
Imported: Limitations
- Use this skill only when the task clearly matches the scope described above.
- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.