sf-skills · sf-data

Install

  • Source · clone the upstream repo:

    ```shell
    git clone https://github.com/Jaganpro/sf-skills
    ```

  • Claude Code · install into ~/.claude/skills/:

    ```shell
    T=$(mktemp -d) && git clone --depth=1 https://github.com/Jaganpro/sf-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/sf-data" ~/.claude/skills/jaganpro-sf-skills-sf-data && rm -rf "$T"
    ```

  • Manifest: skills/sf-data/SKILL.md
Source Content

Salesforce Data Operations Expert (sf-data)

Use this skill when the user needs Salesforce data work: record CRUD, bulk import/export, test data generation, cleanup scripts, or data factory patterns for validating Apex, Flow, or integration behavior.

When This Skill Owns the Task

Use sf-data when the work involves:

  • sf data CLI commands
  • record creation, update, delete, upsert, export, or tree import/export
  • realistic test data generation
  • bulk data operations and cleanup
  • Apex anonymous scripts for data seeding / rollback

Delegate elsewhere when the task belongs to another skill; the Cross-Skill Integration table below covers the routing.


Important Mode Decision

Confirm which mode the user wants:

| Mode | Use when |
| --- | --- |
| Script generation | they want reusable .apex, CSV, or JSON assets without touching an org yet |
| Remote execution | they want records created / changed in a real org now |
Do not assume remote execution if the user may only want scripts.


Required Context to Gather First

Ask for or infer:

  • target object(s)
  • org alias, if remote execution is required
  • operation type: query, create, update, delete, upsert, import, export, cleanup
  • expected volume
  • whether this is test data, migration data, or one-off troubleshooting data
  • any parent-child relationships that must exist first

Core Operating Rules

  • sf-data acts on remote org data unless the user explicitly wants local script generation.
  • Objects and fields must already exist before data creation.
  • For automation testing, prefer 251+ records when bulk behavior matters.
  • Always think about cleanup before creating large or noisy datasets.
  • Never use real PII in generated test data.
  • Prefer CLI-first for straightforward CRUD; use anonymous Apex when the operation truly needs server-side orchestration.
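
The no-PII and 251+ rules above can be combined in a tiny generator. A minimal sketch (the file name and field set are illustrative, not part of the skill's assets) that emits a 251-row synthetic Account CSV, so bulk-sensitive automation runs across more than one 200-record processing chunk:

```shell
#!/usr/bin/env sh
# Sketch: generate 251 synthetic Account rows (no real PII) so that
# triggers/flows execute in more than one 200-record chunk.
OUT=accounts-seed.csv            # illustrative file name
echo 'Name,Description' > "$OUT"
i=1
while [ "$i" -le 251 ]; do
  echo "Test Account $i,seed-data batch" >> "$OUT"
  i=$((i + 1))
done
wc -l < "$OUT"                   # 1 header + 251 data rows
```

The resulting file can feed a bulk import or a factory script; the point is that the volume and the synthetic values are decided up front, not improvised per record.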

If required metadata is missing, stop and hand off to sf-metadata (to discover the schema) or sf-deploy (to deploy the missing schema first).


Recommended Workflow

1. Verify prerequisites

Confirm object / field availability, org auth, and required parent records.

2. Run describe-first pre-flight validation when schema is uncertain

Before creating or updating records, use object describe data to validate:

  • required fields
  • createable vs non-createable fields
  • picklist values
  • relationship fields and parent requirements

Example pattern:

```shell
sf sobject describe --sobject ObjectName --target-org <alias> --json
```

Helpful filters:

```shell
# Required + createable fields
jq '.result.fields[] | select(.nillable==false and .createable==true) | {name, type}'

# Valid picklist values for one field
jq '.result.fields[] | select(.name=="StageName") | .picklistValues[].value'

# Fields that cannot be set on create
jq '.result.fields[] | select(.createable==false) | .name'
```
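
These filters can be exercised without an org by faking the `--json` payload. In this sketch the field list is invented for illustration; in real use you would pipe the output of `sf sobject describe ... --json` instead of reading a sample file:

```shell
#!/usr/bin/env sh
# Sketch: apply the required+createable filter to a mock describe payload.
# Field entries below are invented; a real describe has many more attributes.
cat > describe-sample.json <<'EOF'
{ "result": { "fields": [
  { "name": "Name",  "type": "string", "nillable": false, "createable": true  },
  { "name": "Id",    "type": "id",     "nillable": false, "createable": false },
  { "name": "Phone", "type": "phone",  "nillable": true,  "createable": true  }
] } }
EOF
# Only Name survives: Id is not createable, Phone is nillable.
jq '.result.fields[] | select(.nillable==false and .createable==true) | {name, type}' \
  describe-sample.json
```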

3. Choose the smallest correct mechanism

| Need | Default approach |
| --- | --- |
| small one-off CRUD | sf data single-record commands |
| large import/export | Bulk API 2.0 via sf data ... bulk |
| parent-child seed set | tree import/export |
| reusable test dataset | factory / anonymous Apex script |
| reversible experiment | cleanup script or savepoint-based approach |
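
For the parent-child row of this table, the tree mechanism expects a nested records file. A hedged sketch of that shape (object names and values are illustrative), with the import command that would consume it against a real org:

```shell
#!/usr/bin/env sh
# Sketch: parent-child seed file in `sf data import tree` record format.
# The nested Contacts block guarantees the parent Account exists before
# its child Contact is created.
cat > Account-Contact.json <<'EOF'
{
  "records": [
    {
      "attributes": { "type": "Account", "referenceId": "AccountRef1" },
      "Name": "Seed Account 1",
      "Contacts": {
        "records": [
          {
            "attributes": { "type": "Contact", "referenceId": "ContactRef1" },
            "LastName": "SeedContact"
          }
        ]
      }
    }
  ]
}
EOF
# Then, against a real org:
#   sf data import tree --files Account-Contact.json --target-org <alias>
```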

4. Execute or generate assets

Use the built-in templates under assets/ when they fit:

  • assets/factories/
  • assets/bulk/
  • assets/cleanup/
  • assets/soql/
  • assets/csv/
  • assets/json/

5. Verify results

Check counts, relationships, and record IDs after creation or update.

6. Apply a bounded retry strategy

If creation fails:

  1. try the primary CLI shape once
  2. retry once with corrected parameters
  3. re-run describe / validate assumptions
  4. pivot to a different mechanism or provide a manual workaround

Do not repeat the same failing command indefinitely.
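
The bounded strategy above can be wrapped in a small helper. A sketch, where the `flaky` stand-in only simulates a first-attempt failure for demonstration and is not part of the skill:

```shell
#!/usr/bin/env sh
# Sketch: bounded retry -- try once, retry once, then stop and signal
# that describe re-validation or a different mechanism is needed.
run_bounded() {
  "$@" && return 0                                        # attempt 1
  echo "retrying once with corrected parameters..." >&2
  "$@" && return 0                                        # attempt 2 (last)
  echo "giving up: re-run describe or pivot to another mechanism" >&2
  return 1
}

# Demo: a stand-in command that fails only on its first call.
rm -f .attempts
flaky() {
  if [ -f .attempts ]; then echo "created"; else touch .attempts; return 1; fi
}
run_bounded flaky
```

The helper never loops; after two attempts it returns nonzero so the caller must change approach rather than hammer the same failing command.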

7. Leave cleanup guidance

Provide exact cleanup commands or rollback assets whenever data was created.


High-Signal Rules

Bulk safety

  • use bulk operations for large volumes
  • test automation-sensitive behavior with 251+ records where appropriate
  • avoid one-record-at-a-time patterns for bulk scenarios

Data integrity

  • include required fields
  • validate picklist values before creation
  • verify parent IDs and relationship integrity
  • account for validation rules and duplicate constraints
  • exclude non-createable fields from input payloads

Cleanup discipline

Prefer one of:

  • delete-by-ID
  • delete-by-pattern
  • delete-by-created-date window
  • rollback / savepoint patterns for script-based test runs
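
Delete-by-ID is the most mechanical of these options. A sketch that turns a captured ID list into the CSV shape accepted by `sf data delete bulk` (the IDs below are fake placeholders):

```shell
#!/usr/bin/env sh
# Sketch: build a bulk-delete input file from record IDs captured at
# creation time. IDs here are fabricated 18-char placeholders.
cat > created-ids.txt <<'EOF'
001000000000001AAA
001000000000002AAA
EOF
{ echo "Id"; cat created-ids.txt; } > delete-accounts.csv
# Then, against a real org:
#   sf data delete bulk --sobject Account --file delete-accounts.csv --target-org <alias>
wc -l < delete-accounts.csv      # header + one row per ID
```

Capturing IDs into a file at creation time is what makes this cleanup exact rather than pattern-based.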

Common Failure Patterns

| Error | Likely cause | Default fix direction |
| --- | --- | --- |
| INVALID_FIELD | wrong field API name or FLS issue | verify schema and access |
| REQUIRED_FIELD_MISSING | mandatory field omitted | include required values from describe data |
| INVALID_CROSS_REFERENCE_KEY | bad parent ID | create / verify parent first |
| FIELD_CUSTOM_VALIDATION_EXCEPTION | validation rule blocked the record | use valid test data or adjust setup |
| invalid picklist value | guessed value instead of describe-backed value | inspect picklist values first |
| non-writeable field error | field is not createable / updateable | remove it from the payload |
| bulk limits / timeouts | wrong tool for the volume | switch to bulk / staged import |

Output Format

When finishing, report in this order:

  1. Operation performed
  2. Objects and counts
  3. Target org or local artifact path
  4. Record IDs / output files
  5. Verification result
  6. Cleanup instructions

Suggested shape:

```
Data operation: <create / update / delete / export / seed>
Objects: <object + counts>
Target: <org alias or local path>
Artifacts: <record ids / csv / apex / json files>
Verification: <passed / partial / failed>
Cleanup: <exact delete or rollback guidance>
```

Cross-Skill Integration

| Need | Delegate to | Reason |
| --- | --- | --- |
| discover object / field structure | sf-metadata | accurate schema grounding |
| run bulk-sensitive Apex validation | sf-testing | test execution and coverage |
| deploy missing schema first | sf-deploy | metadata readiness |
| implement production logic consuming the data | sf-apex or sf-flow | behavior implementation |

Reference Map

  • Start here
  • Query / bulk / cleanup
  • Examples / limits


Score Guide

| Score | Meaning |
| --- | --- |
| 117+ | strong production-safe data workflow |
| 104–116 | good operation with minor improvements possible |
| 91–103 | acceptable but review advised |
| 78–90 | partial / risky patterns present |
| < 78 | blocked until corrected |