Awesome-Agent-Skills-for-Empirical-Research / education-data-query

Clone the full repository:

```shell
git clone https://github.com/brycewang-stanford/Awesome-Agent-Skills-for-Empirical-Research
```

Or copy only this skill into ~/.claude/skills:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/brycewang-stanford/Awesome-Agent-Skills-for-Empirical-Research "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/17-DAAF-Contribution-Community-daaf/dot-claude/skills/education-data-query" ~/.claude/skills/brycewang-stanford-awesome-agent-skills-for-empirical-research-education-data-qu && rm -rf "$T"
```

Source: skills/17-DAAF-Contribution-Community-daaf/dot-claude/skills/education-data-query/SKILL.md

Education Data Query
Downloads education datasets from configured mirror sources (parquet or CSV) using priority-ordered fallback, with local Polars filtering. Use when writing Stage 5 fetch scripts, downloading a specific CCD, IPEDS, CRDC, SAIPE, or other education dataset by path, discovering which files are available on a mirror, or retrieving codebook metadata. Load after using education-data-explorer to identify endpoints — this skill handles actual data retrieval, not endpoint discovery.
Download datasets from the Education Data Portal via configured mirror sources (defined in mirrors.yaml). Mirrors are tried in priority order. All filtering is done locally with Polars. The mirror data originates from the Urban Institute Education Data Portal (EDP), which is a curation and standardization layer over original federal data sources: data has been restructured with lowercase variable names, integer-encoded categoricals, and standardized missing value codes (-1, -2, -3).
What This Skill Does
- Download education datasets from configured mirrors
- Handle multiple file formats (parquet, CSV) based on mirror read_strategy
- Apply year, state, and demographic filters locally with Polars
- Discover available files via each mirror's discovery endpoint
Skill Provenance Note: Each education-data-source-* skill includes provenance.skill_last_updated in its frontmatter. Before fetching data, check this date; if it is more than a few months old, the source skill's documentation about column definitions, coded values, and quality patterns may have drifted from the current data. Consider re-running data-ingest to re-verify before relying on stale skill guidance for query construction.
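A staleness check along these lines can be run against the frontmatter date before fetching. The 120-day threshold is an arbitrary assumption for illustration, not something this skill prescribes:

```python
from datetime import date, timedelta

def skill_is_stale(last_updated: str, max_age_days: int = 120) -> bool:
    """True if a provenance date (ISO format, e.g. '2025-01-15') is older
    than max_age_days. The threshold is a judgment call, not a spec."""
    return date.fromisoformat(last_updated) < date.today() - timedelta(days=max_age_days)
```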
Reference File Structure
| File | Purpose | When to Read |
|---|---|---|
| `./references/mirrors.yaml` | Mirror URLs, priority, format, timeouts, metadata config | Understanding mirror configuration |
| `./references/fetch-patterns.md` | Code patterns for mirror-based fetching | Writing Stage 5 fetch scripts |
| `./references/datasets-reference.md` | Known dataset file paths by source | Finding the right file path for a dataset |
| `./references/filters-reference.md` | Complete filter variables | Filtering downloaded data locally |
| | Endpoint path structure reference | Understanding URL/path naming conventions |
Mirror System Overview
Data is fetched by downloading files from mirrors:
```
Fetch Request (dataset, years, filters)
  → Try each mirror in priority order (per mirrors.yaml)
  → Build URL from mirror's url_template + dataset paths
  → Read using mirror's read_strategy (eager_parquet, lazy_csv, etc.)
  → If all mirrors fail: STOP and escalate
  → Save to data/raw/*.parquet
  → CP1 validation (source-agnostic)
```
Mirror Configuration
Mirrors are defined in ./references/mirrors.yaml with priority ordering. Each mirror specifies:
- `url_template` — how to build download URLs
- `read_strategy` — how Polars reads the format (eager_parquet, lazy_csv)
- `discovery` — how to check what files are available

See ./references/mirrors.yaml for the full configuration and instructions on adding new mirrors.
Mirror File Discovery
Before fetching, you can check what files are available using each mirror's discovery endpoint (defined in mirrors.yaml):
```python
# Generic discovery — works with any mirror that supports it
# See fetch-patterns.md for the full discover_mirror_files() function
from fetch_patterns import discover_mirror_files

# Check primary mirror
files = discover_mirror_files(MIRRORS[0])
if files is not None:
    print(f"Available files: {len(files)}")
```
This eliminates guessing — if the file exists in a mirror, use it; if not, fall through to the next.
Decision Trees
"How should I get this data?"
```
What dataset do you need?
├─ Know the exact file path?
│  └─ Use fetch_from_mirrors() with that path → ./references/fetch-patterns.md
├─ Know the source but not the exact filename?
│  └─ Check ./references/datasets-reference.md for known paths
├─ Not sure what's available?
│  └─ Query mirror discovery endpoint to list all files → ./references/fetch-patterns.md
├─ Need a codebook or metadata file?
│  └─ Check codebook column in ./references/datasets-reference.md → get_codebook_url() in ./references/fetch-patterns.md
└─ Dataset not in any mirror?
   └─ STOP and escalate — dataset may need to be added to mirror
```
"Is my dataset a single file or yearly files?"
```
Check datasets-reference.md:
├─ Type = "Single" → One file with all years
│  └─ Use fetch_from_mirrors() → filter years locally
└─ Type = "Yearly" → One file per year
   └─ Use fetch_yearly_from_mirrors() → concatenate results
```
"How do I filter results?"
All filtering is done locally with Polars after download:
```python
# By state
df = df.filter(pl.col("fips") == 6)  # California

# By year
df = df.filter(pl.col("year").is_in([2020, 2021, 2022]))

# By school type
df = df.filter(pl.col("charter") == 1)

# Multiple filters
df = df.filter(
    (pl.col("fips") == 6)
    & (pl.col("charter") == 1)
    & (pl.col("school_level") == 3)
)
```
Dataset Path Structure
All mirrors use the same canonical path. Each mirror appends its own format extension (.parquet, .csv) via its url_template in mirrors.yaml:

```
{source}/{file}
```
| Component | Description | Examples |
|---|---|---|
| `{source}` | Data source | `ccd`, `ipeds`, `crdc`, `saipe`, `scorecard` |
| `{file}` | Dataset file | `districts_saipe`, `schools_ccd_directory` |
Example paths:
- `saipe/districts_saipe` (SAIPE district poverty)
- `ccd/schools_ccd_directory` (CCD school directory)
- `ccd/schools_ccd_enrollment_2022` (CCD enrollment, yearly)
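URL construction from the canonical path is plain template substitution. The template below is a made-up example for illustration; real templates live in mirrors.yaml:

```python
def build_url(url_template: str, dataset_path: str) -> str:
    """Substitute the canonical dataset path into a mirror's url_template."""
    return url_template.format(path=dataset_path)

url = build_url("https://example-mirror.org/edp/{path}.parquet",
                "saipe/districts_saipe")
# → "https://example-mirror.org/edp/saipe/districts_saipe.parquet"
```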
See ./references/datasets-reference.md for the complete file path listing.
Format Handling
Format-specific read behavior is driven by each mirror's read_strategy field (see mirrors.yaml):

eager_parquet

```python
df = pl.read_parquet(url)  # Polars reads HTTP URLs natively
```

lazy_csv

```python
# Always use lazy loading for large files
df = (
    pl.scan_csv(url, infer_schema_length=10000)
    .filter(pl.col("year").is_in(YEARS))
    .filter(pl.col("fips") == STATE_FIPS)
    .collect()
)
```
See ./references/fetch-patterns.md for complete code patterns.
Portal Integer Encoding
CRITICAL: The Portal uses integer codes, not string labels. This affects filtering and interpretation.
Demographic Variable Encodings
| Variable | Integer Values | NOT These Strings |
|---|---|---|
| Race | 1-7, 99 (total) | WH, BL, HI, AS, etc. |
| Sex | 1 (Male), 2 (Female), 3 (Another gender, IPEDS 2022+), 4 (Unknown gender, IPEDS 2022+), 9 (Unknown), 99 (Total) | M, F |
| Grade | -1 to 13, 99 (total) | PK, KG, 01, etc. |
Grade Encoding (SEMANTIC TRAP!)
| Value | Meaning | URL Path Equivalent |
|---|---|---|
| -1 | Pre-K (NOT missing!) | `grade-pk` |
| 0 | Kindergarten | `grade-kg` |
| 1-12 | Grades 1-12 | `grade-1` to `grade-12` |
| 99 | Total | `grade-99` |
```python
# WRONG - filters out Pre-K students!
df = df.filter(pl.col("grade") >= 0)

# RIGHT - Pre-K students have grade = -1
pre_k = df.filter(pl.col("grade") == -1)
total = df.filter(pl.col("grade") == 99)
```
Variable Names Are Lowercase
Portal variable names are lowercase:
- `enrollment`, not `MEMBER`
- `grade`, not `GRADE`
- `fips`, not `FIPS`
See ./references/filters-reference.md for complete encoding tables.
Common FIPS Codes
| Code | State | Code | State | Code | State |
|---|---|---|---|---|---|
| 1 | Alabama | 17 | Illinois | 36 | New York |
| 2 | Alaska | 18 | Indiana | 37 | North Carolina |
| 4 | Arizona | 19 | Iowa | 39 | Ohio |
| 5 | Arkansas | 20 | Kansas | 40 | Oklahoma |
| 6 | California | 21 | Kentucky | 41 | Oregon |
| 8 | Colorado | 22 | Louisiana | 42 | Pennsylvania |
| 9 | Connecticut | 24 | Maryland | 44 | Rhode Island |
| 10 | Delaware | 25 | Massachusetts | 45 | South Carolina |
| 11 | DC | 26 | Michigan | 47 | Tennessee |
| 12 | Florida | 27 | Minnesota | 48 | Texas |
| 13 | Georgia | 29 | Missouri | 49 | Utah |
| 15 | Hawaii | 32 | Nevada | 51 | Virginia |
| 16 | Idaho | 34 | New Jersey | 53 | Washington |
See ./references/filters-reference.md for complete list.
Cross-References
- Discover endpoints: Load the `education-data-explorer` skill to browse available endpoints and variables
- Interpret data: Load the `education-data-context` skill after fetching for variable meanings and caveats
- Deep source understanding: Load `education-data-source-*` skills for comprehensive methodology
Data Source Skills Quick Reference
| Source | Skill | Key Fetch Considerations |
|---|---|---|
| CCD | `education-data-source-ccd` | Use grade-99 for totals; FRPL affected by CEP |
| CRDC | `education-data-source-crdc` | Biennial only; 2015+ for complete coverage; CSV requires for ID cols (see CRDC skill) |
| EDFacts | `education-data-source-edfacts` | Use vars; states not comparable |
| IPEDS | `education-data-source-ipeds` | GRS limited to first-time full-time |
| Scorecard | `education-data-source-scorecard` | High suppression; Title IV recipients only |
| SAIPE | `education-data-source-saipe` | Model estimates; population not enrollment |
| FSA | `education-data-source-fsa` | Federal aid only; 1-3 year lag |
| MEPS | `education-data-source-meps` | Better than FRPL for cross-state |
| PSEO | `education-data-source-pseo` | Experimental; check state coverage |
Topic Index
| Topic | Location |
|---|---|
| Mirror configuration | `./references/mirrors.yaml` |
| Fetch code patterns | `./references/fetch-patterns.md` |
| Dataset file paths | `./references/datasets-reference.md` |
| URL/path naming conventions | This file (Dataset Path Structure) |
| Filter variables | `./references/filters-reference.md` |
| Codebook/metadata URLs | `./references/datasets-reference.md` (codebook column), `./references/fetch-patterns.md` (get_codebook_url) |
| FIPS codes | This file, `./references/filters-reference.md` |
| CCD source details | `education-data-source-ccd` skill |
| CRDC source details | `education-data-source-crdc` skill |
| EDFacts source details | `education-data-source-edfacts` skill |
| IPEDS source details | `education-data-source-ipeds` skill |
| Scorecard source details | `education-data-source-scorecard` skill |
| SAIPE source details | `education-data-source-saipe` skill |
| FSA source details | `education-data-source-fsa` skill |
| MEPS source details | `education-data-source-meps` skill |
| NHGIS source details | `education-data-source-nhgis` skill |