Skillsbench gamma-phase-associator
An overview of the Python package for running the GaMMA earthquake phase association algorithm. The algorithm takes phase pick data and station data as input and, through unsupervised clustering, produces earthquake events with source information such as location, origin time, and magnitude. The skill explains commonly used functions and the expected input/output formats.
```shell
# Clone the full repository
git clone https://github.com/benchflow-ai/skillsbench

# Or copy just this skill into ~/.claude/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/benchflow-ai/skillsbench "$T" && mkdir -p ~/.claude/skills && cp -r "$T/tasks/earthquake-phase-association/environment/skills/gamma-phase-associator" ~/.claude/skills/benchflow-ai-skillsbench-gamma-phase-associator && rm -rf "$T"
```
tasks/earthquake-phase-association/environment/skills/gamma-phase-associator/SKILL.md

GaMMA Associator Library
What is GaMMA?
GaMMA is an earthquake phase association algorithm that treats association as an unsupervised clustering problem. It models the collection of phase picks belonging to an event with a multivariate Gaussian distribution, and uses Expectation-Maximization to assign picks and estimate source parameters, i.e., earthquake location, origin time, and magnitude.
GaMMA is also a Python library implementing the algorithm. The library assumes P- and S-wave picks have already been extracted from the input earthquake traces. We document its core API below.
Zhu, W., McBrearty, I. W., Mousavi, S. M., Ellsworth, W. L., & Beroza, G. C. (2022). Earthquake phase association using a Bayesian Gaussian mixture model. Journal of Geophysical Research: Solid Earth, 127(5).
This skill is derived from the repository https://github.com/AI4EPS/GaMMA
Installing GaMMA
```shell
pip install git+https://github.com/wayneweiqiang/GaMMA.git
```
GaMMA core API
association
Function Signature

```python
def association(picks, stations, config, event_idx0=0, method="BGMM", **kwargs)
```
Purpose
Associates seismic phase picks (P and S waves) to earthquake events using Bayesian or standard Gaussian Mixture Models. It clusters picks based on arrival time and amplitude information, then fits GMMs to estimate earthquake locations, times, and magnitudes.
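A minimal call sketch may help fix ideas. The deferred import is a deliberate choice so the helper can be defined even where GaMMA is not installed; `run_association` is a hypothetical wrapper name, and `picks`, `stations`, and `config` follow the formats described in the sections below:

```python
def run_association(picks, stations, config):
    # Import inside the function so this sketch can be defined without
    # GaMMA installed; the gamma package is required at call time.
    from gamma.utils import association

    events, assignments = association(
        picks, stations, config, method=config.get("method", "BGMM")
    )
    return events, assignments
```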
1. Input Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `picks` | DataFrame | required | Seismic phase pick data |
| `stations` | DataFrame | required | Station metadata with locations |
| `config` | dict | required | Configuration parameters |
| `event_idx0` | int | `0` | Starting event index for numbering |
| `method` | str | `"BGMM"` | `"BGMM"` (Bayesian) or `"GMM"` (standard) |
2. Required DataFrame Columns
picks DataFrame

| Column | Type | Description | Example |
|---|---|---|---|
| `id` | str | Station identifier (must match `stations["id"]`) | |
| `timestamp` | datetime/str | Pick arrival time (ISO format or datetime) | |
| `type` | str | Phase type: `"p"` or `"s"` (lowercase) | `"p"` |
| `prob` | float | Pick probability/weight (0-1) | |
| `amp` | float | Amplitude in m/s (required if `use_amplitude=True`) | |

Notes:
- Timestamps must be in UTC or converted to UTC
- Phase types are forced to lowercase internally
- Picks with `amp == 0` or `amp == -1` are filtered out when `use_amplitude=True`
- The DataFrame index is used to track pick identities in the output
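To make the expected shape concrete, here is a minimal sketch of building a picks DataFrame with the columns above. The station IDs, times, and amplitudes are fabricated for illustration; real picks normally come from a phase picker (e.g., PhaseNet):

```python
import pandas as pd

# Fabricated picks for two hypothetical stations (illustrative values only)
picks = pd.DataFrame(
    [
        {"id": "NC.STA1", "timestamp": "2023-01-01T00:00:01.500", "type": "p", "prob": 0.95, "amp": 1.2e-6},
        {"id": "NC.STA1", "timestamp": "2023-01-01T00:00:04.200", "type": "s", "prob": 0.90, "amp": 3.4e-6},
        {"id": "NC.STA2", "timestamp": "2023-01-01T00:00:02.100", "type": "p", "prob": 0.88, "amp": 9.0e-7},
    ]
)
# Parse the ISO strings into datetimes (times are assumed to be UTC)
picks["timestamp"] = pd.to_datetime(picks["timestamp"])
```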
stations DataFrame

| Column | Type | Description | Example |
|---|---|---|---|
| `id` | str | Station identifier | |
| `x(km)` | float | X coordinate in km (projected) | |
| `y(km)` | float | Y coordinate in km (projected) | |
| `z(km)` | float | Z coordinate (elevation, typically negative) | |

Notes:
- Coordinates should be in a projected local coordinate system (e.g., you can use the `pyproj` package)
- The `id` column must match the `id` values in the picks DataFrame (e.g., `network.station` or `network.station.location.channel`)
- Stations are grouped by unique `id`; identical attributes are collapsed to a single value, and conflicting metadata are preserved as a sorted list
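The notes above recommend a proper projection library such as `pyproj`. For a quick sketch of what the conversion does, here is a rough equirectangular approximation using only the standard library; it is adequate for small study regions, and `latlon_to_km` is a hypothetical helper, not part of GaMMA:

```python
import math

def latlon_to_km(lat, lon, lat0, lon0):
    """Rough local projection to x/y in km around reference point (lat0, lon0).

    Equirectangular approximation: fine for small regions; use a real
    projection library (e.g., pyproj) for production work.
    """
    km_per_deg = 111.19  # approximate km per degree of latitude
    x = (lon - lon0) * km_per_deg * math.cos(math.radians(lat0))
    y = (lat - lat0) * km_per_deg
    return x, y

# Hypothetical station ~0.1 degrees east of the reference point
x, y = latlon_to_km(35.0, 120.1, 35.0, 120.0)
```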
3. Config Dictionary Keys
Required Keys
| Key | Type | Description | Example |
|---|---|---|---|
| `dims` | list[str] | Location dimensions to solve for | `["x(km)", "y(km)", "z(km)"]` |
| `min_picks_per_eq` | int | Minimum picks required per earthquake | |
| `max_sigma11` | float | Maximum allowed time residual in seconds | |
| `use_amplitude` | bool | Whether to use amplitude in clustering | |
| `bfgs_bounds` | tuple | Bounds for BFGS optimization | |
| `oversample_factor` | float | Factor for oversampling initial GMM components | differs for `BGMM` vs `GMM` |
Notes on `dims`:
- Options: `["x(km)", "y(km)", "z(km)"]`, `["x(km)", "y(km)"]`, or `["x(km)"]`

Notes on `bfgs_bounds`:
- Format: `((x_min, x_max), (y_min, y_max), (z_min, z_max), (None, None))`
- The last tuple is for origin time (unbounded)
Velocity Model Keys
| Key | Type | Default | Description |
|---|---|---|---|
| `vel` | dict | | Uniform velocity model (km/s) |
| `eikonal` | dict/None | `None` | 1D velocity model for travel times |
DBSCAN Pre-clustering Keys (Optional)
| Key | Type | Default | Description |
|---|---|---|---|
| `use_dbscan` | bool | | Enable DBSCAN pre-clustering |
| `dbscan_eps` | float | | Max time between picks (seconds) |
| `dbscan_min_samples` | int | | Min samples in DBSCAN neighborhood |
| `dbscan_min_cluster_size` | int | | Min cluster size for hierarchical splitting |
| `dbscan_max_time_space_ratio` | float | | Max time/space ratio for splitting |

A sensible `dbscan_eps` can be obtained from the `estimate_eps` function (documented below).
Filtering Keys (Optional)
| Key | Type | Default | Description |
|---|---|---|---|
| `max_sigma22` | float | 1.0 | Max phase amplitude residual in log scale (required if `use_amplitude=True`) |
| `max_sigma12` | float | 1.0 | Max covariance |
| `max_sigma11` | float | 2.0 | Max phase time residual (s) |
| `min_p_picks_per_eq` | int | 0 | Min P-phase picks per event |
| `min_s_picks_per_eq` | int | 0 | Min S-phase picks per event |
| `min_stations` | int | 5 | Min unique stations per event |
Other Optional Keys
| Key | Type | Default | Description |
|---|---|---|---|
| `covariance_prior` | list[float] | auto | Prior for the covariance matrix |
| `ncpu` | int | auto | Number of CPUs for parallel processing |
4. Return Values
Returns a tuple `(events, assignments)`.

events (list[dict])

List of dictionaries, each representing an associated earthquake:
| Key | Type | Description |
|---|---|---|
| `time` | str | Origin time (ISO 8601 with milliseconds) |
| `magnitude` | float | Estimated magnitude (999 if `use_amplitude=False`) |
| `sigma_time` | float | Time uncertainty (seconds) |
| `sigma_amp` | float | Amplitude uncertainty (log10 scale) |
| `cov_time_amp` | float | Time-amplitude covariance |
| `gamma_score` | float | Association quality score |
| `num_picks` | int | Total picks assigned |
| `num_p_picks` | int | P-phase picks assigned |
| `num_s_picks` | int | S-phase picks assigned |
| `event_index` | int | Unique event index |
| `x(km)` | float | X coordinate of hypocenter |
| `y(km)` | float | Y coordinate of hypocenter |
| `z(km)` | float | Z coordinate (depth) |
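A short sketch of post-processing the events list, assuming the dict keys match the table above. The two event dicts are mocked with fabricated values, not real GaMMA output, and the pick-count threshold is arbitrary:

```python
# Mocked events using the keys documented above (fabricated values)
events = [
    {"event_index": 0, "time": "2023-01-01T00:00:00.000", "num_picks": 12,
     "gamma_score": 18.7, "x(km)": 10.2, "y(km)": 4.1, "z(km)": 8.0, "magnitude": 1.3},
    {"event_index": 1, "time": "2023-01-01T00:05:00.000", "num_picks": 4,
     "gamma_score": 3.2, "x(km)": 55.0, "y(km)": 40.0, "z(km)": 12.0, "magnitude": 999},
]

# Keep well-constrained events (threshold is illustrative)
good = [e for e in events if e["num_picks"] >= 6]

# Treat the 999 sentinel as "no magnitude" rather than a real value
magnitudes = [None if e["magnitude"] == 999 else e["magnitude"] for e in good]
```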
assignments (list[tuple])

List of tuples `(pick_index, event_index, gamma_score)`:
- `pick_index`: Index in the original `picks` DataFrame
- `event_index`: Associated event index
- `gamma_score`: Probability/confidence of assignment
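Since assignments reference picks by DataFrame index, a common follow-up step is grouping them per event so each event's picks can be looked up with `picks.loc`. A sketch with mocked assignment tuples (indices and scores are fabricated):

```python
from collections import defaultdict

# Mocked assignments in the documented (pick_index, event_index, gamma_score) shape
assignments = [(0, 0, 0.99), (1, 0, 0.97), (2, 1, 0.88), (5, 0, 0.40)]

# Group (pick_index, gamma_score) pairs per event; the pick indices can then
# be used with picks.loc to retrieve each event's picks.
picks_by_event = defaultdict(list)
for pick_index, event_index, gamma_score in assignments:
    picks_by_event[event_index].append((pick_index, gamma_score))
```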
estimate_eps
Function Signature

```python
def estimate_eps(stations, vp, sigma=2.0)
```
Purpose
Estimates an appropriate DBSCAN epsilon (eps) parameter for clustering seismic phase picks based on station spacing. The eps parameter controls the maximum time distance between picks that should be considered neighbors in the DBSCAN clustering algorithm.
1. Input Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `stations` | DataFrame | required | Station metadata with 3D coordinates |
| `vp` | float | required | P-wave velocity in km/s |
| `sigma` | float | `2.0` | Number of standard deviations above the mean |
2. Required DataFrame Columns
stations DataFrame

| Column | Type | Description | Example |
|---|---|---|---|
| `x(km)` | float | X coordinate in km | |
| `y(km)` | float | Y coordinate in km | |
| `z(km)` | float | Z coordinate in km | |
3. Return Value
| Type | Description |
|---|---|
| float | Epsilon value in seconds for use with DBSCAN clustering |
4. Example Usage
```python
from gamma.utils import estimate_eps

# Assuming stations DataFrame is already prepared with x(km), y(km), z(km) columns
vp = 6.0  # P-wave velocity in km/s

# Estimate eps automatically based on station spacing
eps = estimate_eps(stations, vp, sigma=2.0)

# Use in config
config = {
    "use_dbscan": True,
    "dbscan_eps": eps,  # or use estimate_eps(stations, config["vel"]["p"])
    "dbscan_min_samples": 3,
    # ... other config options
}
```
Typical Usage Pattern
```python
from gamma.utils import association, estimate_eps

# Automatic eps estimation
config["dbscan_eps"] = estimate_eps(stations, config["vel"]["p"])

# Or manual override (common in practice)
config["dbscan_eps"] = 15  # seconds
```
5. Practical Notes
- In example notebooks, the function is often commented out in favor of hardcoded values (10-15 seconds)
- Practitioners may prefer manual tuning for specific networks/regions
- Typical output values range from 10-20 seconds depending on station density
- Useful when optimal eps is unknown or when working with new networks
6. Related Configuration
The output is typically used with these config parameters:
```python
config["dbscan_eps"] = estimate_eps(stations, config["vel"]["p"])
config["dbscan_min_samples"] = 3
config["dbscan_min_cluster_size"] = 500
config["dbscan_max_time_space_ratio"] = 10
```