This example uses the public USGS earthquake GeoJSON feed for magnitude 2.5+ earthquakes in the past 30 days:
https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/2.5_month.geojson
It shows how to:

- fetch a large JSON payload once
- stash the raw response
- transform it with jq
- stash the reduced dataset
- create two different views from the stashed data without re-running curl
```bash
curl -s \
  'https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/2.5_month.geojson' \
  | stash -a source=usgs-earthquakes -a stage=raw
```

This creates a stash entry containing the full GeoJSON response.
If you want to confirm that it was saved:

```bash
stash ls -l -a +source -a +stage
```

Next, reduce the raw feed to just the fields the views need, and stash the result:

```bash
stash cat @1 \
  | jq '
    {
      generated: .metadata.generated,
      count: .metadata.count,
      earthquakes: [
        .features[]
        | {
            time: .properties.time,
            magnitude: .properties.mag,
            place: .properties.place,
            tsunami: .properties.tsunami,
            url: .properties.url
          }
      ]
    }
  ' \
  | stash -a source=usgs-earthquakes -a stage=reduced
```

Now you have two related entries:
- @1 is the reduced dataset
- @2 is the raw USGS feed
The expensive network fetch happened only once.
```bash
stash cat @1 \
  | jq -r '
    .earthquakes[]
    | [
        (.magnitude | tostring),
        (.time / 1000 | strftime("%Y-%m-%d %H:%M:%S")),
        .tsunami,
        .place
      ]
    | @tsv
  '
```

This produces a TSV table with:
- magnitude
- UTC time
- tsunami flag
- place
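To preview what that jq program emits without hitting the live feed, you can run the same transform on a small inline sample. The event below is invented for illustration; only the field names match the reduced shape above:

```bash
# Run the TSV transform on a hand-written sample document.
jq -r '
  .earthquakes[]
  | [
      (.magnitude | tostring),
      (.time / 1000 | strftime("%Y-%m-%d %H:%M:%S")),
      .tsunami,
      .place
    ]
  | @tsv
' <<'EOF'
{
  "earthquakes": [
    {
      "time": 1700000000000,
      "magnitude": 4.2,
      "tsunami": 0,
      "place": "10 km SSE of Adak, Alaska"
    }
  ]
}
EOF
```

This prints a single tab-separated row: `4.2`, `2023-11-14 22:13:20`, `0`, and the place. Note that jq's `strftime` formats the epoch seconds in UTC, which is why the column is labelled `time_utc` below.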
If you want headers too:

```bash
{
  printf 'mag\ttime_utc\ttsunami\tplace\n'
  stash cat @1 \
    | jq -r '
      .earthquakes[]
      | [
          (.magnitude | tostring),
          (.time / 1000 | strftime("%Y-%m-%d %H:%M:%S")),
          .tsunami,
          .place
        ]
      | @tsv
    '
} | column -t -s $'\t'
```

The same stashed entry can also drive a filtered view:

```bash
{
  printf 'mag\ttime_utc\ttsunami\tplace\n'
  stash cat @1 \
    | jq -r '
      .earthquakes[]
      | select(.place | test("Alaska"; "i"))
      | [
          (.magnitude | tostring),
          (.time / 1000 | strftime("%Y-%m-%d %H:%M:%S")),
          .tsunami,
          .place
        ]
      | @tsv
    '
} | column -t -s $'\t'
```

This gives you a second view, filtered to earthquakes whose place mentions Alaska, without touching the network again.
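The `test("Alaska"; "i")` filter is jq's regex match: the second argument is a flags string, and `"i"` makes the match case-insensitive, so a place reported as `ALASKA` or `alaska` is caught too. A quick standalone check:

```bash
# test/2 takes a regex and a flags string; "i" = case-insensitive.
jq -n '"Rat Islands, ALASKA" | test("Alaska"; "i")'
# prints: true
```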
Fetch the feed again and stash a new raw snapshot:

```bash
curl -s \
  'https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/2.5_month.geojson' \
  | stash -a source=usgs-earthquakes -a stage=raw
```

Transform that new raw snapshot into the same reduced shape and stash it too:
```bash
stash cat @1 \
  | jq '
    {
      generated: .metadata.generated,
      count: .metadata.count,
      earthquakes: [
        .features[]
        | {
            time: .properties.time,
            magnitude: .properties.mag,
            place: .properties.place,
            tsunami: .properties.tsunami,
            url: .properties.url
          }
      ]
    }
  ' \
  | stash -a source=usgs-earthquakes -a stage=reduced
```

At this point:

- @1 is the new reduced snapshot
- @2 is the new raw snapshot
- @3 is the previous reduced snapshot
- @4 is the previous raw snapshot
If you have stashed other data in the meantime, use stash ls -l to find the ids of the entries
you are interested in, and use them in place of @1, @3.
```bash
# example
stash ls -l -a +source -a +stage
g5xa4znm 412.0K Tue Apr 1 13:35:40 2026 +0300 01kn4z3q4vv5crxjdkg5xa4znm usgs-earthquakes reduced
4p0rgpda 1.3M Tue Apr 1 13:35:00 2026 +0300 01kn4z358zf1fme79d4p0rgpda usgs-earthquakes raw
w6sz0cbw 411.3K Tue Apr 1 13:12:10 2026 +0300 01kn4y4m147f06r4few6sz0cbw usgs-earthquakes reduced
6ya0x77f 1.3M Tue Apr 1 13:11:21 2026 +0300 01kn4y3xjtj40kksg16ya0x77f usgs-earthquakes raw
```

Now compare the two reduced snapshots and show only earthquakes that are new in the latest fetch:
```bash
{
  printf 'mag\ttime_utc\ttsunami\tplace\n'
  jq -nr \
    --slurpfile new <(stash cat @1) \
    --slurpfile old <(stash cat @3) '
    ($old[0].earthquakes | map(.url) | INDEX(.)) as $seen
    | $new[0].earthquakes
    | map(select($seen[.url] | not))
    | .[]
    | [
        (.magnitude | tostring),
        (.time / 1000 | strftime("%Y-%m-%d %H:%M:%S")),
        .tsunami,
        .place
      ]
    | @tsv
  '
} | column -t -s $'\t'
```

This works because the reduced snapshots keep the USGS event url, which is a stable identifier for each earthquake record.
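The diff logic itself can be checked in isolation: `INDEX` (a jq 1.6+ builtin) turns the old snapshot's url list into an object keyed by url, and `select($seen[.url] | not)` keeps only events whose url is absent from it. Here is a minimal sketch with two invented snapshots, passed via `--argjson` instead of `--slurpfile` (so no `[0]` indexing is needed):

```bash
# $old has one event; $new has that event plus one new one.
jq -nr \
  --argjson old '{"earthquakes":[{"url":"https://example.com/ev1","place":"A"}]}' \
  --argjson new '{"earthquakes":[{"url":"https://example.com/ev1","place":"A"},
                                 {"url":"https://example.com/ev2","place":"B"}]}' '
  ($old.earthquakes | map(.url) | INDEX(.)) as $seen
  | $new.earthquakes
  | map(select($seen[.url] | not))
  | .[].url
'
# prints: https://example.com/ev2
```

Only the event that appears in `$new` but not in `$old` survives, which is exactly the "new in the latest fetch" behaviour above.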