5 changes: 4 additions & 1 deletion docs/menu.md
@@ -7,7 +7,10 @@
 * ESI
     * [Overview](services/esi/overview.md)
     * [Rate Limiting](services/esi/rate-limiting.md)
-    * [Cursor-Based Pagination](services/esi/pagination/cursor-based.md)
+    * Pagination
+        * [Cursor-Based Pagination](services/esi/pagination/cursor-based.md)
+        * [X-Pages Pagination](services/esi/pagination/x-pages.md)
+        * [From-ID Pagination](services/esi/pagination/from-id.md)
     * [Best Practices](services/esi/best-practices.md)
     * [Endpoints](services/esi/endpoints.md)
 * services/*/index.md
50 changes: 50 additions & 0 deletions docs/services/esi/pagination/from-id.md
@@ -0,0 +1,50 @@
# From-ID Pagination

From-ID pagination uses record IDs to walk backwards through a dataset, from the newest records towards the oldest.
Because this method only moves backwards in time, it is well suited to collecting historical data.

## How pagination works

1. **Initial request**: Make a request without the `from_id` parameter to get the most recent records.
2. **Backward navigation**: Use the `transaction_id` of the last record as `from_id` in your next request. The response will always include that `from_id` record.
3. **Stop condition**: If the response contains more than just the `from_id` record, continue. If it only contains that one record, stop.

### Request parameters

- `from_id`: The ID of a record. The response will contain this record and records that are older.

When you omit the `from_id` parameter, you get the most recent records.

### Data ordering

Records returned by listing routes that support from-ID pagination are ordered by time, with the most recent records first.
Using `from_id` returns that record together with records that are older.

## Initial Data Collection

Start by making your first request without the `from_id` parameter:

```http
GET /<listing-route>
```

You'll receive a response like this:

```json
[
  {"transaction_id": 100, "name": "A", "created": "2024-01-15T09:15:00Z"},
  {"transaction_id": 99, "name": "B", "created": "2024-01-15T08:30:00Z"},
  {"transaction_id": 98, "name": "C", "created": "2024-01-15T07:45:00Z"}
]
```

**What to do:**

1. Store the records you retrieved. On follow-up requests, skip the leading `from_id` record, since you already stored it.
2. If you encounter a record you already know, stop.
3. When you reach the end of the batch, use the last `transaction_id` as `from_id` in your next request.
4. The response will always contain that `from_id` record (98 in this example). If it contains only that one record, stop; otherwise continue at step 1.
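
The steps above can be sketched as a small helper. Here `fetch` stands in for whatever function performs the actual HTTP request, and `known_ids` is the set of `transaction_id` values you have already stored; both names are illustrative, not part of the ESI API:

```python
def collect_until_known(fetch, known_ids):
    """Walk backwards with from_id until a known record or the end is reached.

    fetch(from_id) returns a list of records, newest first; it is a
    stand-in for the real HTTP call. Records are dicts keyed by
    'transaction_id'.
    """
    collected = []
    from_id = None
    while True:
        records = fetch(from_id)
        # Follow-up responses repeat the from_id record; skip it.
        if from_id is not None:
            records = records[1:]
        for record in records:
            if record['transaction_id'] in known_ids:
                return collected  # reached data we already have
            collected.append(record)
        if not records:
            return collected  # only the from_id record came back: the end
        from_id = records[-1]['transaction_id']
```

This keeps the paging logic separate from the transport, which also makes the stop conditions easy to test with a fake `fetch`.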

## Example

--8<-- "snippets/examples/pagination-from-id.md"
39 changes: 39 additions & 0 deletions docs/services/esi/pagination/x-pages.md
@@ -0,0 +1,39 @@
# X-Pages Pagination

X-Pages pagination uses page numbers to navigate through datasets, with pages starting at 1.
This is a traditional pagination approach where you request specific page numbers to browse through the data.

## How pagination works

1. **Request a page**: Use the `page` request parameter to specify which page you want to retrieve.
2. **Check total pages**: The `X-Pages` response header tells you how many pages are available in total.
3. **Navigate through pages**: Request pages sequentially (1, 2, 3, etc.) until you've retrieved all available data.

### Request parameters

- `page`: The page number to retrieve. Pages start at 1.

### Response headers

- `X-Pages`: The total number of pages available in the dataset.

## Caching considerations

An important caveat with X-Pages pagination is caching behavior.
If the cache expires between fetching two different pages, you may see duplicated items in your results.

This happens because:

- Page 1 is served from a cache snapshot taken at time T.
- By the time you fetch page 2, that snapshot has expired and the data has been regenerated.
- Under the new snapshot, records may have shifted between pages, so page 2 can contain records you already retrieved from page 1.

### Solution: don't fetch close to expiry

Some implementations solve this by checking how close page 1 is to the cache expiration time.
If page 1 is within a few seconds of expiring, they first wait for the cache to refresh.
Only then do they fetch the entire set of pages.
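
One way to implement this check, assuming the service sets the standard `Expires` and `Date` HTTP response headers (whether ESI exposes cache expiry this way is an assumption here), is a minimal sketch:

```python
import time
from email.utils import parsedate_to_datetime

def seconds_until_expiry(headers):
    """Seconds until the cached response expires, from standard HTTP headers.

    Computed as Expires minus Date; returns 0.0 if either header is missing.
    """
    try:
        expires = parsedate_to_datetime(headers['Expires'])
        date = parsedate_to_datetime(headers['Date'])
    except KeyError:
        return 0.0
    return max((expires - date).total_seconds(), 0.0)

def wait_out_near_expiry(headers, threshold=5.0):
    """If page 1's cache is about to expire, sleep until it has refreshed.

    Returns True if we waited, False if the cache was fresh enough to
    proceed with fetching the remaining pages immediately.
    """
    remaining = seconds_until_expiry(headers)
    if 0 < remaining < threshold:
        time.sleep(remaining + 1.0)  # let the cache roll over first
        return True
    return False
```

After calling `wait_out_near_expiry` with page 1's response headers, all pages can be fetched against the same cache snapshot, avoiding the duplicate-item problem described above.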

## Example

--8<-- "snippets/examples/pagination-x-pages.md"
2 changes: 1 addition & 1 deletion snippets/examples/pagination-cursor.py
@@ -68,7 +68,7 @@ def monitor_new_data(url, headers, after_token):
     return new_records
 
 if __name__ == "__main__":
-    url = "https://esi.evetech.net/listing"  # ... (replace with actual route)
+    url = "https://esi.evetech.net/..."  # ... (replace with actual route)
     headers = {
         "User-Agent": "ESI-Example/1.0",
         "X-Compatibility-Date": "2025-09-30",
53 changes: 53 additions & 0 deletions snippets/examples/pagination-from-id.py
@@ -0,0 +1,53 @@
import requests
import time

def fetch_records(url, headers, from_id=None):
    params = {}
    if from_id is not None:
        params['from_id'] = from_id

    response = requests.get(url, params=params, headers=headers)
    response.raise_for_status()
    return response.json()

def collect_current_data(url, headers):
    all_records = []
    from_id = None

    while True:
        print(f"Fetching records from_id={from_id}...")
        records = fetch_records(url, headers, from_id=from_id)

        # If the response is empty, there are no records in the dataset.
        if not records:
            print("  Empty dataset")
            break

        # The response always contains the from_id record.
        # If it only contains that one record, we've reached the end.
        if from_id is not None and len(records) == 1:
            print("  No more records")
            break

        # Skip the leading from_id record on follow-up requests;
        # it was already stored in the previous iteration.
        if from_id is not None:
            records = records[1:]
        all_records.extend(records)
        print(f"  Found {len(records)} records; total {len(all_records)} records")

        # Use the last record's ID for the next request (go backwards in time).
        from_id = records[-1]['transaction_id']
        time.sleep(0.1)  # Throttle requests to avoid rate limiting.

    print(f"Total: {len(all_records)} records")
    return all_records

if __name__ == "__main__":
    url = "https://esi.evetech.net/..."  # ... (replace with actual route)
    headers = {
        "User-Agent": "ESI-Example/1.0",
        "X-Compatibility-Date": "2025-09-30",
        "Authorization": "Bearer <your-token>",
    }

    all_data = collect_current_data(url, headers)
    # ... (do something with all_data)

41 changes: 41 additions & 0 deletions snippets/examples/pagination-x-pages.py
@@ -0,0 +1,41 @@
import requests
import time

def fetch_page(url, headers, page=1):
    params = {
        'page': page,
    }
    response = requests.get(url, params=params, headers=headers)
    response.raise_for_status()
    return response.json(), response.headers

def collect_current_data(url, headers):
    all_records = []

    # Fetch page 1 to get the total number of pages.
    records, page1_headers = fetch_page(url, headers, page=1)
    total_pages = int(page1_headers.get('X-Pages', 1))
    all_records.extend(records)

    # Fetch the remaining pages.
    for page in range(2, total_pages + 1):
        print(f"Fetching page {page}/{total_pages}...")
        records, _ = fetch_page(url, headers, page=page)

        all_records.extend(records)

        print(f"  Found {len(records)} records; total {len(all_records)} records")
        time.sleep(0.1)  # Throttle requests to avoid rate limiting.

    print(f"Total: {len(all_records)} records across {total_pages} pages")
    return all_records

if __name__ == "__main__":
    url = "https://esi.evetech.net/..."  # ... (replace with actual route)
    headers = {
        "User-Agent": "ESI-Example/1.0",
        "X-Compatibility-Date": "2025-09-30",
    }

    all_data = collect_current_data(url, headers)
    # ... (do something with all_data)