Merged · 23 commits

- `31ca812` update documentation (raymondcen, Apr 25, 2026)
- `6abb551` removed table (raymondcen, Apr 25, 2026)
- `5366659` Added CONTRIBUTING.md and archeticture to README (raymondcen, Apr 25, 2026)
- `e19b9c0` update README.md (raymondcen, Apr 26, 2026)
- `9c1c4f7` fixed github username typo, added tessract note, removed duplicate te… (raymondcen, Apr 26, 2026)
- `de233ca` updated first line (raymondcen, Apr 26, 2026)
- `ad5d220` added more accuracte wording more specific to the project (raymondcen, Apr 26, 2026)
- `7ae06bb` removed "trained LLM" to "trained XGBoost and pretrained LLM" (raymondcen, Apr 26, 2026)
- `bf4f55e` removed team table, used GItHub profiles instead (raymondcen, Apr 26, 2026)
- `483061b` reformatted to include roles and github pages (raymondcen, Apr 26, 2026)
- `32b6dd9` update README for project landing page (SeanClay10, Apr 26, 2026)
- `ee414d1` feat: add training curve graph (SeanClay10, Apr 26, 2026)
- `4d9ee49` feat: add new architecture document (SeanClay10, Apr 26, 2026)
- `7933e73` fix: add colors to architecture doc (SeanClay10, Apr 26, 2026)
- `c3f70c1` fix: updated installation instructions (bradleyrule, Apr 27, 2026)
- `c597016` fix: add pipeline demo to README (SeanClay10, Apr 27, 2026)
- `c0f6cbe` Merge branch 'landing-page' of https://github.com/marknovak/FracFeedE… (SeanClay10, Apr 27, 2026)
- `6879879` fix: minor wording fix (SeanClay10, Apr 27, 2026)
- `0b650c7` fix: README image updates (SeanClay10, Apr 27, 2026)
- `805238d` fix: image size update (SeanClay10, Apr 27, 2026)
- `72db306` fix: formatting fix (SeanClay10, Apr 27, 2026)
- `0cdbabd` fix: captions (SeanClay10, Apr 27, 2026)
- `1efe7da` fix: reorder README sections (SeanClay10, Apr 27, 2026)
230 changes: 212 additions & 18 deletions README.md
# FracFeedExtractor — LLMs for the Fraction of Feeding Predators

**An automated pipeline that reads ecological literature and extracts predator feeding-rate data — turning hundreds of PDFs into a structured, analysis-ready database.**

![Python Version](https://img.shields.io/badge/python-3.10%2B-blue?style=flat-square)
![Build Status](https://img.shields.io/badge/build-passing-brightgreen?style=flat-square)
![License](https://img.shields.io/badge/license-pending-lightgrey?style=flat-square)
[![GitHub Issues](https://img.shields.io/github/issues/NovakLabOSU/FracFeedExtractor?style=flat-square)](https://github.com/NovakLabOSU/FracFeedExtractor/issues)

*2025–2026 Oregon State University Senior Capstone Project, in collaboration with Mark Novak.*

[**→ Try It Yourself**](#get-started)

---

<p align="center">
<img src="assets/fraction-feeding-preds.jfif" width="50%" alt="Predator diet surveys form the foundation for estimating the fraction of feeding individuals across species."/>
</p>
<p align="center"><em>Predator diet surveys form the foundation for estimating the fraction of feeding individuals across species.</em></p>

## Project Description
This project contributes to validating a novel metric of predator-prey interaction, the **fraction of feeding individuals**, that has the potential to inform ecosystem-based resource management and ecological theory at scale. Given a folder of PDFs from the ecological literature, our pipeline screens each paper with a trained XGBoost classifier, routes relevant papers to a locally-run LLM for structured data extraction, and exports JSON with classification confidence and extraction provenance attached to every record, overcoming the data-harvesting bottleneck that has hindered validation of this metric.

---

## What is the Fraction of Feeding Individuals?

The **fraction of feeding individuals** is defined as the proportion of predators found to have non-empty stomachs at the time of sampling. This is a quantity that can be obtained directly from routine predator diet surveys. Research from [Mark Novak's lab at Oregon State University](https://github.com/NovakLabOSU) has established that this metric is analytically linked to a species' metabolic demand, body size, temperature, mortality rate, extinction susceptibility, biological control effectiveness, and population resilience to perturbation, making it a powerful and underutilized parameter for ecosystem-based resource management.

Despite its potential, the metric is rarely used in practice. The underlying data exists across more than a century of published predator diet surveys, but harvesting it by hand from the primary literature is prohibitively slow at the scale required for meaningful cross-species analysis. FracFeedExtractor was built to solve that bottleneck: given a collection of PDFs, it automatically identifies which papers contain usable diet survey data and extracts the key numbers and covariates needed to compute the fraction of feeding individuals.
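
The metric itself is a simple proportion. As a minimal illustration (the function names and the normal-approximation confidence interval below are our own, not part of the pipeline's API):

```python
import math

def fraction_feeding(non_empty: int, empty: int) -> float:
    """Proportion of sampled predators with non-empty stomachs."""
    total = non_empty + empty
    if total == 0:
        raise ValueError("survey contains no sampled individuals")
    return non_empty / total

def wald_ci(p: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Rough normal-approximation 95% CI for the estimated fraction."""
    se = math.sqrt(p * (1 - p) / n)
    return max(0.0, p - z * se), min(1.0, p + z * se)

# Example survey: 37 of 50 sampled stomachs contained prey.
p = fraction_feeding(non_empty=37, empty=13)  # 0.74
low, high = wald_ci(p, n=50)
```

The value of the pipeline is not this arithmetic but obtaining the counts at literature scale.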

---

## Key Features

- **PDF Classification** — A trained XGBoost classifier identifies which scientific publications contain useful predator diet survey data, filtering out irrelevant papers before they reach the LLM.
- **Structured Data Extraction** — Automatically parses empty and non-empty stomach counts and key covariates (predator identity, survey location, survey year, and more) from tabular and narrative text.
- **Batch Processing** — Accepts a single PDF or an entire folder of PDFs in one command.
- **Provenance & Uncertainty Reporting** — Every result includes the classifier confidence score and an extraction provenance descriptor identifying the source sentence or table for each field, making downstream QA straightforward.
- **Locally-Run LLM** — The extraction model runs entirely on-device via [Ollama](https://ollama.com). Unpublished manuscripts and proprietary datasets never leave the researcher's environment.
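
To make the provenance and uncertainty reporting concrete, a single extraction record might look like the sketch below. The field names here are illustrative only; consult the pipeline's actual JSON output in `data/results/` for the real schema.

```python
import json

# Hypothetical record -- field names are illustrative, not the real schema.
record = {
    "paper": "smith_2003_cod_diets.pdf",
    "classifier_confidence": 0.93,   # XGBoost score for the "useful" class
    "predator": "Gadus morhua",
    "empty_stomachs": 13,
    "non_empty_stomachs": 37,
    "survey_location": "North Sea",
    "survey_year": 2001,
    "provenance": {
        "empty_stomachs": "Table 2, row 'G. morhua'",
        "non_empty_stomachs": "Table 2, row 'G. morhua'",
    },
}
print(json.dumps(record, indent=2))
```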

---

## Motivation
Predator-prey interactions are central to ecosystem stability, yet predator feeding rates are rarely used in practice because the data required to estimate them are difficult to obtain at scale. To validate the fraction of feeding individuals metric for mainstream resource management and ecological theory, a scalable method is needed to harvest the untapped data that already exists in the vast ecological literature, accumulated over more than a century of field surveys conducted across the globe.

We trained an XGBoost classifier on the [FracFeed global database](https://github.com/marknovak/FracFeed_DB), a hand-annotated collection of predator diet surveys spanning 135 years and multiple continents, to recognize relevant publications so the LLM only processes papers likely to yield usable data. An LLM running locally via Ollama then extracts the numbers of empty and non-empty stomachs and key covariates from each relevant paper. The resulting pipeline enables the generation of a comprehensive database for subsequent analyses and applications.

---


## System Architecture

Our two-stage pipeline combines a lightweight classifier with a locally-run LLM to minimize cost and runtime at scale. The classifier acts as a gate — only papers it scores as useful proceed to the more expensive extraction step.

<p align="center">
<img src="assets/architecture.svg" width="100%" alt="Architecture diagram showing the FracFeedExtractor pipeline: PDF input flows through text extraction, cleaning, XGBoost classification, and LLM extraction to produce structured JSON and CSV output"/>
</p>

<p align="center"><em>Five-stage pipeline architecture. PDF files are preprocessed, filtered, and classified before useful papers proceed to LLM data extraction and structured output.</em></p>

The pipeline consists of the following components:

1. **PDF Text Extraction** — PyMuPDF parses each PDF; Tesseract OCR handles scanned documents.
2. **Text Cleaning & Section Filtering** — References, captions, and irrelevant paragraphs are stripped to reduce noise before classification.
3. **XGBoost Classifier** — TF-IDF features feed a trained XGBoost model that scores each paper as useful or not useful with a confidence score.
4. **LLM Extraction** — Relevant papers are passed to a locally-run LLM (via Ollama) with a structured prompt, returning a `PredatorDietMetrics` JSON object containing stomach counts, predator identity, survey location, and survey year.
5. **Output** — Per-paper JSON files and a pipeline summary CSV are written to `data/results/`.
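
The classify-then-extract gate can be sketched as follows. The helper functions are stand-ins so the example runs on its own; the real stages use PyMuPDF/Tesseract, a TF-IDF + XGBoost classifier, and a local LLM served by Ollama, and the threshold mirrors the CLI's `--confidence-threshold` flag.

```python
CONFIDENCE_THRESHOLD = 0.70

def classifier_score(text: str) -> float:
    # Stand-in for the TF-IDF + XGBoost model.
    return 0.95 if "stomach" in text else 0.10

def llm_extract(text: str) -> dict:
    # Stand-in for the structured LLM extraction via Ollama.
    return {"empty_stomachs": 13, "non_empty_stomachs": 37}

def run_pipeline(papers: dict[str, str]) -> list[dict]:
    """papers maps filename -> cleaned text (stages 1-2 already applied)."""
    results = []
    for name, text in papers.items():
        score = classifier_score(text)              # stage 3: classify
        if score < CONFIDENCE_THRESHOLD:            # gate: skip the LLM
            results.append({"paper": name, "useful": False, "score": score})
            continue
        results.append({"paper": name, "useful": True, "score": score,
                        "metrics": llm_extract(text)})  # stage 4: extract
    return results

out = run_pipeline({"diet_survey.pdf": "37 of 50 stomach samples ...",
                    "unrelated.pdf": "forest canopy height ..."})
```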

---

## Pipeline Demo

Below is a condensed view of a typical pipeline run on a folder of PDFs. The classifier scores each paper; papers judged useful proceed to LLM extraction, while the rest are skipped.

<p align="center">
<img src="assets/terminal_demo.svg" width="100%" alt="Terminal output showing FracFeedExtractor classifying four PDFs: three marked useful with extracted species data, one marked not useful and skipped"/>
</p>
<p align="center"><em>FracFeedExtractor pipeline run on a folder of PDFs.</em></p>

---

## Model Performance

The classifier was evaluated on a held-out test set of 234 papers. It achieves **94% accuracy** across both relevant and irrelevant publications, with strong and balanced precision and recall.

| Class | Precision | Recall | F1-score | Support |
|---|---|---|---|---|
| Not useful (0) | 0.96 | 0.91 | 0.93 | 110 |
| Useful (1) | 0.92 | 0.97 | 0.94 | 124 |
| **Overall** | **0.94** | **0.94** | **0.94** | **234** |
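
As a quick sanity check, the per-class F1 scores in the table follow directly from the listed precision and recall values, and the 94% overall accuracy is consistent with the per-class recalls and supports:

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.96, 0.91), 2))  # 0.93 -- "not useful" class
print(round(f1(0.92, 0.97), 2))  # 0.94 -- "useful" class

# Correct predictions implied by per-class recall and support:
accuracy = (110 * 0.91 + 124 * 0.97) / 234
print(round(accuracy, 2))        # 0.94
```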

<p align="center">
<img src="assets/training_curve.png" width="625" alt="XGBoost training curve showing train and validation log-loss converging over 585 boosting rounds, with minimum validation loss of 0.193 at the best iteration"/>
</p>

<p align="center"><em>XGBoost classifier training curve. Log-loss for train (blue) and validation (dashed orange) sets across 600 boosting rounds. Early stopping selected round 585 as the best iteration (min val loss: 0.193).</em></p>

---

## Get Started

### Prerequisites

| Dependency | Notes |
|---|---|
| Python 3.10+ | Tested on 3.10–3.12 |
| [Ollama](https://ollama.com) | Must be running locally; 8 GB RAM minimum, 16 GB recommended |
| Tesseract OCR | System-level install required for scanned PDFs — see [Contributing Guide](documentation/CONTRIBUTING.md) for platform-specific instructions |

Pull the default extraction model before running:

```bash
ollama pull qwen2.5:7b # ~5 GB
ollama list
```
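
Once the model is pulled, extraction requests go to Ollama's local HTTP API. The sketch below builds such a request against Ollama's documented `POST /api/generate` endpoint; the prompt is illustrative, not the project's actual extraction prompt.

```python
import json
import urllib.request

payload = {
    "model": "qwen2.5:7b",
    "prompt": ("Extract empty and non-empty stomach counts, predator, "
               "survey location, and survey year as JSON from: ..."),
    "format": "json",  # ask Ollama to constrain output to valid JSON
    "stream": False,   # return one complete response object
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Sending the request requires a running Ollama server:
# with urllib.request.urlopen(req) as resp:
#     metrics = json.loads(json.load(resp)["response"])
```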

### Installation

```bash
# Linux
git clone https://github.com/NovakLabOSU/FracFeedExtractor.git
cd FracFeedExtractor
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

```powershell
# Windows PowerShell
git clone https://github.com/NovakLabOSU/FracFeedExtractor.git
cd FracFeedExtractor
py -m venv venv
.\venv\Scripts\Activate.ps1
pip install -r requirements.txt
```

### Quick Start

```bash
# Classify and extract from a folder of PDFs
python classify_extract.py path/to/pdfs/

# Adjust the LLM model or confidence threshold
python classify_extract.py path/to/pdfs/ --llm-model llama3.1:8b --confidence-threshold 0.70
```

Results are written to `data/results/metrics/` (per-paper JSON) and `data/results/summaries/` (pipeline CSV).

> For virtual environment setup, full CLI flag reference, and contribution guidelines, see the [Contributing Guide](documentation/CONTRIBUTING.md).

---

## Data Source

We trained the classifier on the [FracFeed global database](https://github.com/marknovak/FracFeed_DB) — a hand-annotated collection of predator diet surveys from the primary ecological literature.

---

## Team

<table>
<tr>
<td align="center">
<a href="https://github.com/marknovak">
<img src="https://github.com/marknovak.png" width="80px" alt="GitHub avatar for Mark Novak"/>
</a><br/>
<b>Mark Novak</b><br/>
<sub>Project Lead</sub><br/>
<sub><a href="mailto:Mark.Novak@oregonstate.edu">Mark.Novak@oregonstate.edu</a></sub>
</td>
<td align="center">
<a href="https://github.com/SeanClay10">
<img src="https://github.com/SeanClay10.png" width="80px" alt="GitHub avatar for Sean Clayton"/>
</a><br/>
<b>Sean Clayton</b><br/>
<sub>ML Pipeline &amp; Backend</sub><br/>
<sub><a href="mailto:claytose@oregonstate.edu">claytose@oregonstate.edu</a></sub>
</td>
<td align="center">
<a href="https://github.com/QuiteRocks">
<img src="https://github.com/QuiteRocks.png" width="80px" alt="GitHub avatar for Zahra Alsulaimawi"/>
</a><br/>
<b>Zahra Alsulaimawi</b><br/>
<sub>LLM Integration &amp; Evaluation</sub><br/>
<sub><a href="mailto:alsulaza@oregonstate.edu">alsulaza@oregonstate.edu</a></sub>
</td>
<td align="center">
<a href="https://github.com/raymondcen">
<img src="https://github.com/raymondcen.png" width="80px" alt="GitHub avatar for Raymond Cen"/>
</a><br/>
<b>Raymond Cen</b><br/>
<sub>Data Processing &amp; Testing</sub><br/>
<sub><a href="mailto:cenra@oregonstate.edu">cenra@oregonstate.edu</a></sub>
</td>
<td align="center">
<a href="https://github.com/bradleyrule">
<img src="https://github.com/bradleyrule.png" width="80px" alt="GitHub avatar for Bradley Rule"/>
</a><br/>
<b>Bradley Rule</b><br/>
<sub>PDF Extraction &amp; OCR</sub><br/>
<sub><a href="mailto:ruleb@oregonstate.edu">ruleb@oregonstate.edu</a></sub>
</td>
</tr>
</table>

---

## Questions and Feedback

Found a bug or have a question?
[Open an issue on GitHub](https://github.com/NovakLabOSU/FracFeedExtractor/issues)

---

## Documentation

- [Contributing Guide](documentation/CONTRIBUTING.md) — setup, CLI reference, and contribution workflow
- [System Architecture Diagram](assets/architecture.svg)

*License: Pending partner confirmation.*