| title | Overview |
|---|---|
| description | A tabular foundation model that delivers strong predictions in seconds — no dataset-specific training required. |
TabPFN is a transformer pre-trained on billions of synthetic datasets to “learn the learning process.” Instead of re-optimizing weights for every new dataset, TabPFN encodes inductive biases, priors, and optimization strategies and applies them to your data via in-context learning. That means one forward pass → high-quality predictions in seconds.
Ways to use TabPFN:

- **API access:** the fastest way to get started. Access our models through the cloud, no local GPU resources required.
- **Local installation:** for research and privacy-sensitive use cases, with GPU support and a scikit-learn-compatible interface.
- **Classification:** solve binary or multi-class problems with calibrated probabilities.
- **Regression:** estimate continuous values with uncertainty-aware outputs and minimal preprocessing.
- **Time series:** model time series (via TabPFN for forecasting) to predict future values and trends.
- **Outlier detection:** detect rare and anomalous samples.
- **Data generation:** generate realistic synthetic tabular data.
- **Fine-tuning:** optimize TabPFN models on your own data.

Why TabPFN:

- Reaches tuned-ensemble-level performance with near-instant training.
- Skips repeated training loops: simply update the context and TabPFN performs zero-shot inference.
- Plugs into any workflow through the familiar `scikit-learn` interface or the Prior Labs API.
- Handles missing values, outliers, and categorical and text features natively.
- Returns calibrated probabilities and integrates SHAP for explainable outcomes.

Get up and running in minutes with step-by-step instructions.