
Commit ddc68e6

feat(images): add Ollama images provider with flexible parameter validation (#119)
* refactor(parameters): simplify validation to pass through unconstrained params

  When a parameter has no constraint defined in the model's parameter_constraints, pass the value through without error instead of raising UnsupportedParameterError. This enables dynamic providers like Ollama, where we cannot maintain a registry of all models and their constraints. Constraints still validate when present.

* refactor(openresponses): remove model_post_init constraint injection

  The model_post_init method that injected default constraints for unregistered models is no longer needed. With the new pass-through behavior, unconstrained parameters are forwarded to the provider without validation.

* docs(parameters): clarify that unconstrained params pass through

  Update docstrings to accurately describe the new behavior: constraints validate parameter values when defined, and unconstrained parameters pass through without validation.

* test(parameters): update test to expect pass-through behavior

  Rename and update the test to verify that missing constraints result in pass-through behavior rather than raising UnsupportedParameterError.

* feat(ollama): add generate API mixin for image generation

  Add the Ollama generate API client mixin that handles HTTP requests, response parsing, and streaming for Ollama's /api/generate endpoint. Includes:
  - OllamaGenerateClient mixin with request/response handling
  - Parameter mappers for width, height, steps, seed, negative_prompt
  - Streaming support via OllamaGenerateStream
  - Configuration for endpoints and base URL

* feat(images): add NEGATIVE_PROMPT parameter

  Add NEGATIVE_PROMPT to the ImageParameter enum and the ImageParameters TypedDict to support negative prompts in image generation (used by Ollama and others).

* feat(images/ollama): add Ollama images client

  Add OllamaImagesClient for image generation using Ollama's local models. Includes:
  - OllamaImagesClient extending both OllamaGenerateClient and ImagesClient
  - AspectRatioMapper for aspect ratio parameter handling
  - Empty models.py (Ollama models are dynamic/unregistered)

* feat(images): register Ollama provider

  Register OllamaImagesClient in the PROVIDERS dict so it can be used via celeste.Images(model="...", provider=Provider.OLLAMA). Also fixes provider string formatting in the _resolve_model warning message.

* feat(ollama): add NDJSON streaming support for image generation

  - Implement `stream_post_ndjson` in `HTTPClient` to handle Ollama's native NDJSON format.
  - Add `OllamaGenerateStream` mixin for parsing NDJSON chunks.
  - Enable streaming in `OllamaGenerateClient`.
  - Implement `OllamaImagesStream` and hook it into `OllamaImagesClient`.

* docs(images): add notebook with generation, editing, analysis, and Ollama streaming

* chore: bump version to 0.9.3
1 parent 26a2a87 commit ddc68e6
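
The pass-through change in the first bullet is what makes dynamic providers such as Ollama workable: constraints are enforced only when a model actually declares them. A minimal sketch of that behavior, using illustrative names rather than the actual celeste internals:

```python
# Illustrative sketch only -- not the actual celeste implementation.
# Constrained parameters are validated; unconstrained ones pass through.
from typing import Any


class UnsupportedParameterError(ValueError):
    """Raised when a value violates a declared constraint."""


def resolve_parameters(
    params: dict[str, Any],
    parameter_constraints: dict[str, set[Any]],
) -> dict[str, Any]:
    resolved: dict[str, Any] = {}
    for name, value in params.items():
        allowed = parameter_constraints.get(name)
        if allowed is None:
            # No constraint declared (e.g. a dynamic, unregistered Ollama
            # model): forward the value instead of raising.
            resolved[name] = value
        elif value not in allowed:
            raise UnsupportedParameterError(f"{name}={value!r} not in {sorted(allowed)}")
        else:
            resolved[name] = value
    return resolved


# Unconstrained "steps" passes through; constrained "output_format" is checked.
print(resolve_parameters({"steps": 4}, {}))
print(resolve_parameters({"output_format": "png"}, {"output_format": {"png", "jpeg"}}))
```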

20 files changed

Lines changed: 850 additions & 40 deletions

Lines changed: 248 additions & 0 deletions
@@ -0,0 +1,248 @@ (new notebook)

# Celeste AI - Working With Images

Unified interface for image **generation**, **editing**, and **analysis** across providers.

Star on GitHub 👉 [withceleste/celeste-python](https://github.com/withceleste/celeste-python)

## Setup

```python
import celeste
from IPython.display import Image, display
```

---

## Generate

Create images from text prompts.

```python
img_gen_result = await celeste.images.generate(
    "A nano banana on the beach",
    model="gemini-2.5-flash-image",
)
```

```python
display(Image(data=img_gen_result.content.data))
```

---

## Edit

Modify existing images with text instructions.

```python
img_edit_result = await celeste.images.edit(
    image=img_gen_result.content,
    prompt="Make it night time",
    model="gemini-2.5-flash-image",
)
```

```python
display(Image(data=img_edit_result.content.data))
```

---

## Analyze

Extract information from images.

```python
analyze_result = await celeste.images.analyze(
    prompt="What fruit is in this image and what color is it?",
    image=img_gen_result.content,
    model="gemini-2.5-flash-lite",
)
```

```python
print(analyze_result.content)
```

---

## Local Generation with Ollama

Generate images locally using Ollama. No API key needed.

1. Start the server (in a terminal):
```bash
ollama serve
```

2. Pull the image model:
```bash
ollama pull x/flux2-klein
```

```python
prompt = "A blurry iPhone-style photograph showing the window of a moving train. Through the window, a scenic landscape appears: tall green cliffs running alongside a river, with a small European village built on the slopes. The motion blur suggests the train is moving quickly, with soft reflections on the glass, natural daylight, and a casual handheld phone-camera aesthetic. Sharp textures where possible, rich colors, and a realistic sense of depth and distance."

local_result = await celeste.images.generate(
    prompt=prompt,
    model="x/flux2-klein",
    provider="ollama",
    steps=1,
)
display(Image(data=local_result.content.data))
```

---

## Streaming (Ollama)

Ollama streams NDJSON progress events. Celeste exposes these as image stream chunks with `metadata` (progress) and a final chunk containing the image.

```python
from tqdm.asyncio import tqdm

steps = 4

stream = celeste.images.stream.generate(
    prompt=prompt,
    model="x/flux2-klein",
    provider="ollama",
    steps=steps,
)

async for chunk in tqdm(stream, total=steps + 1):
    pass

display(Image(data=chunk.content.data))
```

---
Star on GitHub 👉 [withceleste/celeste-python](https://github.com/withceleste/celeste-python)

(Notebook kernel: Python 3 (ipykernel), Python 3.13.3; nbformat 4.4.)

pyproject.toml

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 [project]
 name = "celeste-ai"
-version = "0.9.2"
+version = "0.9.3"
 description = "Open source, type-safe primitives for multi-modal AI. All capabilities, all providers, one interface"
 authors = [{name = "Kamilbenkirane", email = "kamil@withceleste.ai"}]
 readme = "README.md"

src/celeste/__init__.py

Lines changed: 3 additions & 2 deletions
@@ -113,7 +113,7 @@ def _resolve_model(
         msg = f"Model '{model}' not registered. Specify 'modality' explicitly."
         raise ValueError(msg)
     warnings.warn(
-        f"Model '{model}' not registered in Celeste for provider {provider.value}. "
+        f"Model '{model}' not registered in Celeste for provider {provider}. "
         "Parameter validation disabled.",
         UserWarning,
         stacklevel=3,
@@ -209,11 +209,12 @@ def create_client(
     resolved_operation = (
         Operation(operation) if isinstance(operation, str) else operation
     )
+    resolved_provider = Provider(provider) if isinstance(provider, str) else provider
 
     resolved_model = _resolve_model(
         modality=resolved_modality,
         operation=resolved_operation,
-        provider=provider,
+        provider=resolved_provider,
         model=model,
     )

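The `create_client` hunk coerces a string `provider` into the `Provider` enum before `_resolve_model` runs, and the warning text now interpolates `{provider}` directly rather than `{provider.value}`. A standalone illustration of the coercion pattern (the `Provider` enum below is a stand-in for this sketch, not the real celeste definition):

```python
# Stand-in enum for illustration; the real Provider lives inside celeste.
from enum import StrEnum


class Provider(StrEnum):
    OLLAMA = "ollama"
    OPENAI = "openai"


def resolve(provider: Provider | str) -> Provider:
    # Accept either an enum member or its string value ("ollama" -> Provider.OLLAMA).
    return Provider(provider) if isinstance(provider, str) else provider


assert resolve("ollama") is Provider.OLLAMA
assert resolve(Provider.OPENAI) is Provider.OPENAI
print(f"warning mentions provider {resolve('ollama')}")  # prints "... provider ollama"
```
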
src/celeste/http.py

Lines changed: 34 additions & 0 deletions
@@ -191,6 +191,40 @@ async def stream_post(
                 except json.JSONDecodeError:
                     continue  # Skip non-JSON control messages (provider-agnostic)
 
+    async def stream_post_ndjson(
+        self,
+        url: str,
+        headers: dict[str, str],
+        json_body: dict[str, Any],
+        timeout: float = DEFAULT_TIMEOUT,
+    ) -> AsyncIterator[dict[str, Any]]:
+        """Stream POST request using NDJSON (newline-delimited JSON).
+
+        Unlike SSE (stream_post), NDJSON returns one JSON object per line.
+        Used by Ollama's native API.
+
+        Args:
+            url: API endpoint URL.
+            headers: HTTP headers (including authentication).
+            json_body: JSON request body.
+            timeout: Timeout in seconds (default: DEFAULT_TIMEOUT).
+
+        Yields:
+            Parsed JSON objects from NDJSON stream.
+        """
+        client = await self._get_client()
+        async with client.stream(
+            "POST",
+            url,
+            json=json_body,
+            headers=headers,
+            timeout=timeout,
+        ) as response:
+            response.raise_for_status()
+            async for line in response.aiter_lines():
+                if line:
+                    yield json.loads(line)
+
     async def aclose(self) -> None:
         """Close HTTP client and cleanup all connections."""
         if self._client is not None:
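
A hedged usage sketch for the new `stream_post_ndjson` helper against Ollama's `/api/generate` endpoint (the endpoint named in the commit message). It assumes `HTTPClient` can be constructed without arguments, which this diff does not show, and that a local Ollama server is listening on the default port:

```python
import asyncio

from celeste.http import HTTPClient


async def main() -> None:
    client = HTTPClient()  # assumption: no-arg constructor
    try:
        async for event in client.stream_post_ndjson(
            url="http://localhost:11434/api/generate",
            headers={"Content-Type": "application/json"},
            json_body={"model": "x/flux2-klein", "prompt": "a lighthouse at dusk", "stream": True},
        ):
            # Each NDJSON line is yielded as one dict; Ollama marks the final
            # event with "done": true.
            print(event.get("done"), sorted(event.keys()))
    finally:
        await client.aclose()


asyncio.run(main())
```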

src/celeste/modalities/images/parameters.py

Lines changed: 2 additions & 0 deletions
@@ -16,6 +16,7 @@ class ImageParameter(StrEnum):
     WATERMARK = "watermark"
     REFERENCE_IMAGES = "reference_images"
     PROMPT_UPSAMPLING = "prompt_upsampling"
+    NEGATIVE_PROMPT = "negative_prompt"
     SEED = "seed"
     SAFETY_TOLERANCE = "safety_tolerance"
     OUTPUT_FORMAT = "output_format"
@@ -34,6 +35,7 @@ class ImageParameters(Parameters):
     watermark: bool
     reference_images: list[ImageArtifact]
     prompt_upsampling: bool
+    negative_prompt: str
     seed: int
     safety_tolerance: int
     output_format: str
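
With the new enum member, `negative_prompt` can be passed like any other image parameter. A hedged sketch mirroring the notebook's call style; whether a given model honours the value depends on the provider:

```python
import asyncio

import celeste


async def main() -> None:
    # negative_prompt rides along as a keyword argument, just like `steps` in
    # the notebook above; for unregistered Ollama models it passes through
    # unvalidated (see the pass-through behavior in the commit message).
    result = await celeste.images.generate(
        "a misty forest at dawn",
        model="x/flux2-klein",
        provider="ollama",
        negative_prompt="text, watermark, low quality",
    )
    print(len(result.content.data), "bytes of image data")


asyncio.run(main())
```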

src/celeste/modalities/images/providers/__init__.py

Lines changed: 2 additions & 0 deletions
@@ -6,11 +6,13 @@
 from .bfl import BFLImagesClient
 from .byteplus import BytePlusImagesClient
 from .google import GoogleImagesClient
+from .ollama import OllamaImagesClient
 from .openai import OpenAIImagesClient
 
 PROVIDERS: dict[Provider, type[ImagesClient]] = {
     Provider.BFL: BFLImagesClient,
     Provider.BYTEPLUS: BytePlusImagesClient,
     Provider.GOOGLE: GoogleImagesClient,
+    Provider.OLLAMA: OllamaImagesClient,
     Provider.OPENAI: OpenAIImagesClient,
 }
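
A quick, hedged way to confirm the registration: iterate the `PROVIDERS` mapping added above (only the mapping itself appears in this diff, so nothing else about the module is assumed):

```python
from celeste.modalities.images.providers import PROVIDERS

# Expected to include an entry mapping Provider.OLLAMA to OllamaImagesClient.
for provider, client_cls in PROVIDERS.items():
    print(f"{provider}: {client_cls.__name__}")
```
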
Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
+"""Ollama provider for images modality."""
+
+from .client import OllamaImagesClient
+from .models import MODELS
+
+__all__ = ["MODELS", "OllamaImagesClient"]

0 commit comments
