2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
 {
-".": "2.0.0-alpha.7"
+".": "2.0.0-alpha.8"
 }
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
 configured_endpoints: 43
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/togetherai%2Ftogetherai-e9e60279414ac3279c025d6318b5f67a8f6d01170e365612e791f3a1f259b94f.yml
-openapi_spec_hash: 26c59292808c5ae9f222f95f056430cf
-config_hash: 468163c38238466d0306b30eb29cb0d0
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/togetherai%2Ftogetherai-a1f15d8f8f7326616ea246a73d53bda093da7a9a5e3fe50a9ec6a5a0b958ec63.yml
+openapi_spec_hash: 7a03e5140a9a6668ff42c47ea0d03a07
+config_hash: 87a5832ab2ecefe567d22108531232f5
20 changes: 20 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,25 @@
 # Changelog
 
+## 2.0.0-alpha.8 (2025-11-26)
+
+Full Changelog: [v2.0.0-alpha.7...v2.0.0-alpha.8](https://github.com/togethercomputer/together-py/compare/v2.0.0-alpha.7...v2.0.0-alpha.8)
+
+### Features
+
+* **api:** api update ([49bb5d4](https://github.com/togethercomputer/together-py/commit/49bb5d4ba69ca118ecc34be2d69c4253665e2e81))
+* **api:** Fix internal references for VideoJob spec ([fb5e7bb](https://github.com/togethercomputer/together-py/commit/fb5e7bb3dbaa9427d291de7440c201529b6cf528))
+
+
+### Bug Fixes
+
+* Address incorrect logic for `endpoint [command] --wait false` logic ([31236a9](https://github.com/togethercomputer/together-py/commit/31236a9df29c22fe7444c2dbb0d4bfc518bc79aa))
+
+
+### Chores
+
+* Remove incorrect file upload docs ([5bb847e](https://github.com/togethercomputer/together-py/commit/5bb847e33b55e5d0978c742e86cf931a2c08f919))
+* Remove incorrect file upload docs ([bb97093](https://github.com/togethercomputer/together-py/commit/bb970938650b6f9580538528979221d142f74b6a))
+
 ## 2.0.0-alpha.7 (2025-11-26)
 
 Full Changelog: [v2.0.0-alpha.6...v2.0.0-alpha.7](https://github.com/togethercomputer/together-py/compare/v2.0.0-alpha.6...v2.0.0-alpha.7)
50 changes: 0 additions & 50 deletions README.md
@@ -198,22 +198,6 @@ chat_completion = client.chat.completions.create(
 print(chat_completion.response_format)
 ```
 
-## File uploads
-
-Request parameters that correspond to file uploads can be passed as `bytes`, or a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance or a tuple of `(filename, contents, media type)`.
-
-```python
-from pathlib import Path
-from together import Together
-
-client = Together()
-
-client.files.upload(
-    file=Path("/path/to/file"),
-    purpose="fine-tune",
-)
-```
-
 The async client uses the exact same interface. If you pass a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, the file contents will be read asynchronously automatically.
 
 ## Handling errors
@@ -482,40 +466,6 @@ with Together() as client:
 
 ## Usage – CLI
 
-### Chat Completions
-
-```bash
-together chat.completions \
-    --message "system" "You are a helpful assistant named Together" \
-    --message "user" "What is your name?" \
-    --model mistralai/Mixtral-8x7B-Instruct-v0.1
-```
-
-The Chat Completions CLI enables streaming tokens to stdout by default. To disable streaming, use `--no-stream`.
-
-### Completions
-
-```bash
-together completions \
-    "Large language models are " \
-    --model mistralai/Mixtral-8x7B-v0.1 \
-    --max-tokens 512 \
-    --stop "."
-```
-
-The Completions CLI enables streaming tokens to stdout by default. To disable streaming, use `--no-stream`.
-
-### Image Generations
-
-```bash
-together images generate \
-    "space robots" \
-    --model stabilityai/stable-diffusion-xl-base-1.0 \
-    --n 4
-```
-
-The image is opened in the default image viewer by default. To disable this, use `--no-show`.
-
 ### Files
 
 ```bash
4 changes: 2 additions & 2 deletions api.md
@@ -134,12 +134,12 @@ Methods:
 Types:
 
 ```python
-from together.types import VideoJob, VideoCreateResponse
+from together.types import VideoJob
 ```
 
 Methods:
 
-- <code title="post /videos">client.videos.<a href="./src/together/resources/videos.py">create</a>(\*\*<a href="src/together/types/video_create_params.py">params</a>) -> <a href="./src/together/types/video_create_response.py">VideoCreateResponse</a></code>
+- <code title="post /videos">client.videos.<a href="./src/together/resources/videos.py">create</a>(\*\*<a href="src/together/types/video_create_params.py">params</a>) -> <a href="./src/together/types/video_job.py">VideoJob</a></code>
 - <code title="get /videos/{id}">client.videos.<a href="./src/together/resources/videos.py">retrieve</a>(id) -> <a href="./src/together/types/video_job.py">VideoJob</a></code>
 
 # Audio
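With `create` and `retrieve` now sharing the `VideoJob` return type, a caller can poll a submitted job with a single model. A minimal sketch of what that enables; the `create` parameter names and the status values below are placeholders, not confirmed API fields (the real ones live in `video_create_params.py` and `video_job.py`):

```python
import time

from together import Together

client = Together()

# `model` and `prompt` are hypothetical parameter names for illustration;
# consult video_create_params.py for the actual request fields.
job = client.videos.create(model="a-video-model", prompt="space robots")

# create() and retrieve() now both return VideoJob, so one polling loop
# covers the whole job lifecycle. The terminal statuses checked here are
# assumptions, not confirmed API constants.
while getattr(job, "status", None) not in ("completed", "failed"):
    time.sleep(5)
    job = client.videos.retrieve(job.id)
```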
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
 [project]
 name = "together"
-version = "2.0.0-alpha.7"
+version = "2.0.0-alpha.8"
 description = "The official Python library for the together API"
 dynamic = ["readme"]
 license = "Apache-2.0"
2 changes: 1 addition & 1 deletion src/together/_version.py
@@ -1,4 +1,4 @@
 # File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
 
 __title__ = "together"
-__version__ = "2.0.0-alpha.7" # x-release-please-version
+__version__ = "2.0.0-alpha.8" # x-release-please-version
7 changes: 3 additions & 4 deletions src/together/lib/cli/api/endpoints.py
@@ -128,8 +128,7 @@ def endpoints(ctx: click.Context) -> None:
     help="Start endpoint in specified availability zone (e.g., us-central-4b)",
 )
 @click.option(
-    "--wait",
-    is_flag=True,
+    "--wait/--no-wait",
     default=True,
     help="Wait for the endpoint to be ready after creation",
 )
@@ -272,7 +271,7 @@ def fetch_and_print_hardware_options(client: Together, model: str | None, print_
 
 @endpoints.command()
 @click.argument("endpoint-id", required=True)
-@click.option("--wait", is_flag=True, default=True, help="Wait for the endpoint to stop")
+@click.option("--wait/--no-wait", default=True, help="Wait for the endpoint to stop")
 @click.pass_obj
 @handle_api_errors
 def stop(client: Together, endpoint_id: str, wait: bool) -> None:
@@ -293,7 +292,7 @@ def stop(client: Together, endpoint_id: str, wait: bool) -> None:
 
 @endpoints.command()
 @click.argument("endpoint-id", required=True)
-@click.option("--wait", is_flag=True, default=True, help="Wait for the endpoint to start")
+@click.option("--wait/--no-wait", default=True, help="Wait for the endpoint to start")
 @click.pass_obj
 @handle_api_errors
 def start(client: Together, endpoint_id: str, wait: bool) -> None:
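This flag-pair change is what backs the changelog's `--wait false` fix: a Click option declared with `is_flag=True` has no off switch, so there is no way to pass it a false value on the command line. Declaring the paired `--wait/--no-wait` form gives Click an explicit negation flag that resolves to a single boolean parameter. A standalone sketch with a hypothetical command, not the SDK's actual CLI:

```python
import click


@click.command()
@click.option("--wait/--no-wait", default=True, help="Wait until the endpoint is ready")
def start(wait: bool) -> None:
    # Click collapses the paired flags into one boolean parameter.
    click.echo(f"wait={wait}")


if __name__ == "__main__":
    start()
```

Invoking the command bare prints `wait=True`; invoking it with `--no-wait` prints `wait=False`.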
86 changes: 72 additions & 14 deletions src/together/lib/types/fine_tuning.py
@@ -70,9 +70,6 @@ class FinetuneEventLevels(str, Enum):
     INFO = "Info"
     WARNING = "Warning"
     ERROR = "Error"
-    LEGACY_INFO = "info"
-    LEGACY_IWARNING = "warning"
-    LEGACY_IERROR = "error"
 
 
 class FinetuneEvent(BaseModel):
@@ -85,7 +82,7 @@ class FinetuneEvent(BaseModel):
     # created at datetime stamp
     created_at: Union[str, None] = None
     # event log level
-    level: Union[FinetuneEventLevels, None] = None
+    level: Union[FinetuneEventLevels, str, None] = None
     # event message string
     message: Union[str, None] = None
     # event type
@@ -120,7 +117,20 @@ class LoRATrainingType(BaseModel):
     type: Literal["Lora"] = "Lora"
 
 
-TrainingType: TypeAlias = Union[FullTrainingType, LoRATrainingType]
+class UnknownTrainingType(BaseModel):
+    """
+    Catch-all for unknown training types (forward compatibility).
+    Accepts any training type not explicitly defined.
+    """
+
+    type: str
+
+
+TrainingType: TypeAlias = Union[
+    FullTrainingType,
+    LoRATrainingType,
+    UnknownTrainingType,
+]
 
 
 class FinetuneFullTrainingLimits(BaseModel):
@@ -163,7 +173,19 @@ class TrainingMethodDPO(BaseModel):
     simpo_gamma: Union[float, None] = None
 
 
-TrainingMethod: TypeAlias = Union[TrainingMethodSFT, TrainingMethodDPO]
+class TrainingMethodUnknown(BaseModel):
+    """
+    Catch-all for unknown training methods (forward compatibility).
+    Accepts any training method not explicitly defined.
+    """
+
+    method: str
+
+TrainingMethod: TypeAlias = Union[
+    TrainingMethodSFT,
+    TrainingMethodDPO,
+    TrainingMethodUnknown,
+]
 
 
 class FinetuneTrainingLimits(BaseModel):
Expand All @@ -175,31 +197,67 @@ class FinetuneTrainingLimits(BaseModel):


class LinearLRSchedulerArgs(BaseModel):
"""
Linear learning rate scheduler arguments
"""

min_lr_ratio: Union[float, None] = 0.0


class CosineLRSchedulerArgs(BaseModel):
"""
Cosine learning rate scheduler arguments
"""

min_lr_ratio: Union[float, None] = 0.0
num_cycles: Union[float, None] = 0.5


class LinearLRScheduler(BaseModel):
"""
Linear learning rate scheduler
"""

lr_scheduler_type: Literal["linear"] = "linear"
lr_scheduler_args: Union[LinearLRSchedulerArgs, None] = None


class CosineLRScheduler(BaseModel):
"""
Cosine learning rate scheduler
"""

lr_scheduler_type: Literal["cosine"] = "cosine"
lr_scheduler_args: Union[CosineLRSchedulerArgs, None] = None


# placeholder for old fine-tuning jobs with no lr_scheduler_type specified
class EmptyLRScheduler(BaseModel):
"""
Empty learning rate scheduler

Placeholder for old fine-tuning jobs with no lr_scheduler_type specified
"""

lr_scheduler_type: Literal[""]
lr_scheduler_args: None = None

class UnknownLRScheduler(BaseModel):
"""
Unknown learning rate scheduler

Catch-all for unknown LR scheduler types (forward compatibility)
"""

lr_scheduler_type: str
lr_scheduler_args: Optional[Any] = None


FinetuneLRScheduler: TypeAlias = Union[LinearLRScheduler, CosineLRScheduler, EmptyLRScheduler]
FinetuneLRScheduler: TypeAlias = Union[
LinearLRScheduler,
CosineLRScheduler,
EmptyLRScheduler,
UnknownLRScheduler,
]


class FinetuneResponse(BaseModel):
@@ -213,17 +271,17 @@ class FinetuneResponse(BaseModel):
     created_at: datetime
     """Creation timestamp of the fine-tune job"""
 
-    status: Optional[FinetuneJobStatus] = None
-    """Status of the fine-tune job"""
+    status: Optional[Union[FinetuneJobStatus, str]] = None
+    """Status of the fine-tune job (accepts known enum values or string for forward compatibility)"""
 
     updated_at: datetime
     """Last update timestamp of the fine-tune job"""
 
     batch_size: Optional[int] = None
     """Batch size used for training"""
 
-    events: Optional[List[FinetuneEvent]] = None
-    """Events related to this fine-tune job"""
+    events: Optional[List[Union[FinetuneEvent, str]]] = None
+    """Events related to this fine-tune job (accepts known enum values or string for forward compatibility)"""
 
     from_checkpoint: Optional[str] = None
     """Checkpoint used to continue training"""
@@ -361,7 +419,7 @@ class FinetuneRequest(BaseModel):
     # training learning rate
     learning_rate: float
     # learning rate scheduler type and args
-    lr_scheduler: Union[LinearLRScheduler, CosineLRScheduler, None] = None
+    lr_scheduler: Union[FinetuneLRScheduler, None] = None
     # learning rate warmup ratio
     warmup_ratio: float
     # max gradient norm
@@ -387,7 +445,7 @@ class FinetuneRequest(BaseModel):
     # training type
     training_type: Union[TrainingType, None] = None
     # training method
-    training_method: Union[TrainingMethodSFT, TrainingMethodDPO] = Field(default_factory=TrainingMethodSFT)
+    training_method: TrainingMethod = Field(default_factory=TrainingMethodSFT)
     # from step
     from_checkpoint: Union[str, None] = None
    from_hf_model: Union[str, None] = None
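The catch-all members added above (`UnknownTrainingType`, `TrainingMethodUnknown`, `UnknownLRScheduler`, and the widened `status`/`level` fields) all use the same forward-compatibility pattern: a union whose last member types the discriminating field as a plain `str`, so payloads carrying values the SDK has not seen yet still validate instead of raising. A minimal sketch of the mechanism using stock pydantic v2 (the SDK's own base model may behave slightly differently):

```python
from typing import Literal, Union

from pydantic import BaseModel, TypeAdapter


class FullTrainingType(BaseModel):
    type: Literal["Full"] = "Full"


class UnknownTrainingType(BaseModel):
    type: str  # any training type string validates here


# Specific members first, catch-all last -- mirrors the aliases in the diff.
adapter = TypeAdapter(Union[FullTrainingType, UnknownTrainingType])

known = adapter.validate_python({"type": "Full"})    # -> FullTrainingType
future = adapter.validate_python({"type": "QLoRA"})  # -> UnknownTrainingType
print(type(known).__name__, type(future).__name__)
```

Because the specific `Literal` members come first in the union, known values still deserialize to their exact classes; only unrecognized ones fall through to the catch-all.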
8 changes: 4 additions & 4 deletions src/together/resources/fine_tuning.py
@@ -291,7 +291,7 @@ def delete(
         self,
         id: str,
         *,
-        force: bool,
+        force: bool | Omit = omit,
         # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
         # The extra values given here take precedence over values defined on the client or passed to this method.
         extra_headers: Headers | None = None,
@@ -374,7 +374,7 @@ def content(
         timeout: float | httpx.Timeout | None | NotGiven = not_given,
     ) -> BinaryAPIResponse:
         """
-        Download a compressed fine-tuned model or checkpoint.
+        Receive a compressed fine-tuned model or checkpoint.
 
         Args:
           ft_id: Fine-tune ID to download. A string that starts with `ft-`.
@@ -732,7 +732,7 @@ async def delete(
         self,
         id: str,
         *,
-        force: bool,
+        force: bool | Omit = omit,
         # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
         # The extra values given here take precedence over values defined on the client or passed to this method.
         extra_headers: Headers | None = None,
@@ -815,7 +815,7 @@ async def content(
         timeout: float | httpx.Timeout | None | NotGiven = not_given,
     ) -> AsyncBinaryAPIResponse:
         """
-        Download a compressed fine-tuned model or checkpoint.
+        Receive a compressed fine-tuned model or checkpoint.
 
         Args:
           ft_id: Fine-tune ID to download. A string that starts with `ft-`.
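Changing `force` from a required `bool` to an `Omit`-sentinel default means callers can now invoke `delete(id)` without the argument, and an unset `force` is dropped from the request rather than serialized as `false`. A rough, self-contained sketch of the sentinel pattern, assuming simplified stand-ins for the SDK's internal `Omit`/`omit` types:

```python
from __future__ import annotations


class Omit:
    """Sentinel for 'argument not supplied' -- distinct from None and False."""

    def __bool__(self) -> bool:
        return False


omit = Omit()


def delete(id: str, *, force: bool | Omit = omit) -> dict:
    # Include `force` in the outgoing request only when the caller set it,
    # so an omitted argument disappears instead of being sent as false.
    params = {} if isinstance(force, Omit) else {"force": force}
    return {"path": f"/fine-tunes/{id}", "params": params}


print(delete("ft-123"))               # force omitted from the request entirely
print(delete("ft-123", force=False))  # force sent explicitly as False
```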