Merged
69 commits
a8d2eb5
Merge pull request #743 from opsmill/develop
ogenstad Jan 9, 2026
d619c6f
IHS-183: Fix typing errors for protocols (#749) (#752)
infrahub-github-bot-app[bot] Jan 21, 2026
d6404eb
Merge pull request #755 from opsmill/develop
ogenstad Jan 22, 2026
56b59aa
Revert "IHS-183: Fix typing errors for protocols (#749)" (#760) (#762)
infrahub-github-bot-app[bot] Jan 22, 2026
7cf8924
Merge pull request #770 from opsmill/develop
ogenstad Jan 27, 2026
424e0ca
Merge pull request #787 from opsmill/develop
ogenstad Jan 29, 2026
2c07085
Merge pull request #796 from opsmill/develop
ogenstad Feb 4, 2026
fdebb9d
Merge pull request #800 from opsmill/develop
ogenstad Feb 5, 2026
504a618
IHS-193 Add support for file upload/download for `CoreFileObject` (#792)
gmazoyer Feb 6, 2026
2e6c3f8
Fix infrahubctl proposed change table generation (#805)
gmazoyer Feb 6, 2026
829800d
IFC-2184: Add to branch status (#794)
solababs Feb 9, 2026
c340321
migrate from pre-commit to prek (#789) (#808)
infrahub-github-bot-app[bot] Feb 9, 2026
9c1a8bf
Merge branch 'develop' into 'infrahub-develop' with resolved conflicts
ogenstad Feb 10, 2026
9e42f1f
Use ternary operator if ALIAS_KEY in value and value[ALIAS_KEY] else …
ogenstad Feb 10, 2026
0abb16d
Add integration tests for file objects (#802)
gmazoyer Feb 10, 2026
84682b1
Merge pull request #813 from opsmill/pog-develop-to-infrahub-develop-…
ogenstad Feb 10, 2026
62fcd75
Merge pull request #814 from opsmill/pog-SIM108
ogenstad Feb 11, 2026
652bc25
Merge pull request #817 from opsmill/develop
ogenstad Feb 11, 2026
54f3a4c
Linting: Incorrect import of `pytest`; use `import pytest` instead
ogenstad Feb 13, 2026
8959d12
Merge pull request #824 from opsmill/pog-PT013
ogenstad Feb 13, 2026
91ff77c
Avoid + operator to concatenate collections
ogenstad Feb 16, 2026
9b341cd
Break apart ty violations into smaller components
ogenstad Feb 16, 2026
e1ad3d7
Merge pull request #826 from opsmill/pog-RUF005
ogenstad Feb 17, 2026
0176b8e
Merge pull request #828 from opsmill/pog-break-ty-rules
ogenstad Feb 17, 2026
d1f6584
Merge pull request #830 from opsmill/develop
ogenstad Feb 17, 2026
004fc67
Add py.typed
ogenstad Sep 10, 2025
809ea5e
Set version to 1.19.0b0
ogenstad Feb 18, 2026
f84bbd2
Merge pull request #543 from opsmill/pog-py.typed
ogenstad Feb 18, 2026
0587e7f
Improve schema export output readability
BeArchiTek Feb 18, 2026
8ab1c57
Preserve read_only on computed attributes in schema export
BeArchiTek Feb 18, 2026
fbf268a
Filter auto-generated relationships and restore hierarchical flag on …
BeArchiTek Feb 18, 2026
bbff1e4
Export generics before nodes; strip auto-generated uniqueness_constra…
BeArchiTek Feb 18, 2026
c2e5c92
add changelog
BeArchiTek Feb 18, 2026
91fe627
Refactor schema export logic into dedicated module and regenerate docs
BeArchiTek Feb 19, 2026
4937d1b
Fix invalid-argument-type in exception class
ogenstad Feb 19, 2026
0bf611d
Merge remote-tracking branch 'origin/develop' into merge-822-832-deve…
polmichel Feb 19, 2026
17edb63
Merge PRs 822 832 833 into develop: fixing linter
polmichel Feb 19, 2026
b29fcbb
docs: Fixing wrong documentation format related to Python code exampl…
polmichel Feb 19, 2026
0b99ab7
docs: Updating related SDK API docs IHS-193
polmichel Feb 19, 2026
3064a1f
Merge pull request #838 from opsmill/pog-exception-invalid-argument-type
ogenstad Feb 19, 2026
b417aaf
Fix invalid-argument-type violations
ogenstad Feb 19, 2026
7aeb880
Merge pull request #840 from opsmill/merge-822-832-833-develop-to-inf…
polmichel Feb 19, 2026
a48fa78
Merge pull request #841 from opsmill/pog-timestamp-argument
ogenstad Feb 19, 2026
e63783e
rework exporter function
Feb 19, 2026
5864562
Fix: Wrong type passed to first argument of `pytest.mark.parametrize`…
ogenstad Feb 19, 2026
df74df0
Reraise exceptions where applicable
ogenstad Feb 20, 2026
96c5051
Merge pull request #845 from opsmill/pog-B904
ogenstad Feb 20, 2026
f371427
Fix _default_export_directory return type to match Path annotation
Feb 20, 2026
56e0618
Merge origin/stable into bkr-add-schema-exporter
Feb 20, 2026
2eaaf98
Merge pull request #842 from opsmill/develop
ogenstad Feb 20, 2026
ecdacc6
Merge pull request #844 from opsmill/pog-PT006
ogenstad Feb 20, 2026
85dc280
following coderabbit and merge from stable
Feb 23, 2026
97a7e6b
Address PR review feedback from ogenstad and coderabbit
Feb 23, 2026
fd79f90
Replace nested dict return type with SchemaExport/NamespaceExport Pyd…
Feb 23, 2026
a17f18b
Merge pull request #851 from opsmill/develop
ogenstad Feb 24, 2026
5046fea
Rename namespace parameter to namespaces in schema export CLI
Feb 25, 2026
32b52cf
Regenerate infrahubctl schema docs after namespaces rename
Feb 25, 2026
2f0ce37
Merge pull request #835 from opsmill/bkr-add-schema-exporter
BeArchiTek Feb 26, 2026
af650de
Merge remote-tracking branch 'origin/develop' into pmi-20260226-merge…
polmichel Feb 26, 2026
dd38027
Adapted documentation to new rules in infrahub-develop
polmichel Feb 26, 2026
52d475e
Adapted test typing of merged tests from develop
polmichel Feb 26, 2026
1c1407d
Merge pull request #856 from opsmill/pmi-20260226-merge-develop-into-…
polmichel Feb 27, 2026
f186180
Merge pull request #857 from opsmill/develop
polmichel Feb 27, 2026
7390d88
Set version to 1.19.0 release candidate 0
ogenstad Feb 27, 2026
f737549
Merge pull request #858 from opsmill/pog-1.19.0rc0
ogenstad Feb 27, 2026
9b30a09
bump version to 1.19.0
Mar 16, 2026
4814eba
add CHANGELOG entry
Mar 16, 2026
fe5fe71
exclude test_schema_export.py from ty linting test
Mar 16, 2026
84ee9ef
Merge pull request #870 from opsmill/wvd-20260316-prep-release-1.19
wvandeun Mar 16, 2026
1 change: 1 addition & 0 deletions .vale/styles/Infrahub/sentence-case.yml
@@ -52,6 +52,7 @@ exceptions:
- Jinja
- Jinja2
- JWT
- MDX
- Namespace
- NATS
- Node
1 change: 1 addition & 0 deletions .vale/styles/spelling-exceptions.txt
@@ -79,6 +79,7 @@ kbps
Keycloak
Loopbacks
markdownlint
MDX
max_count
memgraph
menu_placement
18 changes: 18 additions & 0 deletions CHANGELOG.md
@@ -10,6 +10,24 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
This project uses [*towncrier*](https://towncrier.readthedocs.io/) and the changes for the upcoming release can be found in <https://github.com/opsmill/infrahub/tree/develop/infrahub/python_sdk/changelog/>.

<!-- towncrier release notes start -->

## [1.19.0](https://github.com/opsmill/infrahub-sdk-python/tree/v1.19.0) - 2026-03-16

### Added

- Added support for FileObject nodes with file upload and download capabilities. New methods `upload_from_path(path)` and `upload_from_bytes(content, name)` allow setting file content before saving, while `download_file(dest)` enables downloading files to memory or streaming to disk for large files. ([#ihs193](https://github.com/opsmill/infrahub-sdk-python/issues/ihs193))
- Python SDK API documentation is now generated directly from the docstrings of the classes, functions, and methods contained in the code. ([#201](https://github.com/opsmill/infrahub-sdk-python/issues/201))
- Added a 'py.typed' file to the project. This is to enable type checking when the Infrahub SDK is imported from other projects. The addition of this file could cause new typing issues in external projects until all typing issues have been resolved. Adding it to the project now to better highlight remaining issues.

### Changed

- Updated branch report command to use node metadata for proposed change creator information instead of the deprecated relationship-based approach. Requires Infrahub 1.7 or above.

### Fixed

- Allow SDK tracking feature to continue after encountering delete errors due to impacted nodes having already been deleted by cascade delete. ([#265](https://github.com/opsmill/infrahub-sdk-python/issues/265))
- Fixed Python SDK query generation regarding from_pool generated attribute value ([#497](https://github.com/opsmill/infrahub-sdk-python/issues/497))

## [1.18.1](https://github.com/opsmill/infrahub-sdk-python/tree/v1.18.1) - 2026-01-08

### Fixed
1 change: 1 addition & 0 deletions changelog/151.added.md
@@ -0,0 +1 @@
Add `infrahubctl schema export` command to export schemas from Infrahub.
1 change: 0 additions & 1 deletion changelog/201.added.md

This file was deleted.

1 change: 0 additions & 1 deletion changelog/265.fixed.md

This file was deleted.

1 change: 0 additions & 1 deletion changelog/497.fixed.md

This file was deleted.

6 changes: 3 additions & 3 deletions docs/AGENTS.md
@@ -1,4 +1,4 @@
-# docs/AGENTS.md
+# Documentation agents

Docusaurus documentation following Diataxis framework.

@@ -34,12 +34,12 @@ Sidebar navigation is dynamic: `sidebars-*.ts` files read the filesystem at buil

No manual sidebar update is needed when adding a new `.mdx` file. However, to control the display order of a new page, add its doc ID to the ordered list in the corresponding `sidebars-*.ts` file.

-## Adding Documentation
+## Adding documentation

1. Create MDX file in appropriate directory
2. Add frontmatter with `title`

-## MDX Pattern
+## MDX pattern

Use Tabs for async/sync examples, callouts for notes:

20 changes: 20 additions & 0 deletions docs/docs/infrahubctl/infrahubctl-schema.mdx
@@ -17,6 +17,7 @@ $ infrahubctl schema [OPTIONS] COMMAND [ARGS]...
**Commands**:

* `check`: Check if schema files are valid and what...
* `export`: Export the schema from Infrahub as YAML...
* `load`: Load one or multiple schema files into...

## `infrahubctl schema check`
@@ -40,6 +41,25 @@ $ infrahubctl schema check [OPTIONS] SCHEMAS...
* `--config-file TEXT`: [env var: INFRAHUBCTL_CONFIG; default: infrahubctl.toml]
* `--help`: Show this message and exit.

## `infrahubctl schema export`

Export the schema from Infrahub as YAML files, one per namespace.

**Usage**:

```console
$ infrahubctl schema export [OPTIONS]
```

**Options**:

* `--directory PATH`: Directory path to store schema files [default: (dynamic)]
* `--branch TEXT`: Branch from which to export the schema
* `--namespaces TEXT`: Namespace(s) to export (default: all user-defined)
* `--debug / --no-debug`: [default: no-debug]
* `--config-file TEXT`: [env var: INFRAHUBCTL_CONFIG; default: infrahubctl.toml]
* `--help`: Show this message and exit.

## `infrahubctl schema load`

Load one or multiple schema files into Infrahub.
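The `schema export` options documented in the hunk above could be combined as in this hypothetical invocation — the branch name, namespace, and directory here are placeholders, not values taken from the PR:

```console
$ infrahubctl schema export --branch main --namespaces Infra --directory ./schemas
```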
2 changes: 1 addition & 1 deletion docs/docs/python-sdk/guides/client.mdx
@@ -251,7 +251,7 @@ Your client is now configured to use the specified default branch instead of `ma

## Hello world example

-Let's create a simple "Hello World" example to verify your client configuration works correctly. This example will connect to your Infrahub instance and query the available accounts.
+Let's create a "Hello World" example to verify your client configuration works correctly. This example will connect to your Infrahub instance and query the available accounts.

1. Create a new file called `hello_world.py`:

2 changes: 1 addition & 1 deletion docs/docs/python-sdk/guides/python-typing.mdx
@@ -131,7 +131,7 @@ infrahubctl graphql generate-return-types queries/get_tags.gql

### Example workflow

-1. **Create your GraphQL queries** in `.gql` files preferably in a directory (e.g., `queries/`):
+1. **Create your GraphQL queries** in `.gql` files preferably in a directory (for example, `queries/`):

```graphql
# queries/get_tags.gql
140 changes: 140 additions & 0 deletions docs/docs/python-sdk/sdk_ref/infrahub_sdk/node/node.mdx
@@ -37,6 +37,44 @@ artifact_generate(self, name: str) -> None
artifact_fetch(self, name: str) -> str | dict[str, Any]
```

#### `download_file`

```python
download_file(self, dest: Path | None = None) -> bytes | int
```

Download the file content from this FileObject node.

This method is only available for nodes that inherit from CoreFileObject.
The node must have been saved (have an id) before calling this method.

**Args:**

- `dest`: Optional destination path. If provided, the file will be streamed
directly to this path (memory-efficient for large files) and the
number of bytes written will be returned. If not provided, the
file content will be returned as bytes.

**Returns:**

- If ``dest`` is None: The file content as bytes.
- If ``dest`` is provided: The number of bytes written to the file.

**Raises:**

- `FeatureNotSupportedError`: If this node doesn't inherit from CoreFileObject.
- `ValueError`: If the node hasn't been saved yet or file not found.
- `AuthenticationError`: If authentication fails.

**Examples:**

```python
>>> # Download to memory
>>> content = await contract.download_file()
>>> # Stream to file (memory-efficient for large files)
>>> bytes_written = await contract.download_file(dest=Path("/tmp/contract.pdf"))
```

#### `delete`

```python
@@ -180,6 +218,44 @@ artifact_generate(self, name: str) -> None
artifact_fetch(self, name: str) -> str | dict[str, Any]
```

#### `download_file`

```python
download_file(self, dest: Path | None = None) -> bytes | int
```

Download the file content from this FileObject node.

This method is only available for nodes that inherit from CoreFileObject.
The node must have been saved (have an id) before calling this method.

**Args:**

- `dest`: Optional destination path. If provided, the file will be streamed
directly to this path (memory-efficient for large files) and the
number of bytes written will be returned. If not provided, the
file content will be returned as bytes.

**Returns:**

- If ``dest`` is None: The file content as bytes.
- If ``dest`` is provided: The number of bytes written to the file.

**Raises:**

- `FeatureNotSupportedError`: If this node doesn't inherit from CoreFileObject.
- `ValueError`: If the node hasn't been saved yet or file not found.
- `AuthenticationError`: If authentication fails.

**Examples:**

```python
>>> # Download to memory
>>> content = contract.download_file()
>>> # Stream to file (memory-efficient for large files)
>>> bytes_written = contract.download_file(dest=Path("/tmp/contract.pdf"))
```

#### `delete`

```python
@@ -373,6 +449,70 @@ is_ip_address(self) -> bool
is_resource_pool(self) -> bool
```

#### `is_file_object`

```python
is_file_object(self) -> bool
```

Check if this node inherits from CoreFileObject and supports file uploads.

#### `upload_from_path`

```python
upload_from_path(self, path: Path) -> None
```

Set a file from disk to be uploaded when saving this FileObject node.

The file will be streamed during upload, avoiding loading the entire file into memory.

**Args:**

- `path`: Path to the file on disk.

**Raises:**

- `FeatureNotSupportedError`: If this node doesn't inherit from CoreFileObject.

#### `upload_from_bytes`

```python
upload_from_bytes(self, content: bytes | BinaryIO, name: str) -> None
```

Set content to be uploaded when saving this FileObject node.

The content can be provided as bytes or a file-like object.
Using BinaryIO is recommended for large content to stream during upload.

**Args:**

- `content`: The file content as bytes or a file-like object.
- `name`: The filename to use for the uploaded file.

**Raises:**

- `FeatureNotSupportedError`: If this node doesn't inherit from CoreFileObject.

**Examples:**

```python
>>> # Using bytes (for small files)
>>> node.upload_from_bytes(content=b"file content", name="example.txt")
>>> # Using file-like object (for large files)
>>> with open("/path/to/file.bin", "rb") as f:
... node.upload_from_bytes(content=f, name="file.bin")
```

#### `clear_file`

```python
clear_file(self) -> None
```

Clear any pending file content.

#### `get_raw_graphql_data`

```python
20 changes: 10 additions & 10 deletions docs/docs/python-sdk/topics/object_file.mdx
@@ -68,13 +68,13 @@ spec:

> Multiple documents in a single YAML file are also supported, each document will be loaded separately. Documents are separated by `---`

-### Data Processing Parameters
+### Data processing parameters

The `parameters` field controls how the data in the object file is processed before loading into Infrahub:

-| Parameter | Description | Default |
-| -------------- | ------------------------------------------------------------------------------------------------------- | ------- |
-| `expand_range` | When set to `true`, range patterns (e.g., `[1-5]`) in string fields are expanded into multiple objects. | `false` |
+| Parameter | Description | Default |
+| -------------- | -------------------------------------------------------------------------------------------------------------- | ------- |
+| `expand_range` | When set to `true`, range patterns (for example, `[1-5]`) in string fields are expanded into multiple objects. | `false` |

When `expand_range` is not specified, it defaults to `false`.

@@ -208,9 +208,9 @@ Metadata support is planned for future releases. Currently, the Object file does
3. Validate object files before loading them into production environments.
4. Use comments in your YAML files to document complex relationships or dependencies.

-## Range Expansion in Object Files
+## Range expansion in object files

-The Infrahub Python SDK supports **range expansion** for string fields in object files when the `parameters > expand_range` is set to `true`. This feature allows you to specify a range pattern (e.g., `[1-5]`) in any string value, and the SDK will automatically expand it into multiple objects during validation and processing.
+The Infrahub Python SDK supports **range expansion** for string fields in object files when the `parameters > expand_range` is set to `true`. This feature allows you to specify a range pattern (for example, `[1-5]`) in any string value, and the SDK will automatically expand it into multiple objects during validation and processing.

```yaml
---
@@ -225,15 +225,15 @@ spec:
type: Country
```

-### How Range Expansion Works
+### How range expansion works

- Any string field containing a pattern like `[1-5]`, `[10-15]`, or `[1,3,5]` will be expanded into multiple objects.
- If multiple fields in the same object use range expansion, **all expanded lists must have the same length**. If not, validation will fail.
- The expansion is performed before validation and processing, so all downstream logic works on the expanded data.

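The expansion and length-matching rules above can be sketched in a few lines of Python. This is a hypothetical helper for illustration only, not the SDK's actual implementation — the real parser may support more pattern forms and different error reporting:

```python
import re

# Matches a range pattern such as [1-5] or [1,3,5] inside a string value.
RANGE_RE = re.compile(r"\[([0-9,\-]+)\]")


def expand_value(value: str) -> list[str]:
    """Expand one range pattern in a string into a list of strings."""
    match = RANGE_RE.search(value)
    if not match:
        return [value]
    items: list[str] = []
    for part in match.group(1).split(","):
        if "-" in part:  # a span like 1-5
            start, end = part.split("-")
            items.extend(str(i) for i in range(int(start), int(end) + 1))
        else:  # an explicit item like 3 in [1,3,5]
            items.append(part)
    prefix, suffix = value[: match.start()], value[match.end() :]
    return [f"{prefix}{item}{suffix}" for item in items]


def expand_object(fields: dict[str, str]) -> list[dict[str, str]]:
    """Expand all string fields of one object; expanded lengths must agree."""
    expanded = {name: expand_value(val) for name, val in fields.items()}
    lengths = {len(vals) for vals in expanded.values() if len(vals) > 1}
    if len(lengths) > 1:
        raise ValueError(f"All expanded lists must have the same length, got {sorted(lengths)}")
    count = lengths.pop() if lengths else 1
    return [
        {name: vals[i] if len(vals) > 1 else vals[0] for name, vals in expanded.items()}
        for i in range(count)
    ]
```

Under these assumptions, `expand_object({"name": "atl[1-3]", "type": "Country"})` yields three objects named `atl1`, `atl2`, and `atl3`, and mismatched range lengths across fields raise a `ValueError`, mirroring the validation failure described above.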
### Examples

-#### Single Field Expansion
+#### Single field expansion

```yaml
spec:
@@ -256,7 +256,7 @@ This will expand to:
type: Country
```

-#### Multiple Field Expansion (Matching Lengths)
+#### Multiple field expansion (matching lengths)

```yaml
spec:
@@ -283,7 +283,7 @@ This will expand to:
type: Country
```

-#### Error: Mismatched Range Lengths
+#### Error: mismatched range lengths

If you use ranges of different lengths in multiple fields:

@@ -44,7 +44,7 @@ def _collapse_overloads(self, mdx_file: MdxFile) -> MdxFile:
h3_parsed = _parse_sections(h2.content, heading_level=3)
new_lines = h3_parsed.reassembled(processed_h3)
processed_h2.append(
-MdxSection(name=h2.name, heading_level=h2.heading_level, _lines=[h2.heading] + new_lines)
+MdxSection(name=h2.name, heading_level=h2.heading_level, _lines=[h2.heading, *new_lines])
)

new_content = "\n".join(parsed_h2.reassembled(processed_h2))
@@ -67,7 +67,7 @@ def _process_class_sections(self, h2_content: list[str]) -> list[ASection] | Non
h4_parsed = _parse_sections(h3.content, heading_level=4)
new_lines = h4_parsed.reassembled(collapsed_methods)
processed.append(
-MdxSection(name=h3.name, heading_level=h3.heading_level, _lines=[h3.heading] + new_lines)
+MdxSection(name=h3.name, heading_level=h3.heading_level, _lines=[h3.heading, *new_lines])
)

return processed if any_collapsed else None
@@ -19,7 +19,7 @@ def content(self) -> list[str]: ...

@property
def lines(self) -> list[str]:
-return [self.heading] + self.content
+return [self.heading, *self.content]


@dataclass
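The two hunks above apply Ruff's RUF005 fix (also reflected in commit "Avoid + operator to concatenate collections"), replacing list concatenation with iterable unpacking. A minimal standalone illustration of the equivalence:

```python
heading = "### lines"
content = ["first", "second"]

# Flagged style: `+` builds the result through an intermediate concatenation
old_style = [heading] + content

# Preferred style: unpack the existing list directly into the new literal
new_style = [heading, *content]

# Both produce the same list; the unpacked form also accepts any iterable
assert old_style == new_style == ["### lines", "first", "second"]
```

The unpacking form is preferred because it avoids creating a throwaway single-element list on the left-hand side and works uniformly for tuples, generators, and other iterables.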
1 change: 1 addition & 0 deletions infrahub_sdk/branch.py
@@ -19,6 +19,7 @@ class BranchStatus(str, Enum):
NEED_REBASE = "NEED_REBASE"
NEED_UPGRADE_REBASE = "NEED_UPGRADE_REBASE"
DELETING = "DELETING"
MERGED = "MERGED"


class BranchData(BaseModel):
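The `infrahub_sdk/branch.py` hunk above adds a `MERGED` member to `BranchStatus`. A reduced sketch for illustration — only two members are reproduced here; the real enum defines more (`NEED_REBASE`, `NEED_UPGRADE_REBASE`, and others):

```python
from enum import Enum


class BranchStatus(str, Enum):
    # Reduced copy for illustration; the real class defines additional members.
    DELETING = "DELETING"
    MERGED = "MERGED"


# Because the enum subclasses str, members compare equal to raw strings,
# so an API payload carrying "MERGED" maps cleanly onto the new status.
assert BranchStatus.MERGED == "MERGED"
assert BranchStatus("MERGED") is BranchStatus.MERGED
```

The `str` mixin is what lets callers round-trip status values between GraphQL responses and the enum without explicit conversion.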