Conversation
Merge stable into develop
Merge develop into infrahub-develop
* IHS-183: Fix typing errors for protocols
* fix ty
* fix failing tests
* address comments
Merge stable into develop
Merge develop into infrahub-develop
Merge stable into develop
Merge develop into infrahub-develop
Merge stable into develop
Merge stable into develop
Merge develop into infrahub-develop
* Upgrade infrahub-testcontainers to 1.7
* Remove XFAIL markers on tests requiring Infrahub 1.7
Merge develop into infrahub-develop
Merge stable into develop
Merge develop into infrahub-develop
This change adds support to create, update, and retrieve nodes whose schemas implement `CoreFileObject`.
The proposed user-facing API follows these key points:
- `node.upload_from_path(path)` to select a file from disk for upload (streams during upload)
- `node.upload_from_bytes(content, name)` to set content for upload (supports bytes or BinaryIO)
- `node.download_file(dest)` to download a file to memory or stream to disk
Being able to stream a file to or from disk is important in order to support large files without loading them entirely into memory (which would be problematic for files over 1 GB).
The names `upload_from_path` and `upload_from_bytes` were chosen to prevent a collision with a potential attribute or relationship called `file` in the schema. That is also the reason why the `file` GraphQL parameter sits outside the `data` parameter instead of inside it.
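To make the wire format concrete, here is a rough sketch of a GraphQL multipart request body per the GraphQL multipart request spec; the mutation name and `data` fields are placeholders, not the SDK's actual internals:

```python
import json

# `operations` carries the query and variables; `file` stays outside `data`
# and is declared as null, to be filled in by a multipart file part.
operations = {
    "query": "mutation($data: JSON!, $file: Upload!) { upload(data: $data, file: $file) { ok } }",
    "variables": {"data": {"name": "contract.pdf"}, "file": None},
}

# `map` ties the multipart form part named "0" to the `file` variable.
file_map = {"0": ["variables.file"]}

form_fields = {
    "operations": json.dumps(operations),
    "map": json.dumps(file_map),
    # a real request would add part "0" carrying the streamed file content
}
```

Because the file travels as its own form part, its bytes never need to be embedded in the JSON `data`, which is what makes streaming possible.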
Here we introduce a couple of components to try to make our code SOLID (it's not much for now, but it's honest work):
- `FileHandler` / `FileHandlerSync`: dedicated classes for file I/O operations
- `MultipartBuilder`: builds request payloads per the GraphQL Multipart Request Spec
It surely won't make our code fully SOLID, but it won't add to the burden for now.
So, for a user who has loaded a schema, using our SDK will look like this:
#### Upload a file when creating a node
```python
from pathlib import Path
from infrahub_sdk import InfrahubClientSync
client = InfrahubClientSync()
contract = client.create(
kind="NetworkCircuitContract", contract_start="2026-01-01", contract_end="2026-12-31"
)
# Option 1: Select file from disk (will be streamed during upload)
contract.upload_from_path(path=Path("/tmp/contract.pdf"))
# Option 2: Upload from memory (for small files or dynamically generated content)
contract.upload_from_bytes(content=b"file content", name="contract.pdf")
# Save as usual
contract.save()
```
#### Download a file from a node
```python
from pathlib import Path
contract = client.get(kind="NetworkCircuitContract", id="abc123")
# Download to memory (suitable for small files)
content = contract.download_file()
# Or stream directly to disk (suitable for large files)
bytes_written = contract.download_file(dest=Path("/tmp/downloaded.pdf"))
```
This was broken because the API server now exposes the `created_by` and `created_at` values inside the node metadata. Also adds `get_node_metadata` to the `CoreNodeBase` protocol.
Merge stable into develop with resolved conflicts
New /pre-ci & /feedback commands for Claude
…850)
* IHS-156 fix SDK from_pool attribute management before querying GraphQL API
* IHS-156 refactor the test cases to have a better view on the tests perimeter
* towncrier regarding Github issue #497
* IHS-156 update AGENTS doc using the command /feedback
* IHS-156 is_from_pool_attribute typing and naming
* IHS-156 refactor generate_input_data
* IHS-156 remove Jira issue related comment in the tests
* IHS-156 removed a part of fixtures to upper level
* IHS-156 tested all cases of generated_input_data
* IHS-156 fixed a bug in test_relationship_from_pool.py
* IHS-156 side effect bugs regarding from_pool attributes
* IHS-156 last feedbacks regarding documentation and variable naming
* IHS-156 fix typing error
* IHS-156 renamed payload_dict -> payload
…-develop-into-infrahub-develop # Conflicts: # tests/unit/sdk/conftest.py
…infrahub-develop Manual merge from develop into infrahub-develop. This includes additional test-only typing and linting rules adaptation
Add doc regeneration rule to AGENTS.md
Merge develop into infrahub-develop
Set version to 1.19.0 release candidate 0
Deploying infrahub-sdk-python

| | |
| --- | --- |
| Latest commit: | fe5fe71 |
| Status: | ✅ Deploy successful! |
| Preview URL: | https://1206d273.infrahub-sdk-python.pages.dev |
| Branch Preview URL: | https://wvd-20260316-prep-release-1.infrahub-sdk-python.pages.dev |
Walkthrough

Adds a docs generation system and MDX tooling (new docs/docs_generation package and many content-gen modules), expands the generated Python SDK reference (many new MDX files), and reorganizes Docusaurus sidebars (new docs/sidebars/* and sidebar-utils). Introduces file upload/download support across the SDK (MultipartBuilder, FileHandler, node upload/download methods, client multipart/streaming request paths), adds protocol/schema changes for CoreFileObject, updates CI to run docs unit tests and validation, introduces numerous unit and integration tests, and bumps the project version to 1.19.0.
Actionable comments posted: 17
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
infrahub_sdk/pytest_plugin/plugin.py (1)
80-85: ⚠️ Potential issue | 🟠 Major: Username/password auth path is unreachable due to `hasattr` checks.

Line 80 is effectively always true because these options are always present once registered, so the `elif` branch for username/password won't run. This can incorrectly prefer `api_token=None` over provided credentials.

Suggested fix
```diff
- if hasattr(session.config.option, "infrahub_key"):
-     client_config["api_token"] = session.config.option.infrahub_key
- elif hasattr(session.config.option, "infrahub_username") and hasattr(session.config.option, "infrahub_password"):
-     client_config.pop("api_token")
-     client_config["username"] = session.config.option.infrahub_username
-     client_config["password"] = session.config.option.infrahub_password
+ if session.config.option.infrahub_key:
+     client_config["api_token"] = session.config.option.infrahub_key
+ elif session.config.option.infrahub_username and session.config.option.infrahub_password:
+     client_config["username"] = session.config.option.infrahub_username
+     client_config["password"] = session.config.option.infrahub_password
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infrahub_sdk/pytest_plugin/plugin.py` around lines 80 - 85, The current branch selection uses hasattr against session.config.option which is always true for registered pytest options, making the username/password path unreachable; change the checks in the block that sets client_config to test the actual values instead of presence (e.g., use getattr(session.config.option, "infrahub_key", None) or truthiness of session.config.option.infrahub_key) and only set client_config["api_token"] when a non-empty token is provided, otherwise set client_config["username"] and client_config["password"] when both credentials are non-empty; update the logic around client_config.pop("api_token") so it only removes the token when switching to credential auth.docs/AGENTS.md (1)
1-37: ⚠️ Potential issue | 🟠 Major: Add required AGENTS metadata sections.
This AGENTS file should explicitly include agent name, purpose, responsibilities, inputs, outputs, and error handling sections so it stays self-descriptive for maintainers.
📝 Minimal structure to add
```diff
+# Agent metadata
+
+- **Agent name**: Documentation agent
+- **Purpose**: ...
+- **Responsibilities**:
+  - ...
+- **Inputs**:
+  - ...
+- **Outputs**:
+  - ...
+- **Error handling**:
+  - ...
```

As per coding guidelines "Maintain comprehensive agent documentation including agent name, purpose, responsibilities, inputs, outputs, and error handling".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/AGENTS.md` around lines 1 - 37, The AGENTS.md file is missing the required agent metadata sections; update docs/AGENTS.md (the AGENTS documentation page) to include clearly labeled headings for Agent Name, Purpose, Responsibilities, Inputs, Outputs, and Error Handling, and populate each section with a short descriptive paragraph or bullet list describing the agent (who/what it is), its primary goal (Purpose), what it must do or own (Responsibilities), expected inputs and their formats (Inputs), produced outputs and formats (Outputs), and how failures are detected and handled including retry/fallback and logging behavior (Error Handling); use consistent heading names (e.g., "Agent Name", "Purpose", "Responsibilities", "Inputs", "Outputs", "Error Handling") so the sidebar-utils/consumers can find and display the metadata.
🧹 Nitpick comments (13)
.vale/styles/GeneratedRef/spelling.yml (1)
6-6: Narrow the `py` spelling filter to match only word boundaries.

The regex `[pP]y.*\b` matches the `py` substring within common words like `copy` and `happy path`, filtering out text that may contain legitimate spelling issues. Anchor the pattern with word boundaries to match only `Py*` tokens:

Suggested change
```diff
- - '[pP]y.*\b'
+ - '\b[Pp]y\w*\b'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.vale/styles/GeneratedRef/spelling.yml at line 6, The regex '[pP]y.*\b' in the spelling rule is too permissive and matches substrings inside words (e.g., "copy" or "happy path"); update the pattern used in .vale/styles/GeneratedRef/spelling.yml (the '[pP]y.*\b' entry) to anchor with word boundaries on both sides—for example use a pattern that matches whole tokens like '\b[pP]y\w*\b' so only standalone tokens beginning with "Py"/"py" are filtered.docs/sidebars/sidebar-utils.ts (1)
4-4: Use regex anchor to remove only the `.mdx` suffix.

`.replace('.mdx', '')` removes the first occurrence of ".mdx" anywhere in the filename; use `/\.mdx$/` to target only the trailing extension.

♻️ Proposed refactor
```diff
- .map(file => file.replace('.mdx', ''))
+ .map(file => file.replace(/\.mdx$/, ''))
@@
- const base = file.replace('.mdx', '');
+ const base = file.replace(/\.mdx$/, '');
```

Also applies to: Line 12
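The same pitfall is easy to reproduce; a quick Python illustration (the filename is hypothetical) of single-occurrence replacement versus an end-anchored regex:

```python
import re

filename = "my.mdxnotes.mdx"  # hypothetical name containing ".mdx" mid-string

# JS-style first-occurrence replacement strips the wrong ".mdx":
naive = filename.replace(".mdx", "", 1)

# An end-anchored regex removes only the trailing extension:
anchored = re.sub(r"\.mdx$", "", filename)

print(naive)     # mynotes.mdx
print(anchored)  # my.mdxnotes
```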
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/sidebars/sidebar-utils.ts` at line 4, The current map callback uses file.replace('.mdx', '') which strips the first occurrence anywhere; change the replace call to use a regex anchored to the end (e.g., /\.mdx$/) so only the trailing .mdx extension is removed. Update both occurrences where .map(file => file.replace('.mdx', '')) appears (the map callback) to call replace with the end-anchored regex.infrahub_sdk/node/attribute.py (1)
16-41: Consider immutability expectations with NamedTuple.

The `_GraphQLPayloadAttribute` is a `NamedTuple`, which typically implies immutability. However, `add_properties` mutates `self.payload` (a dict) in place. While technically valid since dicts are mutable, this could be surprising to readers expecting NamedTuple semantics.

Consider either:

- Accepting this as an internal implementation detail (current approach is functional)
- Using a `@dataclass` instead if mutation is intentional

This is a minor observation - the current implementation works correctly.
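The surprise being described can be shown in isolation (the class below is a hypothetical stand-in, not the SDK's `_GraphQLPayloadAttribute`):

```python
from typing import Any, NamedTuple

class PayloadTuple(NamedTuple):
    # The tuple itself is immutable...
    payload: dict[str, Any]

p = PayloadTuple(payload={})
# ...but the dict it holds can still be mutated in place:
p.payload["owner"] = "abc"
print(p.payload)  # {'owner': 'abc'}
```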
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infrahub_sdk/node/attribute.py` around lines 16 - 41, The _GraphQLPayloadAttribute currently subclasses NamedTuple but mutates its payload in add_properties, which conflicts with immutability expectations; convert _GraphQLPayloadAttribute into a dataclass (from typing import Any, dataclasses.dataclass) keeping the same fields payload, variables, needs_metadata and preserve to_dict and add_properties behavior so mutation is explicit, or alternatively change add_properties to return a new _GraphQLPayloadAttribute with combined payload/variables instead of mutating self (update references to add_properties accordingly); locate the class and methods _GraphQLPayloadAttribute, to_dict, and add_properties to implement the chosen approach.docs/docs_generation/content_gen_methods/mdx/mdx_code_doc.py (1)
42-48: Edge case: comment lines in doctest output may prematurely close the fence.

Lines starting with `#` inside doctest output (e.g., `# Expected output`) will close the fence prematurely. If doctest examples include comment lines as part of the expected output, they would be incorrectly treated as fence terminators.

Consider whether this edge case applies to your documentation. If not, this is acceptable.
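One way to avoid treating `#` output lines as terminators is to close the fence only on blank lines or a new doctest prompt; a small illustrative helper (the function name is an assumption, not the module's API):

```python
import re

def is_doctest_prompt(line: str) -> bool:
    """Illustrative helper: true only for '>>>' or '...' prompt lines."""
    return bool(re.match(r"^\s*(>>>|\.\.\.)", line))

print(is_doctest_prompt(">>> x = 1"))          # True
print(is_doctest_prompt("# Expected output"))  # False
```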
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/docs_generation/content_gen_methods/mdx/mdx_code_doc.py` around lines 42 - 48, The current doctest-handling branch closes the fenced block when encountering lines starting with "#" which incorrectly treats comment lines in expected output as fence terminators; in the doctest handling code (the branch using the in_doctest variable and the result.append calls) change the condition so it no longer treats lines starting with "#" as a terminator — instead close the fence only on truly blank lines or when encountering a new doctest prompt (e.g., a line matching ^\s*(>>>|\.\.\.) ), so update the condition that currently checks line.startswith("#") to check for blank OR a doctest prompt and remove the check for leading "#" to avoid prematurely closing the fence.docs/docs/python-sdk/topics/object_file.mdx (1)
286-305: Present the range-length mismatch as a warning callout

This is user-impacting error behavior and should be highlighted with `:::warning` for consistency and visibility.

As per coding guidelines: Use callouts (`:::warning`, `:::note`, etc.) for important notes and information in documentation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/docs/python-sdk/topics/object_file.mdx` around lines 286 - 305, Wrap the "Error: mismatched range lengths" section in a warning callout (use :::warning ... :::) so the example YAML and the resulting error message ("Range expansion mismatch: fields expanded to different lengths: [3, 6]") are presented as a highlighted warning; update the block that starts with the "#### Error: mismatched range lengths" heading to be enclosed in the warning callout and ensure the YAML sample and the bash error snippet remain inside the callout for visibility..github/workflows/sync-docs.yml (1)
31-39: Consider removing `sidebar-utils.ts` before copying for consistency.

The workflow removes the old `sidebars-python-sdk.ts` and `sidebars-infrahubctl.ts` files before copying (lines 34-35), but `sidebar-utils.ts` is copied without first removing any existing version. For consistency and to avoid potential stale content issues if the file already exists in the target repo, consider adding a removal step.

Suggested change
```diff
  rm -rf target-repo/docs/docs-python-sdk/*
  rm -f target-repo/docs/sidebars-python-sdk.ts
  rm -f target-repo/docs/sidebars-infrahubctl.ts
+ rm -f target-repo/docs/sidebar-utils.ts
  cp -r source-repo/docs/docs/* target-repo/docs/docs-python-sdk/
  cp source-repo/docs/sidebars/sidebars-infrahubctl.ts target-repo/docs/
  cp source-repo/docs/sidebars/sidebars-python-sdk.ts target-repo/docs/
  cp source-repo/docs/sidebars/sidebar-utils.ts target-repo/docs/
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/sync-docs.yml around lines 31 - 39, The Sync folders step currently deletes old sidebars-python-sdk.ts and sidebars-infrahubctl.ts but does not remove an existing sidebar-utils.ts before copying, which can leave stale content; update the Sync folders step to remove the existing target-repo/docs/sidebar-utils.ts (e.g., add an rm -f target-repo/docs/sidebar-utils.ts line before copying) so that the copy of source-repo/docs/sidebars/sidebar-utils.ts always replaces any prior file.docs/docs/python-sdk/sdk_ref/infrahub_sdk/node/attribute.mdx (1)
16-26: Clarify that `value` is a property with getter/setter.

Both entries are named `value` without distinguishing context. Consider adding brief descriptions to clarify that one is for getting the attribute value and one is for setting it, or document it as a single property with get/set behavior:

#### `value`

Property to get or set the attribute's current value.

- **Getter:** `value(self) -> Any`
- **Setter:** `value(self, value: Any) -> None`
Verify each finding against the current code and only fix it if needed. In `@docs/docs/python-sdk/sdk_ref/infrahub_sdk/node/attribute.mdx` around lines 16 - 26, The two identical `value` entries are ambiguous; update the documentation for the `value` property to clearly indicate get/set behavior by either combining them into a single "value" property section that states "Property to get or set the attribute's current value" and lists the getter signature `value(self) -> Any` and setter signature `value(self, value: Any) -> None`, or keep separate headings but rename them to "value (getter)" and "value (setter)" and add a one-line description for each; ensure you reference the `value(self) -> Any` and `value(self, value: Any) -> None` signatures in the chosen format.tests/unit/sdk/pool/test_pool_queries.py (1)
75-117: Consider extracting shared pool-node setup to reduce duplication.

Both tests repeat near-identical `InfrahubNode`/`InfrahubNodeSync` construction and assertions. A small helper fixture/factory would make future changes safer and shorter.

Also applies to: 150-185
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/sdk/pool/test_pool_queries.py` around lines 75 - 117, The test duplicates InfrahubNode/InfrahubNodeSync setup and identical assertions; extract a small helper or fixture (e.g., make_pool_with_prefix or pool_factory) that accepts client and sync flag and returns (ip_prefix, ip_pool) for use in both branches, then call get_pool_allocated_resources on the returned ip_pool and assert the two resource IDs; update both synchronous and async branches to use this factory to remove the repeated InfrahubNode/InfrahubNodeSync construction and duplicated assertion lists.docs/docs_generation/content_gen_methods/command_output_method.py (1)
40-51: Consider quoting the temp file path for robustness.

If `tmp_path` contains spaces or special characters, the command would fail. While system temp directories typically don't have spaces, quoting the path would make this more robust.
- full_cmd = f"{self.command.build()} --output {tmp_path}" + full_cmd = f"{self.command.build()} --output '{tmp_path}'"Alternatively, use
shlex.quote(str(tmp_path))for proper shell escaping.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/docs_generation/content_gen_methods/command_output_method.py` around lines 40 - 51, In apply(), the constructed shell command uses tmp_path unescaped which fails if the temp path contains spaces/special chars; update the full_cmd construction (where full_cmd = f"{self.command.build()} --output {tmp_path}") to safely escape or quote the path (e.g., use shlex.quote(str(tmp_path)) or wrap the path in quotes) so the subprocess call via self.context.run receives a properly-escaped --output argument; ensure you import shlex if you use shlex.quote and update the reference to tmp_path accordingly.tests/unit/doc_generation/mdx/test_mdx_ordered_section.py (1)
12-95: Add one `sub_priorities` case.

This module only pins top-level `priority.names` ordering. The branch that wraps child sections in a nested `OrderedMdxSection` is still untested here, so recursive ordering can regress without tripping this file.
Verify each finding against the current code and only fix it if needed. In `@tests/unit/doc_generation/mdx/test_mdx_ordered_section.py` around lines 12 - 95, Add a test exercising nested/reursive ordering by creating an outer OrderedMdxSection via _make_ordered with children that include at least two child sections (each with their own sub-headers/body), and pass a SectionPriority that supplies sub_priorities mapping one child name to a SectionPriority(names=[...]) to enforce ordering inside that child; assert the resulting ordered.content (or .lines) shows the outer children ordered per priority.names and the targeted child's inner headers ordered per the sub_priority. Use symbols OrderedMdxSection, _make_ordered, SectionPriority, child_heading_level and name strings for locating where to add the new test (suggest name it test_sub_priorities_orders_children_recursively).tests/unit/sdk/pool/test_attribute_from_pool.py (1)
70-84: Parameterize the node/client matrix so both implementations hit the same assertions.

`_generate_mutation_query()` and node-backed pool serialization are only asserted on `InfrahubNode` here; the sync half currently stops at `_generate_input_data()`. A shared parametrized matrix would remove the duplication and keep async/sync coverage aligned.

Also applies to: 92-130, 140-204
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/sdk/pool/test_attribute_from_pool.py` around lines 70 - 84, The test only exercises the async path (_generate_mutation_query on InfrahubNode) so add a pytest parametrized matrix that runs the same assertions for both implementations: parametrize over (client_fixture, node_factory_or_class) and for each case call the appropriate serializer method (use _generate_mutation_query for the async/node-backed implementation and _generate_input_data for the sync/client-backed implementation) by resolving the method name dynamically (e.g., method = getattr(node, method_name)) so both branches produce the same shape for vlan_id ({"value": None}); update the other related tests (the ranges noted) to use the same parametrization pattern so async and sync coverage stays aligned and duplicate code is removed.docs/docs_generation/content_gen_methods/jinja2_method.py (1)
38-39: Consider making this an async method or providing an async variant.

`apply()` currently blocks by calling `asyncio.run()` on async rendering. While docs generation is not presently invoked from async contexts, this pattern is fragile and violates the guideline against mixing async/sync inappropriately. Either keep this synchronous adapter clearly scoped to synchronous-only contexts, or expose an async path alongside it for future-proofing.
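The blocking pattern under discussion reduces to this shape (a stand-in coroutine, not the real Jinja2 template API):

```python
import asyncio

async def render(template: str, variables: dict) -> str:
    # stand-in for an async template render
    return template.format(**variables)

def apply(template: str, variables: dict) -> str:
    # Synchronous adapter: asyncio.run raises RuntimeError if an event
    # loop is already running, which is why this is fragile when called
    # from async contexts.
    return asyncio.run(render(template, variables))

print(apply("Hello {name}", {"name": "docs"}))  # Hello docs
```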
Verify each finding against the current code and only fix it if needed. In `@docs/docs_generation/content_gen_methods/jinja2_method.py` around lines 38 - 39, The apply method currently blocks by calling asyncio.run on an async render (self.template.render); change the API to avoid mixing sync/async by making an async variant and updating callers: implement async def apply_async(self) -> str that awaits self.template.render(variables=self.template_variables), and either rename the existing apply to a synchronous thin wrapper that calls asyncio.run(apply_async()) only for strictly synchronous use-cases or deprecate it; update references to use apply_async where called from async contexts and keep apply as an explicit sync-only adapter if needed.infrahub_sdk/graphql/multipart.py (1)
57-99: Consider making `build_payload()` variable-path configurable.

`build_file_map()` supports custom keys/paths, but `build_payload()` currently locks behavior to `variables.file` and key `"0"`. Exposing those as parameters would make this builder reusable across more upload mutation shapes.
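A hedged sketch of what the suggested parameterization could look like (signature and defaults are assumptions, not the SDK's current `MultipartBuilder` API):

```python
def build_file_map(file_key: str = "0", variable_path: str = "variables.file") -> dict[str, list[str]]:
    # Maps a multipart part name to the GraphQL variable it populates.
    return {file_key: [variable_path]}

print(build_file_map())                             # {'0': ['variables.file']}
print(build_file_map("1", "variables.input.file"))  # {'1': ['variables.input.file']}
```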
Verify each finding against the current code and only fix it if needed. In `@infrahub_sdk/graphql/multipart.py` around lines 57 - 99, Change build_payload to accept configurable variable path and file key (e.g., add parameters like variable_path: str = "file" and file_key: str = "0"); instead of hardcoding variables = {**variables, "file": None} set the variable at the given variable_path (supporting nested paths like "input.file") to None, call MultipartBuilder.build_file_map(...) using the chosen file_key and the path used, and add the file entry to files using files[file_key] = (file_name, file_content) when file_content is provided; update callers of build_payload to pass custom variable_path/file_key as needed and keep defaults to preserve current behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@AGENTS.md`:
- Line 59: Update the checklist entry that currently only mentions "Run `uv run
invoke docs-generate`" to include the follow-up validation command; change the
instruction so it reads that after creating/modifying/deleting CLI commands, SDK
config, or Python docstrings you must run both `uv run invoke docs-generate` and
`uv run invoke docs-validate` (or the suggested wording "Run `uv run invoke
docs-validate` after making code changes to verify documentation is current") so
the documented workflow includes the required validation step.
In `@CHANGELOG.md`:
- Line 18: The changelog bullet referencing FileObject (methods
upload_from_path, upload_from_bytes, download_file) has a broken issue link
using "/issues/ihs193"; update the markdown to point to the correct GitHub issue
URL (replace `/issues/ihs193` with the numeric issue path, e.g. `/issues/193`,
or the proper full issue URL) so the link resolves correctly.
In `@dev/commands/feedback.md`:
- Around line 89-92: The final unordered list (lines containing "Edit existing
files when updating documentation", "Create new files only when no appropriate
existing file exists", "Update `MEMORY.md` with approved memory changes", "Keep
all changes minimal and focused — don't over-document") ends at EOF without a
trailing blank line; add a single newline character after the last list item so
the file ends with a blank line and ensure Markdown lists have blank lines
before and after (apply the same fix in the feedback.md final list).
In `@docs/AGENTS.md`:
- Around line 32-35: Insert a single blank line after the sidebar bullet list
(the two bullets that start with "**infrahubctl**" and "**python-sdk**") so the
following paragraph ("No manual sidebar update is needed when adding a new
`.mdx` file...") is separated from the list; ensure there is one empty line
before and after the list per the Markdown guideline.
In
`@docs/docs_generation/content_gen_methods/mdx/mdx_collapsed_overload_section.py`:
- Around line 94-113: _split_params currently splits on top-level commas but
doesn't ignore commas inside quoted strings nor treat Python positional-only '/'
and keyword-only '*' markers as non-parameters; update _split_params to track
quote state and escapes (handle both single and double quotes and backslash
escapes) so commas inside quotes are not treated as separators, continue using
depth for bracketed groups, and after splitting filter out tokens that are
exactly "/" or "*" (or strings composed only of whitespace plus "/" or "*") so
param_count()/from_overloads() won't count Python syntax markers as parameters;
reference function _split_params and usages like param_count()/from_overloads()
to verify correct selection.
In `@docs/docs_generation/content_gen_methods/mdx/mdx_priority.py`:
- Around line 41-52: SectionPriority allows duplicate entries which downstream
reorder only honors first occurrence; add validation in
SectionPriority.__post_init__ to detect duplicates and raise a ValueError when
found: check duplicates in the top-level names list and for each entry in
sub_priorities ensure its list has no repeated names (and optionally ensure keys
in sub_priorities are unique by construction); reference the dataclass
SectionPriority and its attributes names and sub_priorities when implementing
the check so malformed nested priority configs fail fast instead of silently
being ignored.
In `@docs/docs_generation/content_gen_methods/mdx/mdx_section.py`:
- Around line 119-145: The _parse_sections function currently treats heading
lines inside fenced code blocks as real headings; fix it by detecting
fenced-code delimiters (use a regex to set/clear a fence_delimiter variable when
encountering a fence match) and, while fence_delimiter is set, skip heading
detection (i.e., set level = None) and append lines to the current
MdxSection._lines or preamble as appropriate; ensure you use
fence_match/fence_delimiter logic before calling _heading_level(line), and
consistently append to MdxSection._lines (not .lines) when adding lines to a
section; keep _extract_heading_name and existing section-append logic unchanged.
In `@docs/docs_generation/helpers.py`:
- Around line 22-23: The loop is using the private API
env_settings._extract_field_info; replace it by deriving env var names from the
stable public config: read env_settings.model_config.get("env_prefix", "") and
model_config.get("case_sensitive", False), then for each field use the
field.name and any field.field_info.alias / field.field_info.validation_alias /
field.field_info.alias_choices (AliasChoices) to produce candidate names,
normalize case (upper() when not case_sensitive) and prepend the env_prefix to
form the final env var names, and append those to env_vars[field_key] instead of
calling env_settings._extract_field_info.
In `@docs/docs/python-sdk/sdk_ref/infrahub_sdk/client.mdx`:
- Around line 785-787: The parameter descriptions for timeout, tracker, and
raise_for_error are copy-pasted and incorrect; update the docs so that `timeout`
describes the timeout in seconds (how long to wait before aborting the request),
`tracker` describes the tracking identifier used to correlate requests or state
(not pagination offset), and `raise_for_error` describes the error-handling
behavior (whether to raise exceptions on errors or return error objects); make
these fixes for both places where the three params appear and then regenerate
the SDK docs using the command `uv run invoke generate-sdk`.
- Line 844: The docs line uses the wrong parameter name `size`; update the SDK
reference to match the method signature by replacing `size` with `prefix_length`
(the actual parameter used), and then regenerate the docs using the provided
command (`uv run invoke generate-sdk`) so the rendered docs reflect the
corrected parameter name; ensure any nearby references or examples also use
`prefix_length` to keep the documentation consistent with the function
signature.
- Around line 209-212: The docs incorrectly list the return type for
InfrahubClient.filters as list[InfrahubNodeSync]; update the generated SDK docs
by regenerating rather than editing: run "uv run invoke generate-sdk" to
regenerate the Python SDK docs so the filters method on InfrahubClient (async)
shows the correct return type list[InfrahubNode] (replace InfrahubNodeSync with
InfrahubNode in the generated output for the filters method signature/Returns
section).
In `@docs/docs/python-sdk/sdk_ref/infrahub_sdk/node/constants.mdx`:
- Line 8: The docs note in constants.mdx is currently plain italic text; replace
it with a Docusaurus callout by wrapping the message in an :::note block (i.e.,
change the "*This module is empty or contains only private/internal
implementations.*" line to use the Docusaurus callout syntax `:::note` ...
`:::`) so the module-status note renders as a proper callout in the docs.
In `@docs/docs/python-sdk/sdk_ref/infrahub_sdk/node/node.mdx`:
- Around line 46-50: The plain-paragraph caveats about CoreFileObject and saved
nodes are easy to miss; update node.mdx so the constraints about "only available
for nodes that inherit from CoreFileObject" and "node must have been saved (have
an id) before calling this method" are rendered as callouts (e.g., :::warning or
:::note) rather than regular text; edit the section around the "Download the
file content from this FileObject node." description to wrap those two sentences
in appropriate callout blocks (and if this file is generated, update the
template that emits these lines so future generated pages use :::note/:::warning
for CoreFileObject and saved-node caveats) referencing CoreFileObject and
FileObject in the callout content.
In `@tests/AGENTS.md`:
- Around line 20-25: AGENTS.md is missing required per-agent metadata sections;
update the file to include, for each agent listed, a clear scaffold with
headings "Agent Name", "Purpose", "Responsibilities", "Inputs", "Outputs", and
"Error Handling" (use consistent markdown H2/H3 headings for each agent), fill
each section with a one-sentence summary for Purpose and Responsibilities, list
expected input fields and types under Inputs, list produced outputs under
Outputs, and describe failure modes and retry/alert behavior under Error
Handling so every agent entry (e.g., any agent references in the existing tree
like ctl/ or sdk/ agents) has these six required sections.
In `@tests/unit/doc_generation/content_gen_methods/test_file_printing_method.py`:
- Around line 29-30: Replace the loose falsy check on the result with a strict
equality check to the exact empty value returned by the method under test:
change "assert not result" to "assert result == ''" (or "assert result == []" /
the exact empty type that apply() should return) so the test fails if apply()
returns None or any other non-empty falsy value; locate the assertion that
follows the call to apply() in the file_printing test and update it accordingly.
In `@tests/unit/sdk/pool/test_allocate.py`:
- Around line 118-120: The assertions always cast ip_address to InfrahubNodeSync
which hides type mismatches for the async branch and forces ignores; update the
test to branch on the client type returned by allocate_next_ip_address /
allocate_next_ip_prefix: when the call returns CoreNode assert by casting
ip_address to the async type (CoreNode / InfrahubNode as appropriate) and when
it returns CoreNodeSync cast to InfrahubNodeSync, then check .address.value and
.description.value accordingly so each branch uses the correct type (refer to
allocate_next_ip_address, allocate_next_ip_prefix, CoreNode, CoreNodeSync,
InfrahubNodeSync and the ip_address variable).
In `@tests/unit/sdk/test_node.py`:
- Around line 3153-3179: The test test_node_get_file_for_upload_path is using
NamedTemporaryFile and then reopening the same path in
node._get_file_for_upload()/InfrahubNode.upload_from_path which fails on
Windows; modify the test to use a TemporaryDirectory and create the file with
Path.write_bytes() (or explicitly close the NamedTemporaryFile before calling
upload_from_path) so the file can be reopened safely; ensure assertions still
check prepared.filename, prepared.should_close, prepared.file_object contents
and that you call prepared.file_object.close() at the end.
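The portable pattern the finding asks for can be sketched like this (the file name `payload.bin` is illustrative; `NamedTemporaryFile` keeps its handle open on Windows, which blocks reopening the same path):

```python
import tempfile
from pathlib import Path

# Write through a TemporaryDirectory so the file can be reopened on any OS.
with tempfile.TemporaryDirectory() as tmp_dir:
    file_path = Path(tmp_dir) / "payload.bin"
    file_path.write_bytes(b"file content")
    with file_path.open("rb") as fh:  # safe to reopen, even on Windows
        data = fh.read()

assert data == b"file content"
```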
---
Outside diff comments:
In `@docs/AGENTS.md`:
- Around line 1-37: The AGENTS.md file is missing the required agent metadata
sections; update docs/AGENTS.md (the AGENTS documentation page) to include
clearly labeled headings for Agent Name, Purpose, Responsibilities, Inputs,
Outputs, and Error Handling, and populate each section with a short descriptive
paragraph or bullet list describing the agent (who/what it is), its primary goal
(Purpose), what it must do or own (Responsibilities), expected inputs and their
formats (Inputs), produced outputs and formats (Outputs), and how failures are
detected and handled including retry/fallback and logging behavior (Error
Handling); use consistent heading names (e.g., "Agent Name", "Purpose",
"Responsibilities", "Inputs", "Outputs", "Error Handling") so the
sidebar-utils/consumers can find and display the metadata.
In `@infrahub_sdk/pytest_plugin/plugin.py`:
- Around line 80-85: The current branch selection uses hasattr against
session.config.option which is always true for registered pytest options, making
the username/password path unreachable; change the checks in the block that sets
client_config to test the actual values instead of presence (e.g., use
getattr(session.config.option, "infrahub_key", None) or truthiness of
session.config.option.infrahub_key) and only set client_config["api_token"] when
a non-empty token is provided, otherwise set client_config["username"] and
client_config["password"] when both credentials are non-empty; update the logic
around client_config.pop("api_token") so it only removes the token when
switching to credential auth.
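A minimal sketch of the value-based selection the finding describes. `infrahub_key` comes from the finding; the `infrahub_username`/`infrahub_password` option names are assumptions for illustration:

```python
from types import SimpleNamespace

def build_client_config(option) -> dict:
    # Check values, not hasattr(): registered pytest options always exist
    # on config.option, so hasattr() is always True.
    client_config: dict = {"api_token": None}
    token = getattr(option, "infrahub_key", None)
    username = getattr(option, "infrahub_username", None)  # hypothetical name
    password = getattr(option, "infrahub_password", None)  # hypothetical name
    if token:
        client_config["api_token"] = token
    elif username and password:
        # Only drop the token entry when actually switching to credential auth.
        client_config.pop("api_token", None)
        client_config["username"] = username
        client_config["password"] = password
    return client_config

cfg = build_client_config(SimpleNamespace(infrahub_key="tok", infrahub_username="", infrahub_password=""))
assert cfg["api_token"] == "tok"
```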
---
Nitpick comments:
In @.github/workflows/sync-docs.yml:
- Around line 31-39: The Sync folders step currently deletes old
sidebars-python-sdk.ts and sidebars-infrahubctl.ts but does not remove an
existing sidebar-utils.ts before copying, which can leave stale content; update
the Sync folders step to remove the existing target-repo/docs/sidebar-utils.ts
(e.g., add an rm -f target-repo/docs/sidebar-utils.ts line before copying) so
that the copy of source-repo/docs/sidebars/sidebar-utils.ts always replaces any
prior file.
In @.vale/styles/GeneratedRef/spelling.yml:
- Line 6: The regex '[pP]y.*\b' in the spelling rule is too permissive and
matches substrings inside words (e.g., "copy" or "happy path"); update the
pattern used in .vale/styles/GeneratedRef/spelling.yml (the '[pP]y.*\b' entry)
to anchor with word boundaries on both sides—for example use a pattern that
matches whole tokens like '\b[pP]y\w*\b' so only standalone tokens beginning
with "Py"/"py" are filtered.
In `@docs/docs_generation/content_gen_methods/command_output_method.py`:
- Around line 40-51: In apply(), the constructed shell command uses tmp_path
unescaped which fails if the temp path contains spaces/special chars; update the
full_cmd construction (where full_cmd = f"{self.command.build()} --output
{tmp_path}") to safely escape or quote the path (e.g., use
shlex.quote(str(tmp_path)) or wrap the path in quotes) so the subprocess call
via self.context.run receives a properly-escaped --output argument; ensure you
import shlex if you use shlex.quote and update the reference to tmp_path
accordingly.
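The quoting fix can be sketched as follows; the command string stands in for `self.command.build()` and the path is an illustrative one containing spaces:

```python
import shlex
from pathlib import Path

tmp_path = Path("/tmp/generated docs/output 1.mdx")  # illustrative path with spaces
base_cmd = "infrahubctl render"                      # stand-in for self.command.build()

# shlex.quote() wraps unsafe paths in single quotes so the shell sees one argument.
full_cmd = f"{base_cmd} --output {shlex.quote(str(tmp_path))}"
assert full_cmd == "infrahubctl render --output '/tmp/generated docs/output 1.mdx'"
```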
In `@docs/docs_generation/content_gen_methods/jinja2_method.py`:
- Around line 38-39: The apply method currently blocks by calling asyncio.run on
an async render (self.template.render); change the API to avoid mixing
sync/async by making an async variant and updating callers: implement async def
apply_async(self) -> str that awaits
self.template.render(variables=self.template_variables), and either rename the
existing apply to a synchronous thin wrapper that calls
asyncio.run(apply_async()) only for strictly synchronous use-cases or deprecate
it; update references to use apply_async where called from async contexts and
keep apply as an explicit sync-only adapter if needed.
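A hedged sketch of the suggested split. The class and method names mirror the finding; `fake_render` stands in for `self.template.render(...)`:

```python
import asyncio

class Jinja2Method:
    """Sketch of the sync/async split suggested above (names are illustrative)."""

    def __init__(self, render):
        self._render = render  # async callable returning the rendered string

    async def apply_async(self) -> str:
        # Safe to await from inside a running event loop.
        return await self._render()

    def apply(self) -> str:
        # Sync-only adapter; asyncio.run() raises if a loop is already running.
        return asyncio.run(self.apply_async())

async def fake_render() -> str:  # stand-in for self.template.render(variables=...)
    return "rendered"

result = Jinja2Method(fake_render).apply()
assert result == "rendered"
```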
In `@docs/docs_generation/content_gen_methods/mdx/mdx_code_doc.py`:
- Around line 42-48: The current doctest-handling branch closes the fenced block
when encountering lines starting with "#" which incorrectly treats comment lines
in expected output as fence terminators; in the doctest handling code (the
branch using the in_doctest variable and the result.append calls) change the
condition so it no longer treats lines starting with "#" as a terminator —
instead close the fence only on truly blank lines or when encountering a new
doctest prompt (e.g., a line matching ^\s*(>>>|\.\.\.) ), so update the
condition that currently checks line.startswith("#") to check for blank OR a
doctest prompt and remove the check for leading "#" to avoid prematurely closing
the fence.
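The proposed terminator condition can be expressed as a small predicate (illustrative only; the real branch also tracks `in_doctest` state):

```python
import re

# A doctest prompt or continuation line: ">>> expr" or "... more".
PROMPT = re.compile(r"^\s*(>>>|\.\.\.)")

def closes_fence(line: str) -> bool:
    """Close the doctest fence only on a blank line or a new prompt,
    never on '#' comment lines that belong to expected output."""
    return not line.strip() or bool(PROMPT.match(line))

assert closes_fence("")
assert closes_fence(">>> node.save()")
assert closes_fence("    ... more_code()")
assert not closes_fence("# a comment inside expected output")
```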
In `@docs/docs/python-sdk/sdk_ref/infrahub_sdk/node/attribute.mdx`:
- Around line 16-26: The two identical `value` entries are ambiguous; update the
documentation for the `value` property to clearly indicate get/set behavior by
either combining them into a single "value" property section that states
"Property to get or set the attribute's current value" and lists the getter
signature `value(self) -> Any` and setter signature `value(self, value: Any) ->
None`, or keep separate headings but rename them to "value (getter)" and "value
(setter)" and add a one-line description for each; ensure you reference the
`value(self) -> Any` and `value(self, value: Any) -> None` signatures in the
chosen format.
In `@docs/docs/python-sdk/topics/object_file.mdx`:
- Around line 286-305: Wrap the "Error: mismatched range lengths" section in a
warning callout (use :::warning ... :::) so the example YAML and the resulting
error message ("Range expansion mismatch: fields expanded to different lengths:
[3, 6]") are presented as a highlighted warning; update the block that starts
with the "#### Error: mismatched range lengths" heading to be enclosed in the
warning callout and ensure the YAML sample and the bash error snippet remain
inside the callout for visibility.
In `@docs/sidebars/sidebar-utils.ts`:
- Line 4: The current map callback uses file.replace('.mdx', '') which strips
the first occurrence anywhere; change the replace call to use a regex anchored
to the end (e.g., /\.mdx$/) so only the trailing .mdx extension is removed.
Update both occurrences where .map(file => file.replace('.mdx', '')) appears
(the map callback) to call replace with the end-anchored regex.
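The sidebar file is TypeScript; this Python sketch only mirrors the behavior difference (`str.replace(..., count=1)` matches JavaScript's first-occurrence string-replace semantics, and `re.sub(r"\.mdx$", ...)` matches the end-anchored `/\.mdx$/`). The file name is contrived to contain `.mdx` mid-string:

```python
import re

name = "sdk.mdx-notes.mdx"  # contrived: ".mdx" appears mid-name

# First-occurrence replace strips the wrong part:
assert name.replace(".mdx", "", 1) == "sdk-notes.mdx"

# An end-anchored regex strips only the trailing extension:
assert re.sub(r"\.mdx$", "", name) == "sdk.mdx-notes"
```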
In `@infrahub_sdk/graphql/multipart.py`:
- Around line 57-99: Change build_payload to accept configurable variable path
and file key (e.g., add parameters like variable_path: str = "file" and
file_key: str = "0"); instead of hardcoding variables = {**variables, "file":
None} set the variable at the given variable_path (supporting nested paths like
"input.file") to None, call MultipartBuilder.build_file_map(...) using the
chosen file_key and the path used, and add the file entry to files using
files[file_key] = (file_name, file_content) when file_content is provided;
update callers of build_payload to pass custom variable_path/file_key as needed
and keep defaults to preserve current behavior.
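Setting a dotted `variable_path` like `"input.file"` can be sketched with a small helper (the function name is illustrative, not part of the SDK):

```python
from typing import Any

def set_variable_path(variables: dict, path: str, value: Any) -> dict:
    """Return a copy of *variables* with a dotted *path* (e.g. 'input.file') set."""
    result: dict = {**variables}
    node = result
    parts = path.split(".")
    for part in parts[:-1]:
        node[part] = {**node.get(part, {})}  # copy each level we descend into
        node = node[part]
    node[parts[-1]] = value
    return result

# Default path behaves like the current hardcoded {**variables, "file": None}:
assert set_variable_path({"data": {"name": "x"}}, "file", None) == {"data": {"name": "x"}, "file": None}
# Nested paths become possible:
assert set_variable_path({}, "input.file", None) == {"input": {"file": None}}
```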
In `@infrahub_sdk/node/attribute.py`:
- Around line 16-41: The _GraphQLPayloadAttribute currently subclasses
NamedTuple but mutates its payload in add_properties, which conflicts with
immutability expectations; convert _GraphQLPayloadAttribute into a dataclass
(from typing import Any, dataclasses.dataclass) keeping the same fields payload,
variables, needs_metadata and preserve to_dict and add_properties behavior so
mutation is explicit, or alternatively change add_properties to return a new
_GraphQLPayloadAttribute with combined payload/variables instead of mutating
self (update references to add_properties accordingly); locate the class and
methods _GraphQLPayloadAttribute, to_dict, and add_properties to implement the
chosen approach.
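The second option (return a new instance instead of mutating) can be sketched with a dataclass; the field names come from the finding, the rest is illustrative:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class GraphQLPayloadAttribute:
    """Dataclass sketch of the suggested refactor."""

    payload: dict[str, Any] = field(default_factory=dict)
    variables: dict[str, Any] = field(default_factory=dict)
    needs_metadata: bool = False

    def to_dict(self) -> dict[str, Any]:
        return dict(self.payload)

    def add_properties(self, other: "GraphQLPayloadAttribute") -> "GraphQLPayloadAttribute":
        # Return a combined copy instead of mutating self in place.
        return GraphQLPayloadAttribute(
            payload={**self.payload, **other.payload},
            variables={**self.variables, **other.variables},
            needs_metadata=self.needs_metadata or other.needs_metadata,
        )

a = GraphQLPayloadAttribute(payload={"x": 1}, variables={"$v": 1})
b = GraphQLPayloadAttribute(payload={"y": 2}, needs_metadata=True)
merged = a.add_properties(b)
assert merged.to_dict() == {"x": 1, "y": 2}
assert a.to_dict() == {"x": 1}  # original is untouched
```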
In `@tests/unit/doc_generation/mdx/test_mdx_ordered_section.py`:
- Around line 12-95: Add a test exercising nested/recursive ordering by creating
an outer OrderedMdxSection via _make_ordered with children that include at least
two child sections (each with their own sub-headers/body), and pass a
SectionPriority that supplies sub_priorities mapping one child name to a
SectionPriority(names=[...]) to enforce ordering inside that child; assert the
resulting ordered.content (or .lines) shows the outer children ordered per
priority.names and the targeted child's inner headers ordered per the
sub_priority. Use symbols OrderedMdxSection, _make_ordered, SectionPriority,
child_heading_level and name strings for locating where to add the new test
(a suggested name: test_sub_priorities_orders_children_recursively).
In `@tests/unit/sdk/pool/test_attribute_from_pool.py`:
- Around line 70-84: The test only exercises the async path
(_generate_mutation_query on InfrahubNode) so add a pytest parametrized matrix
that runs the same assertions for both implementations: parametrize over
(client_fixture, node_factory_or_class) and for each case call the appropriate
serializer method (use _generate_mutation_query for the async/node-backed
implementation and _generate_input_data for the sync/client-backed
implementation) by resolving the method name dynamically (e.g., method =
getattr(node, method_name)) so both branches produce the same shape for vlan_id
({"value": None}); update the other related tests (the ranges noted) to use the
same parametrization pattern so async and sync coverage stays aligned and
duplicate code is removed.
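A hedged sketch of the parametrization pattern, using stand-in classes since the real `InfrahubNode`/`InfrahubNodeSync` fixtures live in the repo's conftest:

```python
import pytest

class FakeAsyncNode:
    """Stand-in for the InfrahubNode-backed implementation."""
    def _generate_mutation_query(self) -> dict:
        return {"vlan_id": {"value": None}}

class FakeSyncNode:
    """Stand-in for the sync/client-backed implementation."""
    def _generate_input_data(self) -> dict:
        return {"vlan_id": {"value": None}}

@pytest.mark.parametrize(
    ("node", "method_name"),
    [
        (FakeAsyncNode(), "_generate_mutation_query"),
        (FakeSyncNode(), "_generate_input_data"),
    ],
)
def test_vlan_id_shape(node, method_name):
    method = getattr(node, method_name)  # resolve the serializer dynamically
    assert method()["vlan_id"] == {"value": None}
```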
In `@tests/unit/sdk/pool/test_pool_queries.py`:
- Around line 75-117: The test duplicates InfrahubNode/InfrahubNodeSync setup
and identical assertions; extract a small helper or fixture (e.g.,
make_pool_with_prefix or pool_factory) that accepts client and sync flag and
returns (ip_prefix, ip_pool) for use in both branches, then call
get_pool_allocated_resources on the returned ip_pool and assert the two resource
IDs; update both synchronous and async branches to use this factory to remove
the repeated InfrahubNode/InfrahubNodeSync construction and duplicated assertion
lists.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3b8c39e5-526c-4b0d-9ce6-c6437810de33
⛔ Files ignored due to path filters (2)
- docs/package-lock.json is excluded by !**/package-lock.json
- uv.lock is excluded by !**/*.lock
📒 Files selected for processing (147)
💤 Files with no reviewable changes (7)
- docs/sidebars-infrahubctl.ts
- tests/integration/test_infrahub_client_sync.py
- docs/sidebars-python-sdk.ts
- docs/_templates/sdk_config.j2
- docs/docs/python-sdk/reference/config.mdx
- changelog/265.fixed.md
- tests/integration/test_infrahub_client.py
- Run `uv run invoke format lint-code` before committing Python code
- Run `uv run invoke generate-sdk generate-infrahubctl` after changing CLI commands or SDK config
- Run `uv run invoke docs-generate` after creating, modifying or deleting CLI commands, SDK config, or Python docstrings
Keep docs-validate in the required workflow.
Line 59 now calls out docs-generate, but it drops the follow-up validation step the repo expects after code changes. Please mention uv run invoke docs-validate here as well so the “Always” checklist matches the documented docs workflow.
Suggested wording

```diff
-- Run `uv run invoke docs-generate` after creating, modifying or deleting CLI commands, SDK config, or Python docstrings
+- Run `uv run invoke docs-generate` after creating, modifying or deleting CLI commands, SDK config, or Python docstrings
+- Run `uv run invoke docs-validate` after making code changes to verify documentation is current
```

Based on learnings, run `uv run invoke docs-validate` after making code changes to verify documentation is current.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- Run `uv run invoke docs-generate` after creating, modifying or deleting CLI commands, SDK config, or Python docstrings
- Run `uv run invoke docs-validate` after making code changes to verify documentation is current
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@AGENTS.md` at line 59, Update the checklist entry that currently only
mentions "Run `uv run invoke docs-generate`" to include the follow-up validation
command; change the instruction so it reads that after
creating/modifying/deleting CLI commands, SDK config, or Python docstrings you
must run both `uv run invoke docs-generate` and `uv run invoke docs-validate`
(or the suggested wording "Run `uv run invoke docs-validate` after making code
changes to verify documentation is current") so the documented workflow includes
the required validation step.
### Added

- Added support for FileObject nodes with file upload and download capabilities. New methods `upload_from_path(path)` and `upload_from_bytes(content, name)` allow setting file content before saving, while `download_file(dest)` enables downloading files to memory or streaming to disk for large files. ([#ihs193](https://github.com/opsmill/infrahub-sdk-python/issues/ihs193))
Fix the broken issue link in the FileObject changelog bullet.
The current GitHub issue URL uses a non-numeric path (/issues/ihs193) and is likely a dead link.
Suggested edit

```diff
-- Added support for FileObject nodes with file upload and download capabilities. New methods `upload_from_path(path)` and `upload_from_bytes(content, name)` allow setting file content before saving, while `download_file(dest)` enables downloading files to memory or streaming to disk for large files. ([`#ihs193`](https://github.com/opsmill/infrahub-sdk-python/issues/ihs193))
+- Added support for FileObject nodes with file upload and download capabilities. New methods `upload_from_path(path)` and `upload_from_bytes(content, name)` allow setting file content before saving, while `download_file(dest)` enables downloading files to memory or streaming to disk for large files. (Issue: ihs193)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- Added support for FileObject nodes with file upload and download capabilities. New methods `upload_from_path(path)` and `upload_from_bytes(content, name)` allow setting file content before saving, while `download_file(dest)` enables downloading files to memory or streaming to disk for large files. (Issue: ihs193)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@CHANGELOG.md` at line 18, The changelog bullet referencing FileObject
(methods upload_from_path, upload_from_bytes, download_file) has a broken issue
link using "/issues/ihs193"; update the markdown to point to the correct GitHub
issue URL (replace `/issues/ihs193` with the numeric issue path, e.g.
`/issues/193`, or the proper full issue URL) so the link resolves correctly.
- Edit existing files when updating documentation
- Create new files only when no appropriate existing file exists
- Update `MEMORY.md` with approved memory changes
- Keep all changes minimal and focused — don't over-document
Add a trailing blank line after the final list.
The last list ends at EOF without a blank line after it, which breaks the repository markdown spacing rule.
As per coding guidelines: "Add blank lines before and after lists in Markdown files".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dev/commands/feedback.md` around lines 89 - 92, The final unordered list
(lines containing "Edit existing files when updating documentation", "Create new
files only when no appropriate existing file exists", "Update `MEMORY.md` with
approved memory changes", "Keep all changes minimal and focused — don't
over-document") ends at EOF without a trailing blank line; add a single newline
character after the last list item so the file ends with a blank line and ensure
Markdown lists have blank lines before and after (apply the same fix in the
feedback.md final list).
- **infrahubctl**: all `.mdx` files are discovered automatically and sorted alphabetically.
- **python-sdk**: guides, topics, and reference sections preserve a defined display order; new files are appended alphabetically at the end.

No manual sidebar update is needed when adding a new `.mdx` file. However, to control the display order of a new page, add its doc ID to the ordered list in the corresponding `sidebars-*.ts` file.
Insert a blank line after the sidebar bullet list.
The paragraph starting at Line 35 should be separated from the list by a blank line.
✏️ Suggested fix

```diff
 - **infrahubctl**: all `.mdx` files are discovered automatically and sorted alphabetically.
 - **python-sdk**: guides, topics, and reference sections preserve a defined display order; new files are appended alphabetically at the end.
+
 No manual sidebar update is needed when adding a new `.mdx` file. However, to control the display order of a new page, add its doc ID to the ordered list in the corresponding `sidebars-*.ts` file.
```

As per coding guidelines: "Add blank lines before and after lists in Markdown files".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/AGENTS.md` around lines 32 - 35, Insert a single blank line after the
sidebar bullet list (the two bullets that start with "**infrahubctl**" and
"**python-sdk**") so the following paragraph ("No manual sidebar update is
needed when adding a new `.mdx` file...") is separated from the list; ensure
there is one empty line before and after the list per the Markdown guideline.
```python
def _split_params(text: str) -> list[str]:
    """Split *text* on commas that are not inside brackets."""
    depth = 0
    tokens: list[str] = []
    current: list[str] = []
    for char in text:
        if char in {"[", "(", "{"}:
            depth += 1
            current.append(char)
        elif char in {"]", ")", "}"}:
            depth -= 1
            current.append(char)
        elif char == "," and depth == 0:
            tokens.append("".join(current))
            current = []
        else:
            current.append(char)
    if current:
        tokens.append("".join(current))
    return tokens
```
🧩 Analysis chain
🏁 Script executed:

```shell
fd -t f "mdx_collapsed_overload_section.py" -x wc -l {}
```

Repository: opsmill/infrahub-sdk-python

🏁 Script executed:

```shell
cat -n docs/docs_generation/content_gen_methods/mdx/mdx_collapsed_overload_section.py
```

🏁 Script executed:

```shell
# Check lines 45-57 (from_overloads method)
sed -n '45,57p' docs/docs_generation/content_gen_methods/mdx/mdx_collapsed_overload_section.py
```

🏁 Script executed:

```shell
# Check lines 79-91 (_extract_text method)
sed -n '79,91p' docs/docs_generation/content_gen_methods/mdx/mdx_collapsed_overload_section.py
```

🏁 Script executed:

```shell
# Check lines 124-149 (MethodSignature class and methods)
sed -n '124,149p' docs/docs_generation/content_gen_methods/mdx/mdx_collapsed_overload_section.py
```

🏁 Script executed:

```shell
cat -n tests/unit/doc_generation/mdx/test_mdx_collapsed_overload_section.py
```

🏁 Script executed:

```shell
# Also search for test cases involving param_count or signatures with /, *, or quoted defaults
rg "param_count|def.*\(.*/" tests/unit/doc_generation/mdx/test_mdx_collapsed_overload_section.py -A 3 -B 1
```

🏁 Script executed:

```shell
# Find conftest.py
fd -t f "conftest.py" tests/unit/doc_generation/mdx/
```

🏁 Script executed:

```shell
# Check conftest to understand make_method_section
cat tests/unit/doc_generation/mdx/conftest.py
```
🏁 Script executed:

```shell
# Trace through a problematic example to verify the claim
python3 << 'EOF'
# Simulate the code path for a signature with positional-only and keyword-only params
def _split_params(text: str) -> list[str]:
    """Split *text* on commas that are not inside brackets."""
    depth = 0
    tokens: list[str] = []
    current: list[str] = []
    for char in text:
        if char in {"[", "(", "{"}:
            depth += 1
            current.append(char)
        elif char in {"]", ")", "}"}:
            depth -= 1
            current.append(char)
        elif char == "," and depth == 0:
            tokens.append("".join(current))
            current = []
        else:
            current.append(char)
    if current:
        tokens.append("".join(current))
    return tokens

# Test case 1: Positional-only and keyword-only markers
params_text_1 = "self, /, *, sep=\",\""
tokens_1 = _split_params(params_text_1)
filtered_1 = [t for t in tokens_1 if t.strip() and t.strip() != "self"]
print(f"Test 1: {params_text_1}")
print(f" Tokens: {tokens_1}")
print(f" Filtered (non-self): {filtered_1}")
print(f" Count: {len(filtered_1)} (expected: 1, actual is wrong)")
print()

# Test case 2: Quoted string with comma
params_text_2 = "sep=\",\", value: int"
tokens_2 = _split_params(params_text_2)
filtered_2 = [t for t in tokens_2 if t.strip() and t.strip() != "self"]
print(f"Test 2: {params_text_2}")
print(f" Tokens: {tokens_2}")
print(f" Filtered (non-self): {filtered_2}")
print(f" Count: {len(filtered_2)} (expected: 2, but got {len(filtered_2)})")
print()

# Test case 3: Normal signature (should work fine)
params_text_3 = "self, a: int, b: str"
tokens_3 = _split_params(params_text_3)
filtered_3 = [t for t in tokens_3 if t.strip() and t.strip() != "self"]
print(f"Test 3: {params_text_3}")
print(f" Tokens: {tokens_3}")
print(f" Filtered (non-self): {filtered_3}")
print(f" Count: {len(filtered_3)} (expected: 2, correct)")
EOF
```
The parameter counter fails to handle Python syntax markers and quoted defaults.
The _split_params() function treats all top-level commas as parameter separators, but doesn't account for quoted strings or syntax markers. Valid signatures like def f(self, /, *, sep=",") will be miscounted—/ and * are treated as parameters, and the comma inside "," breaks the quoted value into extra tokens. Since from_overloads() selects the overload with the highest param_count(), an incorrect count can cause the wrong overload to be chosen as primary.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@docs/docs_generation/content_gen_methods/mdx/mdx_collapsed_overload_section.py`
around lines 94 - 113, _split_params currently splits on top-level commas but
doesn't ignore commas inside quoted strings nor treat Python positional-only '/'
and keyword-only '*' markers as non-parameters; update _split_params to track
quote state and escapes (handle both single and double quotes and backslash
escapes) so commas inside quotes are not treated as separators, continue using
depth for bracketed groups, and after splitting filter out tokens that are
exactly "/" or "*" (or strings composed only of whitespace plus "/" or "*") so
param_count()/from_overloads() won't count Python syntax markers as parameters;
reference function _split_params and usages like param_count()/from_overloads()
to verify correct selection.
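One way the fix could look is sketched below. This is a hypothetical standalone function, not the repository's code: it adds quote and escape tracking to the depth-based splitter and filters out the `/` and `*` markers after splitting.

```python
def split_params(params_text: str) -> list[str]:
    """Split a parameter list on top-level commas, ignoring commas inside
    brackets and quoted strings, and dropping Python syntax markers."""
    tokens: list[str] = []
    current: list[str] = []
    depth = 0
    quote: str | None = None  # active quote character, if any
    escaped = False
    for char in params_text:
        if quote:
            current.append(char)
            if escaped:
                escaped = False
            elif char == "\\":
                escaped = True
            elif char == quote:
                quote = None
        elif char in {"'", '"'}:
            quote = char
            current.append(char)
        elif char in {"[", "(", "{"}:
            depth += 1
            current.append(char)
        elif char in {"]", ")", "}"}:
            depth -= 1
            current.append(char)
        elif char == "," and depth == 0:
            tokens.append("".join(current))
            current = []
        else:
            current.append(char)
    if current:
        tokens.append("".join(current))
    # Filter out positional-only (/) and keyword-only (*) markers so they
    # are never counted as parameters.
    return [t for t in tokens if t.strip() not in {"", "/", "*"}]
```

With this sketch, `split_params('self, /, *, sep=",")'[:-1])` yields only `self` and the `sep` parameter, so the non-`self` count for Test 1 becomes 1 as expected.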
> Download the file content from this FileObject node.
>
> This method is only available for nodes that inherit from `CoreFileObject`.
> The node must have been saved (have an id) before calling this method.
Render these constraints as callouts.
The CoreFileObject/saved-node caveats are important usage constraints, but they're easy to miss as plain paragraphs in a reference page. If this file is generated, please fix the template so they come out as :::note / :::warning blocks.
As per coding guidelines, "Use callouts (:::warning, :::note, etc.) for important notes and information in documentation".
Also applies to: 227-230
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/docs/python-sdk/sdk_ref/infrahub_sdk/node/node.mdx` around lines 46 -
50, The plain-paragraph caveats about CoreFileObject and saved nodes are easy to
miss; update node.mdx so the constraints about "only available for nodes that
inherit from CoreFileObject" and "node must have been saved (have an id) before
calling this method" are rendered as callouts (e.g., :::warning or :::note)
rather than regular text; edit the section around the "Download the file content
from this FileObject node." description to wrap those two sentences in
appropriate callout blocks (and if this file is generated, update the template
that emits these lines so future generated pages use :::note/:::warning for
CoreFileObject and saved-node caveats) referencing CoreFileObject and FileObject
in the callout content.
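As a sketch, the generated MDX could render the two caveats like this (Docusaurus-style callout syntax, assuming the docs site supports `:::note`/`:::warning` blocks):

```mdx
Download the file content from this FileObject node.

:::note
This method is only available for nodes that inherit from `CoreFileObject`.
:::

:::warning
The node must have been saved (have an id) before calling this method.
:::
```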
```text
│ ├── ctl/          # CLI command tests
│ └── sdk/          # SDK tests
│     ├── pool/     # Resource pool allocation tests
│     ├── spec/     # Object spec tests
│     ├── checks/   # InfrahubCheck tests
│     └── ...       # Core SDK tests (client, node, schema, etc.)
```
Add required AGENTS sections in this file.
The structure update is helpful, but this AGENTS.md still lacks the required agent metadata sections (agent name, purpose, responsibilities, inputs, outputs, and error handling).
📄 Suggested minimal scaffold

```diff
 ## Commands
@@
 ## Boundaries
@@
 - Make unit tests depend on external services
+
+## Agent
+
+- **Name:** Tests Agent
+- **Purpose:** Define and enforce testing standards for this repository.
+- **Responsibilities:** Maintain unit/integration boundaries, mocking conventions, and execution guidance.
+- **Inputs:** Test files, fixtures, helper utilities, CI test configuration.
+- **Outputs:** Reliable, repeatable test suites and contributor guidance.
+- **Error handling:** Define expected failure modes, triage steps, and recovery guidance for flaky/broken tests.
```

As per coding guidelines, maintain comprehensive agent documentation including agent name, purpose, responsibilities, inputs, outputs, and error handling.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/AGENTS.md` around lines 20 - 25, AGENTS.md is missing required
per-agent metadata sections; update the file to include, for each agent listed,
a clear scaffold with headings "Agent Name", "Purpose", "Responsibilities",
"Inputs", "Outputs", and "Error Handling" (use consistent markdown H2/H3
headings for each agent), fill each section with a one-sentence summary for
Purpose and Responsibilities, list expected input fields and types under Inputs,
list produced outputs under Outputs, and describe failure modes and retry/alert
behavior under Error Handling so every agent entry (e.g., any agent references
in the existing tree like ctl/ or sdk/ agents) has these six required sections.
```python
# Assert
assert not result
```
Make the empty-content assertion strict.
Line 30 currently uses assert not result, which would also pass if apply() accidentally returns None. Assert the exact expected value instead.
Suggested fix

```diff
- assert not result
+ assert result == ""
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
# Assert
assert result == ""
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/doc_generation/content_gen_methods/test_file_printing_method.py`
around lines 29 - 30, Replace the loose falsy check on the result with a strict
equality check to the exact empty value returned by the method under test:
change "assert not result" to "assert result == ''" (or "assert result == []" /
the exact empty type that apply() should return) so the test fails if apply()
returns None or any other non-empty falsy value; locate the assertion that
follows the call to apply() in the file_printing test and update it accordingly.
```python
assert str(cast("InfrahubNodeSync", ip_address).address.value) == "192.0.2.0/32"
assert cast("InfrahubNodeSync", ip_address).description.value == "test"
```
🧩 Analysis chain

🏁 Scripts executed against the repository (opsmill/infrahub-sdk-python) to verify the finding:

```shell
# Verify type-bypass hotspots in this test module
rg -n 'type:\s*ignore|cast\("InfrahubNodeSync"' tests/unit/sdk/pool/test_allocate.py

# Get the full test function and surrounding context
sed -n '100,130p' tests/unit/sdk/pool/test_allocate.py | cat -n

# Also check around line 219 for context
sed -n '200,230p' tests/unit/sdk/pool/test_allocate.py | cat -n

# Check the beginning of the file for imports and type definitions
head -50 tests/unit/sdk/pool/test_allocate.py | cat -n

# Look for the parametrize decorator and client_type usage
rg -B 5 -A 15 '@pytest.mark.parametrize.*client_type' tests/unit/sdk/pool/test_allocate.py | head -60

# Find the InfrahubNode and InfrahubNodeSync definitions in the codebase
fd -t f '\.py$' -x grep -l 'class InfrahubNode' {} \; 2>/dev/null | head -5

# Get the complete test_allocate_next_ip_address function
sed -n '22,121p' tests/unit/sdk/pool/test_allocate.py | cat -n

# Get the complete test_allocate_next_ip_prefix function
sed -n '124,221p' tests/unit/sdk/pool/test_allocate.py | cat -n

# Check InfrahubNode and InfrahubNodeSync definitions to confirm they have the same interface
grep -A 20 "class InfrahubNode" infrahub_sdk/node/node.py | head -40

# Check InfrahubNodeSync class definition
grep -A 5 "class InfrahubNodeSync" infrahub_sdk/node/node.py

# Verify both classes have the same attribute access patterns
rg -A 10 "def address|def prefix|def description" infrahub_sdk/node/node.py | head -50

# Check the inheritance hierarchy and whether both classes have the same __getattr__ pattern
grep -B 5 -A 30 "class InfrahubNodeBase" infrahub_sdk/node/node.py | head -50

# Look for __getattr__ method which might handle dynamic attribute access
rg -A 10 "def __getattr__" infrahub_sdk/node/node.py

# Find the definition of allocate_next_ip_address and allocate_next_ip_prefix methods
rg -A 10 "def allocate_next_ip_address|def allocate_next_ip_prefix" --max-count=4 infrahub_sdk/

# Check what clients.standard and clients.sync actually return
rg -B 5 -A 15 "class.*Client.*:" infrahub_sdk/client.py | head -100

# Look at the BothClients fixture to understand the client types
rg -B 5 -A 20 "class BothClients|def.*clients.*BothClients" tests/unit/sdk/conftest.py

# Check the return type signatures for allocate_next_ip_address in both clients
rg -B 3 "def allocate_next_ip_address" infrahub_sdk/client.py | grep -A 3 "def allocate_next_ip_address"

# Check the actual implementation of allocate_next_ip_address (not just stub)
rg -A 30 "def allocate_next_ip_address" infrahub_sdk/client.py | head -80

# Find the definitions of CoreNode and CoreNodeSync
fd -t f '\.py$' | xargs grep -l "^CoreNode = \|^class CoreNode\|Protocol.*CoreNode" | head -10

# Check protocols_base.py for CoreNode and CoreNodeSync
find . -name "protocols_base.py" -o -name "*protocol*" | head -5 | xargs ls -la 2>/dev/null

# Look for CoreNode and CoreNodeSync type definitions more broadly
rg "CoreNode|CoreNodeSync" infrahub_sdk/protocols_base.py | head -20

# Check if InfrahubNode and InfrahubNodeSync both inherit from InfrahubNodeBase
rg -A 2 "class InfrahubNode\(|class InfrahubNodeSync\(" infrahub_sdk/node/node.py

# Verify both have __getattr__ method with compatible return types
rg -B 2 "def __getattr__" infrahub_sdk/node/node.py
```
Add branch-specific assertions for type correctness.
The test is parametrized to run both the standard (async) and sync client paths. The `allocate_next_ip_address` and `allocate_next_ip_prefix` calls return `CoreNode` for the standard branch and `CoreNodeSync` for the sync branch. Currently, both branches cast assertions to `InfrahubNodeSync`, which:

- Bypasses type checking for the standard branch without justification (violates the guideline)
- Requires explicit `# type: ignore[unresolved-attribute]` comments on lines 219-220

Use conditional assertions that cast to the correct type for each branch.
💡 Proposed refactor

```diff
-    assert ip_address
-    assert str(cast("InfrahubNodeSync", ip_address).address.value) == "192.0.2.0/32"
-    assert cast("InfrahubNodeSync", ip_address).description.value == "test"
+    assert ip_address
+    if client_type == "standard":
+        assert str(cast("InfrahubNode", ip_address).address.value) == "192.0.2.0/32"
+        assert cast("InfrahubNode", ip_address).description.value == "test"
+    else:
+        assert str(cast("InfrahubNodeSync", ip_address).address.value) == "192.0.2.0/32"
+        assert cast("InfrahubNodeSync", ip_address).description.value == "test"
@@
-    assert ip_prefix
-    assert str(cast("InfrahubNodeSync", ip_prefix).prefix.value) == "192.0.2.0/31"  # type: ignore[unresolved-attribute]
-    assert cast("InfrahubNodeSync", ip_prefix).description.value == "test"  # type: ignore[unresolved-attribute]
+    assert ip_prefix
+    if client_type == "standard":
+        assert str(cast("InfrahubNode", ip_prefix).prefix.value) == "192.0.2.0/31"
+        assert cast("InfrahubNode", ip_prefix).description.value == "test"
+    else:
+        assert str(cast("InfrahubNodeSync", ip_prefix).prefix.value) == "192.0.2.0/31"
+        assert cast("InfrahubNodeSync", ip_prefix).description.value == "test"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/sdk/pool/test_allocate.py` around lines 118 - 120, The assertions
always cast ip_address to InfrahubNodeSync which hides type mismatches for the
async branch and forces ignores; update the test to branch on the client type
returned by allocate_next_ip_address / allocate_next_ip_prefix: when the call
returns CoreNode assert by casting ip_address to the async type (CoreNode /
InfrahubNode as appropriate) and when it returns CoreNodeSync cast to
InfrahubNodeSync, then check .address.value and .description.value accordingly
so each branch uses the correct type (refer to allocate_next_ip_address,
allocate_next_ip_prefix, CoreNode, CoreNodeSync, InfrahubNodeSync and the
ip_address variable).
```python
async def test_node_get_file_for_upload_path(
    client_type: str, clients: BothClients, file_object_schema: NodeSchemaAPI
) -> None:
    """Test _get_file_for_upload with Path returns PreparedFile with opened file handle."""
    if client_type == "standard":
        node = InfrahubNode(client=clients.standard, schema=file_object_schema, branch="main")
    else:
        node = InfrahubNodeSync(client=clients.sync, schema=file_object_schema, branch="main")

    file_content = b"Content from path"
    with tempfile.NamedTemporaryFile(suffix=".pdf") as tmp:
        tmp.write(file_content)
        tmp.flush()
        tmp_path = Path(tmp.name)

        node.upload_from_path(path=tmp_path)

        if isinstance(node, InfrahubNode):
            prepared = await node._get_file_for_upload()
        else:
            prepared = node._get_file_for_upload_sync()

        assert prepared.file_object
        assert prepared.filename == tmp_path.name
        assert prepared.should_close  # Path files should be closed after upload
        assert prepared.file_object.read() == file_content
        prepared.file_object.close()
```
This temp-file pattern is not portable.
`_get_file_for_upload()` reopens `tmp_path` while the `NamedTemporaryFile` handle is still open. That works on POSIX but usually fails on Windows with `PermissionError`. Prefer `TemporaryDirectory()` + `Path.write_bytes()`, or close the temp file before reopening it.
💡 Portable temp-file pattern

```diff
-    with tempfile.NamedTemporaryFile(suffix=".pdf") as tmp:
-        tmp.write(file_content)
-        tmp.flush()
-        tmp_path = Path(tmp.name)
+    with tempfile.TemporaryDirectory() as tmpdir:
+        tmp_path = Path(tmpdir) / "upload.pdf"
+        tmp_path.write_bytes(file_content)

         node.upload_from_path(path=tmp_path)

         if isinstance(node, InfrahubNode):
             prepared = await node._get_file_for_upload()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/sdk/test_node.py` around lines 3153 - 3179, The test
test_node_get_file_for_upload_path is using NamedTemporaryFile and then
reopening the same path in
node._get_file_for_upload()/InfrahubNode.upload_from_path which fails on
Windows; modify the test to use a TemporaryDirectory and create the file with
Path.write_bytes() (or explicitly close the NamedTemporaryFile before calling
upload_from_path) so the file can be reopened safely; ensure assertions still
check prepared.filename, prepared.should_close, prepared.file_object contents
and that you call prepared.file_object.close() at the end.
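The portable pattern on its own, as a runnable sketch (the file name and content here are illustrative, not the repository's test values):

```python
import tempfile
from pathlib import Path

file_content = b"Content from path"

# TemporaryDirectory avoids reopening a still-open NamedTemporaryFile,
# which raises PermissionError on Windows.
with tempfile.TemporaryDirectory() as tmpdir:
    tmp_path = Path(tmpdir) / "upload.pdf"
    tmp_path.write_bytes(file_content)

    # The file can now be reopened safely on any platform.
    with tmp_path.open("rb") as fh:
        assert fh.read() == file_content

# The directory and its contents are cleaned up on exit.
assert not tmp_path.exists()
```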
Codecov Report ❌ Patch coverage:

```
@@            Coverage Diff            @@
##           stable     #870     +/-   ##
==========================================
+ Coverage   80.33%   80.62%   +0.29%
==========================================
  Files         115      119       +4
  Lines        9875    10335     +460
  Branches     1504     1550      +46
==========================================
+ Hits         7933     8333     +400
- Misses       1421     1475      +54
- Partials      521      527       +6
```

Flags with carried forward coverage won't be shown.

... and 3 files with indirect coverage changes
🧹 Nitpick comments (1)

pyproject.toml (1)

185-189: High violation count in test conftest warrants tracking. The 434 `invalid-argument-type` violations in `tests/unit/sdk/conftest.py` are substantial technical debt. While the per-file override approach with documented counts is appropriate for this release, consider creating a tracking issue to address these incrementally.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `pyproject.toml` around lines 185-189: add a tracking task for the 434 `invalid-argument-type` violations in `tests/unit/sdk/conftest.py` and reference it in the override comment: create a ticket in your issue tracker (e.g., "Lint: fix invalid-argument-type in tests/unit/sdk/conftest.py"), note the issue ID and target milestone, then update the `pyproject.toml` override comment near the `[[tool.ty.overrides]]` entry to include the issue ID, violation count, and planned remediation owner/ETA so the per-file ignore is clearly tracked and not forgotten.
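A hypothetical shape for such a tracked override is shown below. The issue ID is a placeholder, and the exact `ty` override syntax is an assumption based on the `[[tool.ty.overrides]]` entry mentioned above, not verified against the repository:

```toml
# TODO(SDK-0000): 434 invalid-argument-type violations in this conftest.
# Tracked for incremental cleanup; remove this override once resolved.
[[tool.ty.overrides]]
include = ["tests/unit/sdk/conftest.py"]

[tool.ty.overrides.rules]
invalid-argument-type = "ignore"
```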
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@pyproject.toml`:
- Around line 185-189: Add a tracking task for the 434 invalid-argument-type
violations in tests/unit/sdk/conftest.py and reference it in the override
comment: create a ticket in your issue tracker (e.g., "Lint: fix
invalid-argument-type in tests/unit/sdk/conftest.py"), note the issue ID and
target milestone, then update the pyproject.toml override comment near the
[[tool.ty.overrides]] entry to include the issue ID, violation count, and
planned remediation owner/ETA so the per-file ignore is clearly tracked and not
forgotten.
prepare release v1.19.0
Summary by CodeRabbit

- New Features
- Bug Fixes
- Documentation
- Tests