[NEM-363] Add built-context indexing flow #114
Conversation
JulienArzul left a comment
I think it'd be nice to use Pydantic to create the context type from the plugin rather than adding a new library (cattrs) that does the same thing.
Looks good otherwise 🚀
| """Summary of an indexing run over built contexts.""" | ||
|
|
||
| total: int | ||
| indexed: int |
Instead of int for each of these properties, should we return a list of DatasourceId?
We can still keep the indexed as a calculated property for quick access if we want
@dataclass
class IndexSummary:
    """Summary of an indexing run over built contexts."""

    indexed: set[DatasourceId]
    skipped: set[DatasourceId]
    failed: set[DatasourceId]

    @property
    def number_indexed(self) -> int:
        return len(self.indexed)

    ...
At the very least, I think we should be able to know which datasource failed
That is already being logged in the exception catch.
import logging
from datetime import datetime

import cattrs
We're already using Pydantic in the project, which in my understanding does the same thing. It would be better IMO not to bring in another library and end up with two different ways of creating classes from YAML.
Interesting. I wasn't aware that Pydantic could do this work for non-pydantic models, and I wanted to avoid forcing users to use Pydantic. I will test without the new library and, if it fits, I'll remove the added dependency.
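For reference, a minimal sketch of what that test could look like with Pydantic v2's `TypeAdapter`, which can structure plain (non-Pydantic) dataclasses from primitives; the class and field names below are hypothetical stand-ins, not the PR's actual models:

```python
from dataclasses import dataclass
from datetime import datetime

from pydantic import TypeAdapter


# Hypothetical stand-in for a plugin's context type: a plain dataclass,
# not a pydantic.BaseModel, so plugin authors aren't forced onto Pydantic.
@dataclass
class BuiltContext:
    datasource_id: str
    built_at: datetime


raw = {"datasource_id": "sales/orders.yaml", "built_at": "2024-01-02T03:04:05"}

# TypeAdapter structures the dict into the dataclass, parsing the ISO
# datetime string along the way, much like cattrs' converter.structure().
ctx = TypeAdapter(BuiltContext).validate_python(raw)
```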
converter = cattrs.Converter()
converter.register_structure_hook(datetime, lambda v, _: v)
build_datasource_context = converter.structure(raw_context, BuiltDatasourceContext)
❓ What is used for the `context: Any` inside this class? A dictionary?
We could potentially skip this step to avoid the awkward `build_datasource_context` object holding the wrong content for "context".
That would mean reading the attributes from BuiltDatasourceContext directly in the raw dictionary:
typed_context = converter.structure(raw_context.get("context", {}), context_type)
I'm not sure what you mean here. I'm doing this deliberately so that I can use the other items from the BuiltDatasourceContext object (datasource_type, datasource_id and the context itself).
if Path(context_file_name).suffix not in DatasourceId.ALLOWED_YAML_SUFFIXES:
if (
    Path(context_file_name).suffix not in DatasourceId.ALLOWED_YAML_SUFFIXES
    or context_file_name == "all_results.yaml"
if not chunk_embeddings:
    raise ValueError("chunk_embeddings must be a non-empty list")

# Outside the transaction due to duckdb limitations.
That's annoying...
But I guess since we're in a local context, there shouldn't be any concurrency on the DB, so we can probably live with it. The only potential problem would be if something fails within the following transaction: we would have deleted all previously existing contexts without adding new ones. That's not great, but it's something we can deal with as long as we notify the user that this datasource failed to be indexed.
try:
    logger.info(f"Indexing datasource {context.datasource_id}")

    datasource_type = read_datasource_type_from_context_file(
Nit: Since you're getting a DatasourceContext as input, you have already read the context as a string, so we don't really need to re-read it from the file system (which is what this function does).
We could either:
- replicate what that function does (find the line with the type attribute and parse only that one)
- or simply parse the full YAML string, since you'll do that afterwards anyway
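The second option is essentially a one-liner; a sketch assuming the context string is already in hand (the YAML content below is made up for illustration):

```python
import yaml

# The DatasourceContext input already holds the built context as a string,
# so there is no need to touch the file system again.
context_yaml = """\
type: table
name: orders
"""

# Parse the full YAML once and read the type attribute from the result;
# the parsed dict can then be reused for the structuring step that
# happens afterwards anyway.
parsed = yaml.safe_load(context_yaml)
datasource_type = parsed["type"]
```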
datasource_ids: list[DatasourceId] | None = None,
chunk_embedding_mode: ChunkEmbeddingMode = ChunkEmbeddingMode.EMBEDDABLE_TEXT_ONLY,
) -> IndexSummary:
    """Index built datasource contexts into duckdb.
Nit: I'm not sure we should mention DuckDB in externally facing docs?
    The summary of the index operation.
    """
engine: DatabaoContextEngine = self.get_engine_for_project()
contexts: list[DatasourceContext] = engine.get_all_contexts()
Improvement for another PR: we should probably have an API in the engine to get only the datasources from a list.
Right now, we only have:
- get one datasource context
- get all datasource contexts
We should add:
- get multiple datasource contexts
Indeed, we could add that. I considered implementing it for this PR, but I'm not sure it actually saves much IO. It might, though, especially if we have contexts that are very big.
if datasource_ids is not None:
    wanted_paths = {d.datasource_path for d in datasource_ids}
    contexts = [c for c in contexts if c.datasource_id.datasource_path in wanted_paths]
Wouldn't you be able to simply check if `c.datasource_id in datasource_ids`?
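The membership check is equivalent to the path-set filter as long as DatasourceId has value equality; a self-contained sketch with hypothetical stand-in classes:

```python
from dataclasses import dataclass


# Hypothetical stand-in: frozen=True gives value equality and hashability,
# which the direct membership check relies on.
@dataclass(frozen=True)
class DatasourceId:
    datasource_path: str


@dataclass
class DatasourceContext:
    datasource_id: DatasourceId
    context: str


contexts = [
    DatasourceContext(DatasourceId("sales/a.yaml"), "A"),
    DatasourceContext(DatasourceId("other/b.yaml"), "B"),
]
datasource_ids = [DatasourceId("sales/a.yaml")]

# Direct check suggested in the review; a set makes lookups O(1).
wanted = set(datasource_ids)
filtered = [c for c in contexts if c.datasource_id in wanted]
```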
c2 = DatasourceContext(DatasourceId.from_string_repr("other/b.yaml"), context="B")

engine = mocker.Mock()
engine.get_all_contexts.return_value = [c1, c2]
IMO it would be interesting to make this a full end-to-end test rather than testing the very small amount of code within the ProjectManager that only filters which datasource context to use.
(I think all the other tests in this class are end-to-end tests, since this is the entry point.)
There is already a helper function called given_output_dir_with_built_contexts that can create the contexts for you in the output folder, so it shouldn't be hard code-wise.
This PR introduces a full "index built contexts" workflow. After the datasource context is built, users can now run `index` to read the generated context files from `output/`, generate embeddings and persist them into DuckDB.

Changes
- New `dce index` command that indexes built context files into DuckDB.
- `DatabaoContextProjectManager.index_built_contexts(...)` and `BuildService.index_built_context(...)` read each built context file (`BuildDatasourceContext`) and deserialize the context into the plugin's expected `context_type`.
- `PersistenceService.write_chunks_and_embeddings(..., override=True)` now deletes old embeddings and chunks for a datasource before inserting new ones.
- `context_type`: with `yaml.safe_load()`, the payload is always in Python primitives, but the chunkers are intentionally written against typed context objects. Because of that, each plugin now declares `context_type` to tell the indexing pipeline what type to reconstruct before calling the chunking operation.
- `cattrs` provides structured conversion from unstructured data into Python types, which fits our needs and avoids boilerplate `deserialize` methods that may be tough to maintain as the project grows.
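As an illustration of the context_type contract described above, a minimal sketch; the plugin and chunker below are hypothetical, not code from this PR:

```python
from dataclasses import dataclass


# Hypothetical typed context a chunker is written against.
@dataclass
class TableContext:
    name: str
    description: str


class TableDatasourcePlugin:
    # Declaring context_type tells the indexing pipeline which type to
    # reconstruct from the yaml.safe_load() primitives before chunking.
    context_type = TableContext

    def chunk(self, context: TableContext) -> list[str]:
        # Chunkers receive typed context objects, not raw dicts.
        return [f"{context.name}: {context.description}"]
```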
dce indexcommandindexcommand that indexesbuiltcontext files into DuckDB.DatabaoContextProjectManager.index_built_contexts(...)BuildService.index_built_context(...)BuildDatasourceContext) and deserializes the context into the plugin's expectedcontext_typePersistenceService.write_chunks_and_embeddings(..., override=True)now deletes old embeddings and chunks for a datasource before inserting new ones.context_typeyaml.safe_load(), the payload is always in Python primitives, but the chunkers are intentionally written against typed context objects. Because of that, each plugin now declarescontext_typeto tell the indexing pipeline what type to reconstruct before calling the chunking operation.cattrsprovides structured conversion from unstructured data into python types, which fits our needs and avoids boilerplatedeserializemethods that may be tough to maintain as the project grows.