WIP: Save/load models for CAAFE -- Client #184
Conversation
Summary of Changes

Hello @brendan-priorlabs, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request enhances client-side model management by introducing a `model_id` that uniquely identifies trained models. The client can now explicitly reference and use a specific model for prediction, rather than relying on dataset UIDs alone. The `fit` and `predict` interfaces have been updated to support the new identifier, improving model lifecycle management within the client. This is a work in progress to enable save/load functionality for CAAFE models.
Code Review
This pull request introduces the concept of a `model_id` to uniquely identify fitted models, which is a significant enhancement for saving and loading models. The changes span the client, the estimator, and the service wrapper to support the new identifier. My review identified a few critical issues: a potential `UnboundLocalError` in the client, a logic error in the prediction retry mechanism, and a configuration change that points to a local server instead of production. I've also suggested improving test coverage for the new `model_id` functionality. Please address these points to ensure the stability and correctness of the implementation.
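For orientation, here is a minimal sketch of the interface change this review discusses, inferred from the diffs below. The `FittedModel` field names come from the diffs; the dataclass form and the usage comments are assumptions, not the client's actual code.

```python
# Sketch of the return-type change implied by this PR's diffs.
from dataclasses import dataclass


@dataclass
class FittedModel:
    train_set_uid: str  # existing identifier for the uploaded training set
    model_id: str       # new: unique identifier for the fitted model itself


# Before: fit() returned a bare train_set_uid string.
# After:  fit() returns a FittedModel, and predict() can reference a
#         specific model via its model_id.
```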
```python
)
elif isinstance(event, FitCompleteEvent):
    train_set_uid = event.train_set_uid
    model_id = event.model_id
```
The variable `model_id` is assigned here within an `elif` block. However, it is not initialized before the loop (lines 474-511). If a `FitCompleteEvent` is not received from the server, the subsequent reference to `model_id` on line 519 will raise an `UnboundLocalError`. To prevent this, please initialize `model_id` to `None` before the `try` block, similar to how `train_set_uid` is initialized.
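A minimal sketch of the suggested fix; the event names come from the diff above, while the surrounding loop structure is an assumption:

```python
# Initialize both identifiers up front so a missing FitCompleteEvent
# cannot trigger an UnboundLocalError later.
train_set_uid = None
model_id = None

for event in events:
    if isinstance(event, FitCompleteEvent):
        train_set_uid = event.train_set_uid
        model_id = event.model_id

if model_id is None:
    # Fail loudly instead of crashing on a later reference.
    raise RuntimeError("Server never sent a FitCompleteEvent")
```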
```yaml
# local testing
protocol: "http"
host: "localhost"
port: "8080"

## production
#protocol: "https"
#host: "api.priorlabs.ai"
#port: "443"
```
This change switches the server configuration from production to a local testing environment. While this is useful for development, it should be reverted before merging to avoid pointing the client to a local server in a production release. Please ensure the production configuration is active in the final version of this pull request.
Suggested change:

```yaml
# # local testing
# protocol: "http"
# host: "localhost"
# port: "8080"

# production
protocol: "https"
host: "api.priorlabs.ai"
port: "443"
```

```python
train_set_uid = fitted_model.train_set_uid
params["train_set_uid"] = train_set_uid
```
When retrying a prediction after a UID error, a new model is fitted, but only the `train_set_uid` is updated in the `params` for the next attempt. The new `model_id` from `fitted_model` is ignored, which could lead to a stale `model_id` in the retried prediction request. You should also update the `model_id` in the `params` dictionary.
Suggested change:

```diff
 train_set_uid = fitted_model.train_set_uid
 params["train_set_uid"] = train_set_uid
+params["model_id"] = fitted_model.model_id
```
```python
train_set_uid2 = fitted_model2.train_set_uid

# The train_set_uid should be the same due to caching
self.assertEqual(train_set_uid1, train_set_uid2)
```
The existing tests are correctly updated to handle the new return type of `fit()`. However, the new `model_id` functionality is not covered by any tests. Please consider adding new tests to verify that:

- The `model_id` is correctly passed to the `predict` endpoint.
- Caching for predictions works correctly when different `model_id`s are used for the same test data.
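A sketch of what such tests might look like, assuming a `unittest`-style suite. `ServiceClient`, `_send_predict_request`, and the `X`/`y` fixtures are hypothetical names based on this PR's diffs, not the client's actual API:

```python
import unittest
from unittest.mock import patch


class TestModelIdPlumbing(unittest.TestCase):
    def test_model_id_passed_to_predict(self):
        fitted = ServiceClient.fit(X_train, y_train)
        with patch.object(ServiceClient, "_send_predict_request") as mock_send:
            ServiceClient.predict(
                X_test,
                train_set_uid=fitted.train_set_uid,
                model_id=fitted.model_id,
            )
        # The outgoing request params should carry the model_id verbatim.
        _, kwargs = mock_send.call_args
        self.assertEqual(kwargs["params"]["model_id"], fitted.model_id)

    def test_prediction_cache_keys_include_model_id(self):
        fitted_a = ServiceClient.fit(X_train_a, y_train_a)
        fitted_b = ServiceClient.fit(X_train_b, y_train_b)
        # Same test data, different models: the cache must not return
        # fitted_a's predictions for fitted_b's model_id.
        preds_a = ServiceClient.predict(X_test, model_id=fitted_a.model_id)
        preds_b = ServiceClient.predict(X_test, model_id=fitted_b.model_id)
        self.assertFalse((preds_a == preds_b).all())
```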
safaricd left a comment:
From a high level, nothing major except for the two comments I left.
```diff
 cls.dataset_uid_cache_manager.add_dataset_uid(dataset_hash, train_set_uid)
-return train_set_uid
+return FittedModel(train_set_uid=train_set_uid, model_id=model_id)
```
Won't this break backwards compatibility for anyone using the `train_set_uid`?
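One way to preserve compatibility would be a return type that still behaves like the bare UID string for old call sites. This is a sketch under that assumption, not what the PR currently does:

```python
class FittedModel(str):
    # Hypothetical compatibility shim: instances compare and serialize
    # like the old train_set_uid string, while also exposing model_id.
    def __new__(cls, train_set_uid: str, model_id: str):
        obj = super().__new__(cls, train_set_uid)
        obj.model_id = model_id
        return obj

    @property
    def train_set_uid(self) -> str:
        return str(self)


fitted = FittedModel("uid-123", model_id="model-456")
assert fitted == "uid-123"             # old code comparing against the UID still works
assert fitted.model_id == "model-456"  # new code can reach the model_id
```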
| "Automatically re-uploading the train set is not supported for thinking mode. Please call fit()." | ||
| ) | ||
| train_set_uid = cls.fit( | ||
| fitted_model = cls.fit( |
Could you please share more context on why we re-fit again in `predict`?