Conversation
* Improve unit test suite with comprehensive function-level testing
…eature importance
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 1beef31012
```python
importance_scores.append(importance)

# Track validation and permuted log losses
val_log_loss = log_loss(y_val, lg.predict_proba(X_val), labels=lg.classes_)
```
Handle unseen validation classes in log-loss scoring
When a split omits a rare class from y_keep, the fitted classifier’s classes_ no longer covers all labels in y_val; this can happen with the unstratified train_test_split paths in _get_feature_importance. In that case, log_loss(y_val, ..., labels=lg.classes_) raises ValueError (and permutation_importance with scoring="neg_log_loss" can fail similarly), so feature_importance aborts on imbalanced/multiclass datasets instead of returning statistics.
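One way to guard against this is to score against the full label set of the original `y` rather than the fitted classifier's `classes_`, padding probability columns for classes the model never saw. A minimal sketch, reusing the snippet's names (`lg`, `X_val`, `y_val`); the synthetic data and the padding step are illustrative, not from the PR:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

# Synthetic imbalanced data (illustrative): class 2 is rare, so an
# unstratified split may omit it from the training fold.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
y = np.array([0] * 40 + [1] * 15 + [2] * 5)

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.5, random_state=0
)
lg = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score against all labels present in y, not just lg.classes_.
# Columns for unseen classes get a tiny floor probability, then rows are
# renormalized so log_loss stays finite and well-defined.
all_labels = np.unique(y)
proba = lg.predict_proba(X_val)
padded = np.full((len(X_val), len(all_labels)), 1e-15)
for j, c in enumerate(lg.classes_):
    padded[:, np.searchsorted(all_labels, c)] = proba[:, j]
padded /= padded.sum(axis=1, keepdims=True)

val_log_loss = log_loss(y_val, padded, labels=all_labels)
```

If every class happens to survive the split, `lg.classes_` equals `all_labels` and the padding is a no-op, so the same path works for both the balanced and the degenerate case.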
Statistics have been added in this version.