Conversation
Hi @npanczyk, thanks for the great work. I tested your branch on a clean venv and it worked like a charm on short (small search) and long (full search) environments. These are things to consider before merge:

1. After I installed pyMAISE on a fresh venv, I had to install three more dependencies for SHAP. These are the ones we talked about adding to pyMAISE's default packages, pinned to these specific versions: slicer-0.0.8
2. Find a way to pass `verbose=0` to all TensorFlow prediction lines associated with the SHAP methods. These are coming from
3. Let @myerspat know to add a clear acknowledgement on the GitHub page and docs page for the SHAP package, since we are using their implementation.
4. For the example CHF notebook, since we do not have detailed documentation for that capability yet, it would be helpful to discuss the classes. For example, why you use
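Regarding item 2 above, one way to guarantee `verbose=0` on every prediction call routed through the SHAP code is a small wrapper that injects the keyword unless the caller overrides it. This is a hypothetical helper, not existing pyMAISE API; the name `silent_predict` and the dummy model are illustrative only:

```python
# Hypothetical helper for item 2: force verbose=0 on predict calls
# made by the SHAP explainers, unless the caller explicitly overrides it.
def silent_predict(model, X, **kwargs):
    """Call model.predict with verbose=0 by default (Keras accepts
    verbose=0 to suppress per-batch progress bars)."""
    kwargs.setdefault("verbose", 0)
    return model.predict(X, **kwargs)


if __name__ == "__main__":
    class DummyModel:
        """Stand-in for a Keras model, recording the kwargs it receives."""
        def predict(self, X, **kwargs):
            self.last_kwargs = kwargs
            return [0.0 for _ in X]

    m = DummyModel()
    silent_predict(m, [1, 2, 3])
    print(m.last_kwargs)  # {'verbose': 0}
```

Threading this wrapper through the explainer methods would silence TensorFlow output in one place instead of editing every prediction line.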
myerspat
left a comment
@npanczyk, all code looks great. I didn't review any of the SHAP package specific stuff but I'll leave that to you if you have any issues with it. Now we just need to update the documentation. There are four places that need to be updated with the new capabilities:
- In the landing page given by `docs/source/index.rst`, please include a blurb somewhere that briefly explains the new explainability features. Copy this over to `README.md`.
- In the installation guide at `docs/source/installation.rst`, please include the new dependencies.
- In the user guide under `docs/source/user_guide.rst`, include a section that fits within the order of things and outlines the new capabilities with any examples.
- In the pyMAISE API reference at `docs/source/pymaise_api.rst`, include a section for explainability that links the methods/classes you want the user to be aware of. This will link to the methods'/classes' docstrings, so make sure those are up to snuff too.
Feel free to go through the docs and add any blurbs on explainability where you see fit. Also, please address any of the other comments I've left.
pyMAISE/explain/_explain.py

```python
"""
This functin fits a DeepLIFT explainer to evaluate SHAP coeffiicents (only for
neural networks).

:param nsamples: (int less than total samples in test set or None, default=None)
    Number of samples used to estimate the DeepLIFT importances if it is
    different than using all samples in X
"""
```
All functions from SHAP that the user will be calling directly should have their docstrings changed to the format pyMAISE uses.
The user shouldn't call any functions directly from SHAP. Let me know if I'm missing something here.
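For reference, a sketch of the requested docstring cleanup, assuming pyMAISE follows numpydoc-style conventions (the method and parameter names are taken from the quoted diff; the class wrapper and exact wording are illustrative):

```python
class ShapExplainers:
    def DeepLIFT(self, nsamples=None):
        """
        Fit a DeepLIFT explainer to evaluate SHAP coefficients (only for
        neural networks).

        Parameters
        ----------
        nsamples : int or None, default=None
            Number of samples used to estimate the DeepLIFT importances;
            must be less than the total number of samples in the test set.
            If None, all samples in the test set are used.
        """
```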
docs/source/pymaise_api.rst

```rst
.. rubric:: Classes

.. autosummary::
   :toctree: stubs
   :nosignatures:
   :template: class.rst

   pyMAISE.ShapExplainers
```
At least for me, `pyMAISE.ShapExplainers` does not work in generating documentation. You can test this locally by moving to docs/ and running `make html`, assuming you installed pyMAISE using `pip install -e ".[dev]"`. This will build the HTML files that will be generated on readthedocs. You can open them at docs/build/html/. Also, if ShapExplainers is the only class/function the user will need from the explain module, consider just importing that in pyMAISE/__init__.py:

```python
from pyMAISE.explain import ShapExplainers
```

The user should then be able to directly import ShapExplainers with `from pyMAISE import ShapExplainers` (test this to ensure this is true), and the above autosummary should work.
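The re-export suggestion can be sanity-checked without building the docs. The sketch below fakes the package layout with stand-in modules (pyMAISE itself is not imported here) to show why the flat import resolves once the name is lifted into the package namespace:

```python
import sys
import types

# Stand-in for the real class; in pyMAISE it lives in pyMAISE/explain/_explain.py.
class ShapExplainers:
    pass

# Fake the pyMAISE.explain submodule and the pyMAISE package itself.
explain = types.ModuleType("pyMAISE.explain")
explain.ShapExplainers = ShapExplainers

pkg = types.ModuleType("pyMAISE")
pkg.explain = explain
# This mirrors the suggested line in pyMAISE/__init__.py:
#     from pyMAISE.explain import ShapExplainers
pkg.ShapExplainers = explain.ShapExplainers

sys.modules["pyMAISE"] = pkg
sys.modules["pyMAISE.explain"] = explain

# The flat import now resolves, which is what autosummary needs.
from pyMAISE import ShapExplainers as SE
assert SE is ShapExplainers
```

The same flat path is what `:meth:` cross-references and the autosummary entry resolve against, so this one-line re-export fixes both.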
Pretty sure I got this, but I'm having issues getting Sphinx to generate an autosummary for me. It's no longer throwing errors about the ShapExplainers thing, though; I added it to the __init__.py as you suggested. Let me know if this still breaks for you.
I would add something at the top of this file referring to the explainability features.
I thought that explainability kind of fell under the post-processing category here. If you think I should make it separate, I can add a "Step 6" but then this changes more of the document structure.
docs/source/user_guide.rst

```rst
- :meth:`pyMAISE.ShapExplainers.DeepLIFT`: fits a DeepLIFT explainer to evaluate SHAP coefficients,
- :meth:`pyMAISE.ShapExplainers.IntGradients`: fits an Integrated Gradient explainer to evaluate SHAP coefficients,
- :meth:`pyMAISE.ShapExplainers.KernelSHAP`: fits a KernelSHAP explainer to evaluate SHAP coefficients,
- :meth:`pyMAISE.ShapExplainers.Exact_SHAP`: fits an Exact SHAP explainer to evaluate SHAP coefficients,
- :meth:`pyMAISE.ShapExplainers.postprocess_results`: generates SHAP mean values for plotting functions,
- :meth:`pyMAISE.ShapExplainers.plot`: makes a beeswarm plot and a bar plot for each SHAP method or for a particular method, and
- :meth:`pyMAISE.ShapExplainers.plot_bar_only`: makes a bar plot for each or a particular SHAP method.
```
Check these links work in the generated documentation.
The tuning and post-processing results do not look correct here. I think you ran one iteration/epoch for all models, which is not what we want to show in the final results. You should run a complete run of this benchmark with your explainability features. Feel free to use the parallelization capabilities to make the models tune faster (30 minutes tuning and 75 total, before explain, on AIMS01).

Also, nowhere in the landing pages of pyMAISE do you point users to this benchmark as an example of how to use the new explainability features. I would add something to docs/source/index.rst and README.md.

Finally, in your documentation under Explainability Metrics, any specific references to function arguments, functions, and methods should be surrounded by `` so they render as code rather than standard text. Refer to the MIT reactor benchmark for examples.
```python
        return ax


class ShapExplainers:
```
Given that you're pointing users to this class, you should add a docstring discussing it and its parameters (refer to Tuner or PostProcessor for examples). You may also consider writing examples for using this (refer to Tuner for an example). Also, if you look at the second-to-last code block in your CHF benchmark, you'll see there are warnings. I know you are having issues getting these to go away, but for Jupyter notebooks we can use the pyMAISE.utils._try_clear() function to clear unwanted output, which is what I'm using pretty much everywhere in the tuner. This should clean some things up, as we shouldn't see any warnings with verbose=0.
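pyMAISE.utils._try_clear() is internal to pyMAISE, so its exact body isn't shown here; a minimal stand-in for the pattern it implements (clear the notebook cell's output when IPython is available, otherwise do nothing) might look like:

```python
def try_clear(wait=True):
    """Clear the current cell's output when running under IPython/Jupyter;
    silently do nothing in a plain Python process."""
    try:
        from IPython.display import clear_output
    except ImportError:
        return  # not in an IPython environment, nothing to clear
    clear_output(wait=wait)
```

Calling a helper like this right after a noisy fit/predict step keeps benchmark notebooks free of progress bars and stray warnings without touching library internals.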
Co-authored-by: Patrick Myers <90068356+myerspat@users.noreply.github.com>
PR Description
This PR adds an explain class and the SHAP package to pyMAISE.
Closes: #67
What changes were made?
Will ultimately include:
Reviewers: @myerspat @mradaideh
Important
Please do not review SHAP package files! (I.e., explain/shap)