120 changes: 120 additions & 0 deletions misc/volume_shrink_example.ipynb
@@ -0,0 +1,120 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "75e486d9",
"metadata": {},
"outputs": [],
"source": [
"from cil.io import NEXUSDataReader\n",
"from cil.utilities.display import show2D\n",
"from cil.utilities.shrink_volume import VolumeShrinker\n",
"\n",
"import logging"
]
},
{
"cell_type": "markdown",
"id": "947637c4",
"metadata": {},
"source": [
"Load an example dataset\n",
"\n",
"Data for this notebook are available on the tomography shared drive."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d8dd3579",
"metadata": {},
"outputs": [],
"source": [
"file_name = '/mnt/ALC_ptychography_alignment/simulations/cylinder_tilt_30.nxs'\n",
"data = NEXUSDataReader(file_name=file_name).read()\n",
"show2D(data, slice_list=('angle',0))"
]
},
{
"cell_type": "markdown",
"id": "e156bcab",
"metadata": {},
"source": [
"First we use the default `method='manual'` and choose the reconstruction volume size with the `limits` dictionary. We can check how this looks on the reconstructed dataset by passing `preview=True`. Any dimension not included in the `limits` dictionary is kept at its full size. Try manually adjusting the limits to see if you can select all the sample material in the reconstruction."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "59b9f17b",
"metadata": {},
"outputs": [],
"source": [
"vs = VolumeShrinker(data, recon_backend='astra')\n",
"ig_reduced = vs.run(limits={'horizontal_x':(10, 150), 'vertical':(5, 50)}, preview=True)"
]
},
{
"cell_type": "markdown",
"id": "be39a064",
"metadata": {},
"source": [
"Next we try using an intensity `threshold` to choose the limits automatically. We can turn on CIL debug logging to see how the threshold is applied to the data. Try different `threshold` values to see how they affect the limits on the reconstruction. Here we use the kwarg `buffer` to add a margin of 10 pixels around the limits that are found."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e34e9143",
"metadata": {},
"outputs": [],
"source": [
"# Enable debug logging to see how the threshold is applied\n",
"logging.getLogger().setLevel(logging.DEBUG)\n",
"vs = VolumeShrinker(data, recon_backend='astra')\n",
"ig_reduced = vs.run(method='threshold', preview=True, threshold=0.9, buffer=10)"
]
},
{
"cell_type": "markdown",
"id": "84496735",
"metadata": {},
"source": [
"Finally we can use `method='otsu'` to choose the threshold automatically using Otsu's method. The kwarg `otsu_classes` specifies how many different material types there are in the data, and `min_component_size` sets the minimum size of a connected component to be included in the limits, which can be useful for excluding small noisy hotspots from the limits. Try adjusting these parameters to see how they affect the limits on the reconstruction."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "eb564369",
"metadata": {},
"outputs": [],
"source": [
"vs = VolumeShrinker(data, recon_backend='astra')\n",
"ig_reduced = vs.run(method='otsu', preview=True, otsu_classes=2, buffer=10, min_component_size=10)"
]
}
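,
{
"cell_type": "markdown",
"id": "c7a91f02",
"metadata": {},
"source": [
"As an optional follow-up (a sketch, not part of the original workflow): the reduced `ImageGeometry` returned by `VolumeShrinker.run` can be passed to a reconstructor so that only the shrunk volume is computed. The choice of `FBP` below assumes parallel-beam data; substitute the reconstructor appropriate for your acquisition geometry."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d8b22e41",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: reconstruct into the reduced volume found above.\n",
"# Assumes parallel-beam data; the data may also need data.reorder('astra') first.\n",
"from cil.recon import FBP\n",
"\n",
"recon = FBP(data, image_geometry=ig_reduced, backend='astra').run()\n",
"show2D(recon)"
]
}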
],
"metadata": {
"kernelspec": {
"display_name": "cil_test_demos",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}