@@ -131,6 +131,15 @@
"validate (model_loaded, criterion)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_loaded"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -150,25 +159,32 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"metadata": {},
"outputs": [],
"source": [
"from tqdm.notebook import tqdm"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"imgs = []\n",
"intermediate_activations = []\n",
"total_correct = 0\n",
"\n",
"model_loaded.eval()\n",
"for i, (images, labels) in enumerate(data_test_loader2):\n",
" imgs.append(((np.reshape(np.squeeze(images.detach().numpy()), (1,-1)) )))\n",
" x = images\n",
" x = model_loaded.convnet(x) \n",
" \n",
" intermediate_activations.append(((np.reshape(np.squeeze(x.detach().numpy()), (1,-1)) )))\n",
" \n",
" np.save(\"images\", np.array(imgs).squeeze(1))\n",
" np.save(\"intermediate_act\", np.array(intermediate_activations).squeeze(1))"
"with torch.no_grad():\n",
" for i, (images, labels) in tqdm(enumerate(data_test_loader2), total=len(data_test_loader2)):\n",
" imgs.append(images.squeeze(0).view(1,-1))\n",
"\n",
" x = model_loaded.convnet(images)\n",
" intermediate_activations.append(x.view(1,120))\n",
"\n",
" np.save(\"images\", torch.cat(imgs).numpy())\n",
" np.save(\"intermediate_act\", torch.cat(intermediate_activations).numpy())"
]
},
{
@@ -217,6 +233,17 @@
"#### Mutual Information function"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "We will calculate the self-information and mutual information of the first 1,000 test images. You can remove the slicing to run on the full 10,000 test images, but it will take longer.\n",
"\n",
"$$\n",
"I(X;Y) = \\sum\\limits_{(x,y) \\in \\mathcal{X}\\times\\mathcal{Y}} P_{XY}(x,y) \\log \\frac{P_{XY}(x,y)}{P_X(x)P_Y(y)}\n",
"$$"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -240,7 +267,7 @@
"outputs": [],
"source": [
"ds = np.array([1024, 1024])\n",
"y = np.concatenate((images_raw, images_raw),axis=1)\n",
"y = np.concatenate((images_raw[:1000], images_raw[:1000]),axis=1)\n",
"print(y.shape)\n",
"i = co.estimation(y, ds) \n",
"print(i)"
@@ -260,7 +287,7 @@
"outputs": [],
"source": [
"ds = np.array([1024, 120])\n",
"y = np.concatenate((images_raw, intermediate_activation),axis=1)\n",
"y = np.concatenate((images_raw[:1000], intermediate_activation[:1000]),axis=1)\n",
"print(y.shape)\n",
"i = co.estimation(y, ds) \n",
"print(i)"
@@ -270,7 +297,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see that the raw image contained 730 bits of self-information, whereas the intermediate activations only contain 303 bits of information that was originally in the raw image (the 730 bits). This shows that the first layers of the neural network, alone, have degraded more than half of the original information in the raw input. "
    "We can see that the raw images contain 455 bits of self-information, whereas the intermediate activations retain only 163 of those 455 bits. This shows that the first layers of the neural network alone have discarded more than half of the information in the raw input."
]
}
],
@@ -279,20 +306,8 @@
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
}
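
The notebook's MI values come from `co.estimation`, an estimator object created earlier in the notebook (outside this diff), presumably from the ITE (Information Theoretical Estimators) toolbox, which handles continuous-valued vectors. As a self-contained sketch of the formula in the new markdown cell — a plug-in estimator for *discrete* variables, not the continuous estimator the notebook actually uses — something like the following works (`mutual_information` is a hypothetical helper, not part of the notebook):

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits for two discrete 1-D arrays.

    Implements I(X;Y) = sum_{x,y} P(x,y) log2( P(x,y) / (P(x) P(y)) )
    using empirical frequencies as the probability estimates.
    """
    x = np.asarray(x)
    y = np.asarray(y)
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    # Joint histogram, normalized to a joint probability table P(x,y)
    pxy = np.zeros((len(xs), len(ys)))
    np.add.at(pxy, (xi, yi), 1)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal P(x), column vector
    py = pxy.sum(axis=0, keepdims=True)  # marginal P(y), row vector
    nz = pxy > 0  # zero-probability cells contribute 0 to the sum
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# When X fully determines Y, I(X;Y) equals H(X); here H(X) = 1 bit.
x = np.array([0, 0, 1, 1])
y = np.array([0, 0, 1, 1])
print(mutual_information(x, y))  # → 1.0
```

Note that `I(X;X)` is the entropy `H(X)`, which is why concatenating `images_raw` with itself in the first estimation cell yields the self-information of the raw images.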