You can run iCatcher+ with the command:
`icatcher --help`

which will list all available options. Below we list some common options to help you get more familiar with iCatcher+. The pipeline is highly configurable; please see [the website](https://icatcherplus.github.io/) for more explanation of the flags.
### Annotating a Video
To produce annotations for a video file (if a folder is provided, all videos will be used for prediction):
`icatcher /path/to/my/video.mp4`

>**NOTE:** For any videos you wish to visualize with the [Web App](#web-app), you must use the `--output_annotation` and the `--output_format ui` flags:

To show the predictions online in a separate window, add the option:

`--show_output`

The app should open automatically at [http://localhost:5001](http://localhost:5001). For more details, see [Web App](#web-app).

Originally, a face classifier was used to distinguish between adult and infant faces; however, this can result in too much loss of data. It can be turned on by using:
`icatcher /path/to/my/video.mp4 --use_fc_model`

You can also add parameters to crop the video a given percent before passing to iCatcher:
`--crop_mode m` where `m` is any of [top, left, right], specifying which side of the video to crop from (if not provided, the default is none; if `crop_percent` is provided but not `crop_mode`, the default is top)
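
The crop flags' semantics can be sketched as below. This is an illustrative guess at the behavior, not iCatcher's actual implementation: the function name is made up, and the exact rounding and slicing are assumptions.

```python
import numpy as np

def crop_frame(frame, crop_percent=0, crop_mode="top"):
    """Remove `crop_percent` percent of a frame from the given side.

    Illustrative sketch of the `--crop_percent`/`--crop_mode` semantics;
    the real pipeline's behavior may differ in details.
    """
    h, w = frame.shape[:2]
    if crop_percent <= 0:
        return frame
    if crop_mode == "top":
        return frame[int(h * crop_percent / 100):, :]
    if crop_mode == "left":
        return frame[:, int(w * crop_percent / 100):]
    if crop_mode == "right":
        return frame[:, :w - int(w * crop_percent / 100)]
    raise ValueError(f"unknown crop_mode: {crop_mode!r}")

# a dummy 100x200 RGB frame, cropped 10% from the top (keeps the bottom 90 rows)
frame = np.zeros((100, 200, 3), dtype=np.uint8)
cropped = crop_frame(frame, crop_percent=10, crop_mode="top")
```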

Currently we support 3 output formats, though further formats can be added upon request:
- **raw_output:** a file where each row contains the frame number, the class prediction and the confidence of that prediction, separated by commas
- **compressed:** an npz file containing two numpy arrays, one encoding the predicted class (n x 1 int32) and the other the confidence (n x 1 float32), where n is the number of frames. This file can be loaded into memory using the `numpy.load` function. For the mapping between class number and name, see test.py (the `predict_from_video` function).

- **ui:** needed for viewing results in the web app; produces a directory of the following structure:

```
├── decorated_frames   # dir containing annotated jpg files for each frame in the video
├── video.mp4          # the original video
├── labels.txt         # file containing annotations in the `raw_output` format described above
```
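
For illustration, the first two formats can be read back into Python along these lines. This is a hedged sketch: the sample rows, class names, and npz key names are invented here; the real mapping between class numbers and names lives in test.py's `predict_from_video`.

```python
import io
import numpy as np

# --- raw_output: one "frame, class, confidence" row per frame ---
# (the class names below are made up for illustration)
raw_text = "0, left, 0.97\n1, right, 0.88\n2, away, 0.54\n"
rows = []
for line in raw_text.splitlines():
    frame_no, cls, conf = (field.strip() for field in line.split(","))
    rows.append((int(frame_no), cls, float(conf)))

# --- compressed: two aligned numpy arrays in an npz archive ---
classes = np.array([0, 1, 2], dtype=np.int32)            # predicted class per frame
confidences = np.array([0.97, 0.88, 0.54], dtype=np.float32)
buf = io.BytesIO()                                        # stands in for a .npz file on disk
np.savez(buf, classes=classes, confidences=confidences)   # key names are our choice here
buf.seek(0)
data = np.load(buf)                                       # behaves like a dict of arrays
```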
# Web App
The iCatcher+ app is a tool that allows users to interact with output from the iCatcher+ ML pipeline in the browser. The tool is designed to operate entirely locally and will not upload any input files to remote servers.
### Using the UI
When you open the iCatcher+ UI, you will be met with a pop-up inviting you to upload a directory. Please note, this requires you to upload *the whole output directory*, which should include a `labels.txt` file and a sub-directory named `decorated_frames` containing all of the frames of the video as image files.
Once you've uploaded a directory, you should see a pop-up asking whether you are sure you want to upload all files. Rest assured, this will not upload the files to any remote servers. This is only giving the local browser permission to access those files. The files will stay local to whatever computer is running the browser.
At this point, you should see the video on the screen (you may need to give it a few seconds to load). Now you can start to review the annotations. Below the video you'll see heatmaps giving you a visual overview of the labels for each frame, as well as the confidence level for each frame.