Hi,
Thanks for your great work. I have a question about running the code:
Under every quantization setting, the Parameter Size I obtain on the EM dataset is 18.48 MB. According to your paper, the quantized Parameter Size should be smaller than the full-precision one. Could you please help me analyze the possible reasons? Thanks a lot!
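For context, here is my back-of-envelope expectation (a sketch only; the ~4.62M parameter count is inferred from 18.48 MB at 32-bit floats and is an assumption on my part):

```python
def model_size_mb(num_params, bits_per_param):
    """Storage needed if each parameter is packed at bits_per_param bits."""
    return num_params * bits_per_param / 8 / 1e6

# ~4.62M parameters, assuming the 18.48 MB checkpoint stores float32 values
num_params = int(18.48e6 / 4)

print(model_size_mb(num_params, 32))  # full precision -> 18.48
print(model_size_mb(num_params, 4))   # 4-bit fixed-point weights -> 2.31
```

So if the reported size never drops below 18.48 MB, my guess is the quantized weights are still being stored (or counted) as float32 rather than packed at their fixed-point bit width.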
For example, when I run UNET_EM_DATASET_a8_0_w0_8, config.yaml is set to:
```yaml
UNET:
  dataset: 'emdataset'
  lr: 0.001
  num_epochs: 200
  model_type: "unet"
  init_type: glorot
  quantization: "FIXED" # "INT", "BNN", "Normal", "FIXED"
  activation_f_width: 0
  activation_i_width: 6
  weight_f_width: 4
  weight_i_width: 0
  gpu_core_num: 1
  activation: "tanh"
  trained_model: "./em_tanh_a6_0_w0_4/em_tanh_a6_0_w0_4.pkl"
  experiment_name: "em_tanh_a6_0_w0_4"
  log_output_dir: "./results/"
  operation_mode: "normal" # normal, visualize, retrain, inference
```
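For reference, my understanding of the FIXED setting with these widths (a generic fixed-point sketch; the sign-bit handling and rounding are my assumptions and may differ from the repo's actual quantizer):

```python
def to_fixed(x, i_width, f_width):
    """Round x onto a signed fixed-point grid with i_width integer bits and
    f_width fractional bits (sign handled separately -- an assumption)."""
    scale = 2 ** f_width
    lo = -(2 ** i_width)            # most negative representable value
    hi = 2 ** i_width - 1 / scale   # most positive representable value
    return max(lo, min(hi, round(x * scale) / scale))

# weight_i_width=0, weight_f_width=4: weights land on multiples of 2**-4
print(to_fixed(0.3, 0, 4))  # -> 0.3125
```

With this reading, the weights only need 4 (+1 sign) bits each, which is why I expected the Parameter Size to shrink accordingly.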
Then I run:

```shell
python em_unet.py -f config.yaml -t UNET
```
The environment I use:
- Linux
- Python 3.5.2
- NVIDIA 2080 Ti GPU