I want to evaluate the model (I downloaded the weights from Google Drive), but unfortunately the following error occurs during testing:
RuntimeError: Error(s) in loading state_dict for Train_MIME:
Missing key(s) in state_dict: "encoder.enc.multi_head_attention.query_linear.weight",
"encoder.enc.multi_head_attention.key_linear.weight", "encoder.enc.multi_head_attention.value_linear.weight",
"encoder.enc.multi_head_attention.output_linear.weight", "encoder.enc.positionwise_feed_forward.layers.0.conv.weight",
"encoder.enc.positionwise_feed_forward.layers.0.conv.bias", "encoder.enc.positionwise_feed_forward.layers.1.conv.weight",
"encoder.enc.positionwise_feed_forward.layers.1.conv.bias", "encoder.enc.layer_norm_mha.gamma",
"encoder.enc.layer_norm_mha.beta", "encoder.enc.layer_norm_ffn.gamma", "encoder.enc.layer_norm_ffn.beta",
"emotion_input_encoder_1.enc.enc.multi_head_attention.query_linear.weight",
"emotion_input_encoder_1.enc.enc.multi_head_attention.key_linear.weight",
"emotion_input_encoder_1.enc.enc.multi_head_attention.value_linear.weight",
"emotion_input_encoder_1.enc.enc.multi_head_attention.output_linear.weight",
"emotion_input_encoder_1.enc.enc.positionwise_feed_forward.layers.0.conv.weight",
"emotion_input_encoder_1.enc.enc.positionwise_feed_forward.layers.0.conv.bias",
"emotion_input_encoder_1.enc.enc.positionwise_feed_forward.layers.1.conv.weight",
"emotion_input_encoder_1.enc.enc.positionwise_feed_forward.layers.1.conv.bias",
"emotion_input_encoder_1.enc.enc.layer_norm_mha.gamma", "emotion_input_encoder_1.enc.enc.layer_norm_mha.beta",
"emotion_input_encoder_1.enc.enc.layer_norm_ffn.gamma", "emotion_input_encoder_1.enc.enc.layer_norm_ffn.beta",
"emotion_input_encoder_2.enc.enc.multi_head_attention.query_linear.weight",
"emotion_input_encoder_2.enc.enc.multi_head_attention.key_linear.weight",
"emotion_input_encoder_2.enc.enc.multi_head_attention.value_linear.weight",
"emotion_input_encoder_2.enc.enc.multi_head_attention.output_linear.weight",
"emotion_input_encoder_2.enc.enc.positionwise_feed_forward.layers.0.conv.weight",
"emotion_input_encoder_2.enc.enc.positionwise_feed_forward.layers.0.conv.bias",
"emotion_input_encoder_2.enc.enc.positionwise_feed_forward.layers.1.conv.weight",
"emotion_input_encoder_2.enc.enc.positionwise_feed_forward.layers.1.conv.bias",
"emotion_input_encoder_2.enc.enc.layer_norm_mha.gamma", "emotion_input_encoder_2.enc.enc.layer_norm_mha.beta",
"emotion_input_encoder_2.enc.enc.layer_norm_ffn.gamma", "emotion_input_encoder_2.enc.enc.layer_norm_ffn.beta".
Unexpected key(s) in state_dict: "encoder.enc.0.multi_head_attention.query_linear.weight",
"encoder.enc.0.multi_head_attention.key_linear.weight", "encoder.enc.0.multi_head_attention.value_linear.weight",
"encoder.enc.0.multi_head_attention.output_linear.weight", "encoder.enc.0.positionwise_feed_forward.layers.0.conv.weight",
"encoder.enc.0.positionwise_feed_forward.layers.0.conv.bias",
"encoder.enc.0.positionwise_feed_forward.layers.1.conv.weight",
"encoder.enc.0.positionwise_feed_forward.layers.1.conv.bias", "encoder.enc.0.layer_norm_mha.gamma",
"encoder.enc.0.layer_norm_mha.beta", "encoder.enc.0.layer_norm_ffn.gamma", "encoder.enc.0.layer_norm_ffn.beta",
"emotion_input_encoder_1.enc.enc.0.multi_head_attention.query_linear.weight",
"emotion_input_encoder_1.enc.enc.0.multi_head_attention.key_linear.weight",
"emotion_input_encoder_1.enc.enc.0.multi_head_attention.value_linear.weight",
"emotion_input_encoder_1.enc.enc.0.multi_head_attention.output_linear.weight",
"emotion_input_encoder_1.enc.enc.0.positionwise_feed_forward.layers.0.conv.weight",
"emotion_input_encoder_1.enc.enc.0.positionwise_feed_forward.layers.0.conv.bias",
"emotion_input_encoder_1.enc.enc.0.positionwise_feed_forward.layers.1.conv.weight",
"emotion_input_encoder_1.enc.enc.0.positionwise_feed_forward.layers.1.conv.bias",
"emotion_input_encoder_1.enc.enc.0.layer_norm_mha.gamma", "emotion_input_encoder_1.enc.enc.0.layer_norm_mha.beta",
"emotion_input_encoder_1.enc.enc.0.layer_norm_ffn.gamma", "emotion_input_encoder_1.enc.enc.0.layer_norm_ffn.beta",
"emotion_input_encoder_2.enc.enc.0.multi_head_attention.query_linear.weight",
"emotion_input_encoder_2.enc.enc.0.multi_head_attention.key_linear.weight",
"emotion_input_encoder_2.enc.enc.0.multi_head_attention.value_linear.weight",
"emotion_input_encoder_2.enc.enc.0.multi_head_attention.output_linear.weight",
"emotion_input_encoder_2.enc.enc.0.positionwise_feed_forward.layers.0.conv.weight",
"emotion_input_encoder_2.enc.enc.0.positionwise_feed_forward.layers.0.conv.bias",
"emotion_input_encoder_2.enc.enc.0.positionwise_feed_forward.layers.1.conv.weight",
"emotion_input_encoder_2.enc.enc.0.positionwise_feed_forward.layers.1.conv.bias",
"emotion_input_encoder_2.enc.enc.0.layer_norm_mha.gamma", "emotion_input_encoder_2.enc.enc.0.layer_norm_mha.beta",
"emotion_input_encoder_2.enc.enc.0.layer_norm_ffn.gamma", "emotion_input_encoder_2.enc.enc.0.layer_norm_ffn.beta".
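Comparing the two lists, every missing key has a matching unexpected key that only differs by an extra `.0.` index after `enc` (e.g. `encoder.enc.0.multi_head_attention...` in the checkpoint vs. `encoder.enc.multi_head_attention...` in the model). That usually means the weights were saved from a version of the code where the encoder block was wrapped in an `nn.ModuleList`/`nn.Sequential` with a single element, while the current `Train_MIME` defines the block directly. If that is the case here (I can't verify it without the saving code), one workaround is to rename the checkpoint keys before calling `load_state_dict`. A minimal sketch of the renaming, using two keys from the error above:

```python
import re

# Keys as saved in the downloaded checkpoint (extra ".0" ModuleList
# index) vs. the names the current Train_MIME model expects.
saved_keys = [
    "encoder.enc.0.multi_head_attention.query_linear.weight",
    "emotion_input_encoder_1.enc.enc.0.layer_norm_mha.gamma",
]

def remap_key(key: str) -> str:
    # Strip the single-element ModuleList index after "enc".
    return re.sub(r"\.enc\.0\.", ".enc.", key)

remapped = [remap_key(k) for k in saved_keys]
# remapped[0] == "encoder.enc.multi_head_attention.query_linear.weight"
# remapped[1] == "emotion_input_encoder_1.enc.enc.layer_norm_mha.gamma"
```

Applied to the full checkpoint it would look like `model.load_state_dict({remap_key(k): v for k, v in state_dict.items()})` (the variable names `model` and `state_dict` are placeholders for your loading code). If the checkpoint actually contains more than one layer per `enc` (keys with `.1.`, `.2.`, ...), this renaming is not enough and the model definition itself needs to match the version that saved the weights.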