I used the ProGen2 model to embed my sequences, then fed the embeddings into a CNN for a downstream classification task. But when I watched the training, the metrics were always:
train_loss=0.000, train_acc=1.000
What causes this? My vocab comes from progen2 / tokenizer.json.
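For context on the symptom: a loss that is exactly 0.000 with 100% accuracy from the very start usually means the model is not learning anything real — a common cause is label leakage, where the target is (directly or indirectly) present in the input features. The following is a minimal, self-contained sketch (synthetic data, not the actual ProGen2/CNN pipeline) that reproduces the symptom by deliberately appending the label to the input:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in for sequence embeddings: 64 samples, 16 features.
labels = torch.randint(0, 2, (64,))
feats = torch.randn(64, 16)

# Simulated leakage: the label itself is concatenated as a 17th feature.
leaky_inputs = torch.cat([feats, labels.float().unsqueeze(1)], dim=1)

model = nn.Linear(17, 2)  # even a linear probe suffices when labels leak
opt = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(leaky_inputs), labels)
    loss.backward()
    opt.step()

acc = (model(leaky_inputs).argmax(dim=1) == labels).float().mean().item()
print(f"train_loss={loss.item():.3f} train_acc={acc:.3f}")
```

A quick check in the real pipeline would be to retrain with shuffled labels: if accuracy stays at 1.000, the labels are leaking into the inputs (or the evaluation) rather than being learned from the sequence embeddings.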