Hi,
Thank you for sharing your code. I'm trying to run it in a low-shot scenario using the GPT-J model, where the number of training samples is small. However, I'm having trouble getting the model to generate answers in the expected format (question###answer@@@).
For instance, in the car dataset (data_id=40975), when the number of training samples is 64, the prompt and generated response appear as follows:
'When buying = high, maint = high, doors = 4, persons = 2, lug boot = med, safety = high, How would you rate the decision to buy this car?###@ unacceptable@@### acceptable good good@@@@ very'
As you can see, the output doesn't match the expected format.
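As a stopgap, I've been post-processing the raw generation to recover just the first label between the `###` and `@@@` markers. This is only a heuristic sketch of my own (the regex and the `extract_answer` name are mine, not from the repo), assuming the markers are literally `###` and runs of `@`:

```python
import re

def extract_answer(generated: str) -> str:
    """Pull the first answer label out of a noisy generation.

    Looks for '###', an optional stray '@', then letters/spaces,
    terminated by a run of '@' (2+) or another '###'. Returns ''
    if no such span is found.
    """
    m = re.search(r"###\s*@?\s*([A-Za-z ]+?)\s*(?:@{2,}|###)", generated)
    return m.group(1) if m else ""

raw = ("When buying = high, maint = high, doors = 4, persons = 2, "
       "lug boot = med, safety = high, How would you rate the decision "
       "to buy this car?###@ unacceptable@@### acceptable good good@@@@ very")

print(extract_answer(raw))  # -> unacceptable
```

This at least salvages a usable label from outputs like the one above, but it obviously doesn't fix the underlying generation behavior.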
Have you encountered this issue as well? If so, could you share any insights or potential solutions to resolve it?
Thank you for your assistance!