This error gave me a headache, so I am writing down the solution here for future reference.
Step 1: Make sure you are using the latest version of Transformers.

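To check Step 1, you can read the installed version with the standard library. As far as I can tell, eval_strategy was introduced (and evaluation_strategy deprecated) around Transformers v4.41, but treat that exact cutoff as my assumption and verify it against the release notes:

```python
from importlib.metadata import PackageNotFoundError, version

def supports_eval_strategy(ver: str) -> bool:
    # Assumption: eval_strategy landed in transformers v4.41.0;
    # double-check the release notes for your installed version.
    major, minor = (int(part) for part in ver.split(".")[:2])
    return (major, minor) >= (4, 41)

try:
    installed = version("transformers")
    name = "eval_strategy" if supports_eval_strategy(installed) else "evaluation_strategy"
    print(installed, "->", name)
except PackageNotFoundError:
    print("transformers is not installed")
```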
Step 2: Use eval_strategy instead of evaluation_strategy:
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=5,
    eval_strategy="epoch",        # new name; replaces evaluation_strategy
    eval_steps=100,               # only used when eval_strategy="steps"
    logging_strategy="epoch",
    logging_steps=50,             # only used when logging_strategy="steps"
    save_strategy="epoch",
    learning_rate=2e-5,
    weight_decay=0.01,
    warmup_ratio=0.1,
    gradient_accumulation_steps=2,
    max_grad_norm=1.0,
    fp16=True,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
)
If the above still doesn't work (for example, because you are pinned to an older Transformers version), use the old argument name instead:
# Legacy-style TrainingArguments (for older Transformers versions)
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    weight_decay=0.01,
    warmup_ratio=0.1,
    gradient_accumulation_steps=2,
    max_grad_norm=1.0,
    fp16=True,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    evaluation_strategy="epoch",  # old name, renamed to eval_strategy in newer releases
    eval_steps=100,               # only used when evaluation_strategy="steps"
    logging_strategy="epoch",     # this argument was never renamed
    logging_steps=50,
    save_strategy="epoch",        # this argument was never renamed
)
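One way to avoid guessing which spelling your installed version wants is to inspect which keyword the TrainingArguments constructor actually accepts and build the kwargs from that. This is a minimal sketch; pick_eval_arg is my own helper name, not a Transformers API:

```python
import inspect

def pick_eval_arg(cls) -> str:
    # Return whichever evaluation-strategy keyword this class accepts:
    # "eval_strategy" on newer Transformers, "evaluation_strategy" on older ones.
    params = inspect.signature(cls.__init__).parameters
    return "eval_strategy" if "eval_strategy" in params else "evaluation_strategy"

# Usage (with transformers installed):
# from transformers import TrainingArguments
# training_args = TrainingArguments(
#     output_dir="./results",
#     **{pick_eval_arg(TrainingArguments): "epoch"},
# )
```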