sequential-finetuning-paths
Base/chat checkpoints, QLoRA adapter chaining, and sequential versus direct objectives.
This benchmark tests how training order and checkpoint choice (base versus chat) affect downstream language model behavior.
The comparison covers direct fine-tuning and adapter chaining with QLoRA.
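A second-stage run in the chained setup can be sketched as an Axolotl config that loads the adapter produced by the first stage. This is an illustrative fragment, not the benchmark's actual config: the base model, dataset path, adapter directory, and hyperparameter values are all assumptions, and field names should be checked against the Axolotl version in use.

```yaml
# Stage 2 of an adapter chain: resume from the stage-1 QLoRA adapter.
# All paths and values below are hypothetical placeholders.
base_model: meta-llama/Llama-2-7b-hf      # assumed base checkpoint
load_in_4bit: true                        # QLoRA: 4-bit quantized base weights
adapter: qlora
lora_model_dir: ./outputs/stage1-adapter  # assumed path to the prior adapter
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
datasets:
  - path: ./data/stage2.jsonl             # assumed second-stage dataset
    type: alpaca
sequence_len: 2048
micro_batch_size: 2
num_epochs: 1
output_dir: ./outputs/stage2-adapter
```

The direct-fine-tuning baseline would use the same schema without `lora_model_dir`, training a fresh adapter on the combined objective instead.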
Metrics
BLEU, ROUGE, BERTScore
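In practice these metrics would come from standard libraries (e.g. `sacrebleu`, `rouge-score`, `bert-score`), but a minimal from-scratch ROUGE-L shows what the overlap scores measure: F1 over the longest common subsequence of candidate and reference tokens. The function names here are illustrative, not part of any library.

```python
def lcs_length(a: list[str], b: list[str]) -> int:
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ta in enumerate(a, 1):
        for j, tb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ta == tb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate: str, reference: str) -> float:
    """ROUGE-L F1: harmonic mean of LCS-based precision and recall."""
    cand, ref = candidate.split(), reference.split()
    lcs = lcs_length(cand, ref)
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)
    recall = lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```

BERTScore replaces exact token matching with cosine similarity between contextual embeddings, so it rewards paraphrases that n-gram metrics like BLEU and ROUGE penalize.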
Stack
PyTorch, QLoRA, Axolotl
Images
Images can be added later under public/projects/.