Rodrigo Pozo Lagos

Data Scientist · M.Sc. Candidate · Santiago, Chile

sequential-finetuning-paths

Base/chat checkpoints, QLoRA adapter chaining, and sequential versus direct objectives.

Repository

This benchmark tests how training order and starting checkpoint (base vs. chat) affect downstream language model behavior.

The comparison covers direct fine-tuning on the target objective and sequential QLoRA adapter chaining through an intermediate objective.
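The distinction between the two paths can be sketched with a toy, hypothetical example (not part of the benchmark): a one-parameter least-squares model trained either directly on the final objective, or sequentially through an intermediate one. In this convex toy both paths reach the same optimum; the benchmark asks whether order still washes out in the non-convex LLM setting. The `train` function and the scalar "tasks" are purely illustrative.

```python
def train(w, target, steps=200, lr=0.1):
    """Minimize (w - target)^2 from initial weight w via gradient descent."""
    for _ in range(steps):
        grad = 2.0 * (w - target)
        w -= lr * grad
    return w

w0 = 0.0                      # "base checkpoint"
task_a, task_b = 3.0, 5.0     # intermediate and final objectives

# Direct path: base -> task B
w_direct = train(w0, task_b)

# Sequential path: base -> task A -> task B (adapter-chaining analogue)
w_seq = train(train(w0, task_a), task_b)

print(w_direct, w_seq)  # both converge near 5.0 in this convex case
```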

Metrics

BLEU, ROUGE, BERTScore
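As a rough illustration of what these overlap metrics measure, here is a simplified unigram ROUGE-1 F1 in plain Python. This is a sketch only; actual evaluations would use established libraries (e.g. sacrebleu, rouge-score, bert-score), and the example strings are made up.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Simplified ROUGE-1: F1 over whitespace-tokenized unigram overlap."""
    cand, ref = candidate.split(), reference.split()
    # Clipped overlap: each reference token counts at most as often as it appears
    overlap = sum((Counter(cand) & Counter(ref)).values())
    if not overlap:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the cat sat on the mat", "the cat is on the mat")
print(score)
```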

Stack

PyTorch · QLoRA · Axolotl
