This repository demonstrates the process of fine-tuning **LLAMA 3.2 1B** on a Py…
- Training Framework: Training uses `SFTTrainer` from the trl (Transformer Reinforcement Learning) library.
- Parameter Optimization: QLoRA (Quantized Low-Rank Adaptation) is applied to reduce the number of trainable parameters and improve efficiency during fine-tuning.
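A setup combining these two pieces might look like the sketch below. It is illustrative only: the exact `SFTTrainer` arguments vary across trl versions, and the model name, LoRA hyperparameters, `target_modules`, output directory, and `train_dataset` are assumptions, not taken from this repository.

```python
# Sketch of a QLoRA fine-tuning setup, assuming the transformers,
# peft, trl, and bitsandbytes libraries are installed.
# All names and hyperparameters below are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# 4-bit quantization of the base model: the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B",  # assumed base checkpoint
    quantization_config=bnb_config,
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")

# Low-rank adapters: only these small matrices are trained.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,
    args=SFTConfig(output_dir="llama32-qlora"),  # hypothetical output dir
    train_dataset=train_dataset,  # hypothetical HF dataset with a "text" field
    peft_config=peft_config,
)
trainer.train()
```

Because only the low-rank adapter weights receive gradients while the frozen base model sits in 4-bit precision, this kind of setup can fit a fine-tuning run in far less GPU memory than full-parameter training.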
### Evaluation
Run `eval_ollama_8B.ipynb` to score the model's performance.
### Interactive API with Chainlit

Interact with the fine-tuned model through a web API by running `chainlit run app.py`. This launches an interactive chat interface for the model.
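The repository's `app.py` is not shown here, but a minimal Chainlit app wiring a chat handler to a text-generation model might look like the following sketch; the model path `llama32-qlora` and the use of a transformers pipeline are assumptions, not details from this project.

```python
# Hypothetical minimal app.py for `chainlit run app.py`, assuming
# Chainlit and transformers are installed; "llama32-qlora" is an
# assumed local path to the fine-tuned model.
import chainlit as cl
from transformers import pipeline

generator = pipeline("text-generation", model="llama32-qlora")


@cl.on_message
async def on_message(message: cl.Message):
    # Generate a completion for the user's prompt and send it back.
    out = generator(message.content, max_new_tokens=256)
    await cl.Message(content=out[0]["generated_text"]).send()
```

Chainlit turns the decorated handler into a browser chat UI, so no extra HTTP routing code is needed beyond the `chainlit run` command above.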