Fine-tuning
The process of further training a pre-trained LLM on custom data to specialize it for a specific task or style.
What is fine-tuning?
Fine-tuning is the process of taking an existing pre-trained LLM (e.g., GPT-4o mini) and continuing its training on your own data so that it specializes in a specific task, style, or domain.
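Before any training happens, the custom data has to be put into the shape the fine-tuning service expects. A minimal sketch, assuming OpenAI-style chat fine-tuning: training examples are conversations serialized as JSONL (one JSON object per line). The sentiment-classification examples and the `train.jsonl` filename are invented for illustration.

```python
import json

# Hypothetical examples teaching the model a fixed output format:
# every assistant reply is a JSON object with "sentiment" and "confidence".
examples = [
    {"messages": [
        {"role": "user", "content": "I love this product!"},
        {"role": "assistant", "content": '{"sentiment": "positive", "confidence": 0.97}'},
    ]},
    {"messages": [
        {"role": "user", "content": "Shipping took forever."},
        {"role": "assistant", "content": '{"sentiment": "negative", "confidence": 0.88}'},
    ]},
]

# Write one JSON object per line -- the JSONL layout chat fine-tuning
# endpoints such as OpenAI's expect for training files.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")

# Sanity check: each line parses and contains a "messages" list.
with open("train.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]
assert all(isinstance(r["messages"], list) for r in records)
print(len(records))  # → 2
```

The resulting file is then uploaded to the provider and referenced when creating the fine-tuning job; the trained model is afterwards addressed by its own model ID instead of the base model's.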
Fine-tuning vs. RAG
| Fine-tuning | RAG |
|---|---|
| Knowledge is "baked into" the model | Knowledge is loaded at runtime from a database |
| Suitable for style and behavior | Suitable for current and specific facts |
| Updates require retraining | Updates mean just updating the database |
| Higher upfront cost | Lower upfront cost |
When to use fine-tuning
- Adapting the tone and style of communication (e.g., formal vs. informal)
- Training on a specific output format
- Specializing in a narrow domain (e.g., legal or medical language)
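For the tone-adaptation case above, a training example might pair an informal user message with the formally phrased reply the model should imitate. The fragment below uses the JSONL chat format described earlier; the conversation content is invented for illustration:

```jsonl
{"messages": [{"role": "system", "content": "Reply in formal business English."}, {"role": "user", "content": "hey, order still not here, whats up"}, {"role": "assistant", "content": "Thank you for your patience. I apologize for the delay; your order is scheduled for delivery tomorrow."}]}
```

In practice the training file contains many such examples covering the range of inputs the model should handle, all demonstrating the same target tone.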