Fine-tuning Llama 2 on Your Own Dataset | Train an LLM for Your Use Case with QLoRA on a Single GPU
Full text tutorial (requires MLExpert Pro):
Learn how to fine-tune the Llama 2 7B base model on a custom dataset using a single T4 GPU. We'll use the QLoRA technique to train an LLM to summarize conversations between support agents and customers on Twitter.
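QLoRA keeps the quantized base weights frozen and trains only small low-rank adapter matrices on top of them. As a rough illustration of the LoRA half of that idea (dimensions and names here are illustrative, not from the video; real QLoRA additionally stores the frozen weights in 4-bit NF4 via bitsandbytes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained weight matrix (4-bit quantized in real QLoRA;
# full precision here to keep the sketch self-contained).
d_in, d_out, r = 16, 16, 4  # r is the adapter rank, r << d
W = rng.normal(size=(d_out, d_in))

# Trainable low-rank adapters. B starts at zero, so at step 0 the
# adapted layer behaves exactly like the frozen pretrained layer.
A = rng.normal(scale=0.01, size=(r, d_in))
B = np.zeros((d_out, r))
alpha = 8  # LoRA scaling; the effective update is (alpha / r) * B @ A

def lora_forward(x):
    # Frozen base path plus low-rank correction; during fine-tuning
    # only A and B receive gradients, W never changes.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
assert np.allclose(lora_forward(x), W @ x)  # identical before training
```

The memory win is that the trainable parameters shrink from d_out * d_in to r * (d_in + d_out) per adapted layer, which is what makes a 7B model trainable on a single T4.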
Discord:
Prepare for the Machine Learning interview:
Subscribe:
GitHub repository:
Join this channel to get access to the perks and support my work:
00:00 - When to Fine-tune an LLM?
00:30 - Fine-tune vs Retrieval Augmented Generation (Custom Knowledge Base)
03:38 - Text Summarization (our example)
04:14 - Text Tutorial on MLExpert
04:47 - Dataset Selection
05:36 - Choose a Model