Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
This video explores T5, a large-scale study of transfer learning for NLP. The paper takes apart many factors of the pre-train-then-fine-tune pipeline: pre-training objectives (Auto-Regressive Language Modeling vs. BERT-style Masked Language Modeling vs. XLNet-style shuffling), dataset composition and size, and how best to use more computation. Thanks for watching, and please check out Machine Learning Street Talk, where Tim Scarfe, Yannic Kilcher and I discuss this paper!
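To make the objectives being compared concrete, here is a minimal Python sketch of the span-corruption objective the T5 paper ultimately favors. The span_corrupt helper and its span-sampling scheme are illustrative assumptions, not the paper's exact implementation; the <extra_id_N> sentinel tokens, though, follow the convention T5 actually uses.

```python
import random

def span_corrupt(tokens, corruption_rate=0.15, span_len=3, seed=0):
    """Hypothetical sketch of T5-style span corruption.

    Replaces random spans with sentinel tokens and returns
    (inputs, targets): the corrupted sequence, and the dropped-out
    spans each prefixed by their sentinel. The paper samples span
    lengths; a fixed span_len is used here for brevity.
    """
    rng = random.Random(seed)
    n_to_mask = max(1, round(len(tokens) * corruption_rate))

    # Pick random spans until roughly corruption_rate of tokens are masked.
    masked = set()
    while len(masked) < n_to_mask:
        start = rng.randrange(len(tokens))
        for i in range(start, min(start + span_len, len(tokens))):
            masked.add(i)

    inputs, targets = [], []
    sentinel = 0
    i = 0
    while i < len(tokens):
        if i in masked:
            # Replace the whole contiguous masked span with one sentinel
            # in the input; the target spells out the span after it.
            tok = f"<extra_id_{sentinel}>"
            inputs.append(tok)
            targets.append(tok)
            while i in masked:
                targets.append(tokens[i])
                i += 1
            sentinel += 1
        else:
            inputs.append(tokens[i])
            i += 1
    return inputs, targets

# Example sentence from the paper's own illustration of the objective.
text = "Thank you for inviting me to your party last week".split()
inp, tgt = span_corrupt(text)
print("input: ", " ".join(inp))
print("target:", " ".join(tgt))
```

Because both the corrupted input and the target spans are plain text, this one format lets the same encoder-decoder model and loss cover every task in the study, which is the paper's core "text-to-text" idea.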