REALM: Retrieval-Augmented Language Model Pre-training - Super Knowledge Retriever

This paper introduces a way to use unsupervised learning to train a neural knowledge retriever that achieves state-of-the-art results on three popular Open-QA benchmarks, outperforming all previous methods by a significant margin (4-16% absolute accuracy).

0:00 - Intro
2:30 - Language models vs. world knowledge
5:16 - REALM: Retrieve-then-predict
7:13 - Masked language modelling for knowledge retrieval
10:54 - Neural knowledge retriever
16:11 - Knowledge-augmented encoder
17:00 - Pre-training: MLM
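The retrieve-then-predict idea can be sketched numerically: the retriever scores documents by an inner product of query and document embeddings, turns those scores into a retrieval distribution p(z|x), and the answer probability marginalizes over retrieved documents, p(y|x) = Σ_z p(y|x,z) p(z|x). Below is a minimal NumPy sketch with toy random embeddings and made-up per-document answer likelihoods; the real model computes these with BERT-style encoders, so everything here is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the learned embeddings (assumption: REALM uses
# separate BERT-style encoders for the query x and each document z).
query_emb = rng.normal(size=8)        # Embed_input(x)
doc_embs = rng.normal(size=(5, 8))    # Embed_doc(z) for 5 candidate docs

# Retrieval distribution p(z|x): softmax over inner-product scores.
scores = doc_embs @ query_emb
p_z_given_x = np.exp(scores - scores.max())
p_z_given_x /= p_z_given_x.sum()

# Hypothetical answer likelihoods p(y|x,z) from the knowledge-augmented
# encoder (made-up numbers for illustration).
p_y_given_xz = np.array([0.9, 0.1, 0.05, 0.3, 0.2])

# Marginalize over the retrieved document: p(y|x) = sum_z p(y|x,z) p(z|x).
p_y_given_x = float((p_y_given_xz * p_z_given_x).sum())
print(p_y_given_x)
```

Because the retrieval distribution is a softmax over inner products, gradients from the masked-language-modelling loss flow back into both encoders, which is how pre-training teaches the retriever what to fetch.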