This is a 1-hour general-audience introduction to Large Language Models: the core technical component behind systems like ChatGPT, Claude, and Bard. It covers what they are, where they are headed, comparisons and analogies to present-day operating systems, and some of the security-related challenges of this new computing paradigm.
As of November 2023 (this field moves fast!).
Context: This video is based on the slides of a talk I gave recently at the AI Security Summit. The talk was not recorded, but a lot of people came to me afterwards and told me they liked it. Seeing as I had already put in one long weekend of work to make the slides, I decided to just tune them a bit, record this round 2 of the talk and upload it here on YouTube. Pardon the random background, that’s my hotel room during the Thanksgiving break.
- Slides as PDF: (42MB)
- Slides as Keynote: (140MB)
A few things I wish I said (I’ll add items here as they come up):
- The dreams and hallucinations do not get fixed with finetuning. Finetuning just “directs” the dreams into “helpful assistant dreams”. Always be careful with what LLMs tell you, especially if they are telling you something from memory alone. That said, similar to a human, if the LLM used browsing or retrieval and the answer made its way into the “working memory” of its context window, you can trust the LLM a bit more to process that information into the final answer. But TLDR: right now, do not trust what LLMs say or do. For example, in the tools section, I’d always recommend double-checking the math/code the LLM did.
- How does the LLM use a tool like the browser? It emits special words, e.g. |BROWSER|. When the code “above” that is running inference on the LLM detects these words, it captures the output that follows, sends it off to a tool, comes back with the result and continues the generation (a minimal sketch of this control flow is included below, after this list). How does the LLM know to emit these special words? Finetuning datasets teach it how and when to browse, by example. And/or the instructions for tool use can also be automatically placed in the context window (in the “system message”).
- You might also enjoy my 2015 blog post “The Unreasonable Effectiveness of Recurrent Neural Networks”. The way we obtain base models today is pretty much identical at a high level, except the RNN is swapped for a Transformer (a toy sketch of this next-token-prediction recipe is also included below).
- What is in the run.c file? A bit more full-featured, 1000-line version is here:
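To make the browser bullet above concrete, here is a minimal, hypothetical sketch of the inference-side control flow in Python. The special words (|BROWSER|, |RESULT|, |END|) and the two stand-in functions are illustrative assumptions, not any particular system’s API: the loop watches the generated text for the special word, runs the tool, pastes the result back into the context, and lets the model continue.

```python
# Hypothetical sketch of tool use at inference time. The special words and
# the two stand-in functions below are illustrative, not a real API.

def run_browser(query: str) -> str:
    """Stand-in for a real search/browse tool."""
    return f"(search results for: {query})"

def llm_step(context: str) -> str:
    """Stand-in for one round of LLM generation on top of the context."""
    if "|RESULT|" not in context:
        # a finetuned model has learned to request the tool with a special word
        return "Let me look that up. |BROWSER| population of Prague |END|"
    return "Based on the retrieved result, Prague has about 1.3 million inhabitants."

def generate_with_tools(prompt: str) -> str:
    context = prompt
    while True:
        chunk = llm_step(context)                     # model emits text, possibly a tool request
        context += chunk
        if "|BROWSER|" in chunk:                      # inference code spots the special word
            query = chunk.split("|BROWSER|", 1)[1].split("|END|", 1)[0].strip()
            result = run_browser(query)               # run the tool outside the model...
            context += f"\n|RESULT| {result} |END|\n" # ...paste its output into the context
            continue                                  # ...and let the model keep generating
        return context

print(generate_with_tools("User: How many people live in Prague?\nAssistant: "))
```

In a real system the stand-ins would be an actual model forward pass and an actual search API, but the loop structure is the same: the model only ever reads and writes text, and the surrounding code does the tool calls.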
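And to illustrate the RNN bullet above, here is a toy sketch (assuming PyTorch) of the high-level base-model recipe: next-token prediction on a token stream with a causal Transformer. The model, sizes, and random “corpus” are placeholders; only the shape of the objective is the point.

```python
# Toy sketch of the base-model recipe: next-token prediction with a causal
# Transformer. All sizes and data here are placeholders for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, d_model, context_len = 1000, 128, 64   # toy sizes

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, idx):
        T = idx.size(1)
        # causal mask: each position may only attend to earlier positions
        mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        x = self.blocks(self.embed(idx), mask=mask)
        return self.head(x)                          # logits over the next token

model = TinyLM()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

# pretend batch from the training token stream
tokens = torch.randint(0, vocab_size, (8, context_len + 1))
logits = model(tokens[:, :-1])                       # predict token t+1 from tokens up to t
loss = F.cross_entropy(logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))
loss.backward()
opt.step()
print(f"next-token loss: {loss.item():.3f}")
```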
Chapters:
Part 1: LLMs
00:00:00 Intro: Large Language Model (LLM) talk
00:00:20 LLM Inference
00:04:17 LLM Training
00:08:58 LLM dreams
00:11:22 How do they work?
00:14:14 Finetuning into an Assistant
00:17:52 Summary so far
00:21:05 Appendix: Comparisons, Labeling docs, RLHF, Synthetic data, Leaderboard
Part 2: Future of LLMs
00:25:43 LLM Scaling Laws
00:27:43 Tool Use (Browser, Calculator, Interpreter, DALL-E)
00:33:32 Multimodality (Vision, Audio)
00:35:00 Thinking, System 1/2
00:38:02 Self-improvement, LLM AlphaGo
00:40:45 LLM Customization, GPTs store
00:42:15 LLM OS
Part 3: LLM Security
00:45:43 LLM Security Intro
00:46:14 Jailbreaks
00:51:30 Prompt Injection
00:56:23 Data poisoning
00:58:37 LLM Security conclusions
End
00:59:23 Outro