Eliezer Yudkowsky is a researcher, writer, and advocate for artificial intelligence safety, best known for his writings on rationality, cognitive biases, and the development of superintelligence. He has written extensively on AI safety and has long advocated for building AI systems that are aligned with human values and interests. Yudkowsky is a co-founder of the Machine Intelligence Research Institute (MIRI), a non-profit organization dedicated to researching the development of safe and beneficial artificial intelligence, and a co-founder of the Center for Applied Rationality (CFAR), a non-profit focused on teaching rational thinking skills. He is also a frequent author at LessWrong and the author of Rationality: From AI to Zombies.
In this episode, we discuss Eliezer’s concerns with artificial intelligence and his recent conclusion that it will inevitably lead to our demise. He’s a brilliant mind, an interesting person, and he genuinely believes everything he says. I wanted to have a conversation with him to hear where he’s coming from and how he got there, to understand AI better, and hopefully to help bridge the divide between the people who think we’re headed off a cliff and the people who think it’s not a big deal.
(0:00) Intro
(1:18) Welcome Eliezer
(6:27) How would you define artificial intelligence?
(15:50) What is the purpose of a fire alarm?
(19:29) Eliezer’s background
(29:28) The Singularity Institute for Artificial Intelligence
(33:38) Maybe AI doesn’t end up automatically doing the right thing
(45:42) AI Safety Conference
(51:15) Disaster Monkeys
(1:02:15) Fast takeoff
(1:10:29) Loss function
(1:15:48) Protein folding
(1:24:55) The deadly stuff
(1:46:41) Why is it inevitable?
(1:54:27) Can’t we let tech develop AI and then fix the problems?
(2:02:56) What were the big jumps between GPT-3 and GPT-4?
(2:07:15) “The trajectory of AI is inevitable”
(2:28:05) Elon Musk and OpenAI
(2:37:41) Sam Altman Interview
(2:50:38) The most optimistic path to us surviving
(3:04:46) Why would anything superintelligent pursue ending humanity?
(3:14:08) What role do VCs play in this?
Show Notes:
Eliezer Yudkowsky – AI Alignment: Why It’s Hard, and Where to Start
Mixed and edited: Justin Hrabovsky
Produced: Rashad Assir
Executive Producer: Josh Machiz
Music: Griff Lawson
Follow on Socials
🎬 Clips on TikTok - @theloganbartlettshow
About the Show
Logan Bartlett is a Software Investor at Redpoint Ventures - a Silicon Valley-based VC with $6B AUM and investments in Snowflake, DraftKings, Twilio, and Netflix. In each episode, Logan goes behind the scenes with world-class entrepreneurs and investors. If you’re interested in the real inside baseball of tech, entrepreneurship, and start-up investing, tune in every Friday for new episodes.