OpenAI’s Whisper Learned 680,000 Hours Of Speech!

❤️ Check out Anyscale and try it for free here:

📝 The paper "Robust Speech Recognition via Large-Scale Weak Supervision" is available here:

Try it out (note: the Scholarly Stampede appears to be in order - we barely published the video and there are already longer wait times):

Source code:

Lex transcriptions by Andrej Karpathy:

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.

If you wish to appear here or pick up other perks, click here:

Chapters:
0:00 Teaser
0:25 More features
0:40 Speed talking transcription
1:00 Accent transcription
1:28 96 more languages!
1:50 What about other methods?
2:05 680,000 hours!
2:14 Is this any good?
3:20 As good as humans?
4:32 The ultimate test!
5:15 What is all this good for?
6:13 2 more good news
6:40 So simple!
6:55 More training data

Thumbnail background design: Felícia Zsolnai-Fehér

Károly Zsolnai-Fehér's links:
Instagram:
Twitter:
Web: ~zsolnai/

#openai