#94 – Ilya Sutskever: Deep Learning
from Lex Fridman Podcast
by Lex Fridman
Published: Fri May 08 2020
Show Notes
Ilya Sutskever is a co-founder of OpenAI, one of the most cited computer scientists in history with over 165,000 citations, and, to me, one of the most brilliant and insightful minds ever in the field of deep learning. There are very few people in this world who I would rather talk to and brainstorm with about deep learning, intelligence, and life than Ilya, on and off the mic.
Support this podcast by signing up with these sponsors:
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Ilya’s Twitter: https://twitter.com/ilyasut
Ilya’s Website: https://www.cs.toronto.edu/~ilya/
This conversation is part of the Artificial Intelligence podcast. If you would like more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here’s the outline of the episode. On some podcast players, you should be able to click the timestamp to jump to that time.
OUTLINE:
– Introduction
– AlexNet paper and the ImageNet moment
– Cost functions
– Recurrent neural networks
– Key ideas that led to success of deep learning
– What’s harder to solve: language or vision?
– We’re massively underestimating deep learning
– Deep double descent
– Backpropagation
– Can neural networks be made to reason?
– Long-term memory
– Language models
– GPT-2
– Active learning
– Staged release of AI systems
– How to build AGI?
– Question to AGI
– Meaning of life