-
#88 – Eric Weinstein: Geometric Unity and the Call for New Ideas, Leaders & Institutions
From 🇺🇸 Lex Fridman Podcast, published at 2020-04-13 20:50
Eric Weinstein is a mathematician with a bold and piercing intelligence, unafraid to explore the biggest questions in the universe and shine a light on the darkest corners of our society. He is the host of The Portal podcast, as part of which he recently released his 2013 Oxford lecture on his theory of Geometric Unity, the centerpiece of his lifelong effort to arrive at a theory of everything that unifies the fundamental laws of physics.
Support this podcast by signing up with these sponsors:
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Eric’s Twitter: https://twitter.com/EricRWeinstein
Eric’s YouTube: https://www.youtube.com/ericweinsteinphd
The Portal podcast: https://podcasts.apple.com/us/podcast/the-portal/id1469999563
Graph, Wall, Tome wiki: https://theportal.wiki/wiki/Graph,_Wall,_Tome
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 – Introduction
02:08 – World War II and the Coronavirus Pandemic
14:03 – New leaders
31:18 – Hope for our time
34:23 – WHO
44:19 – Geometric unity
1:38:55 – We need to get off this planet
1:40:47 – Elon Musk
1:46:58 – Take Back MIT
2:15:31 – The time at Harvard
2:37:01 – The Portal
2:42:58 – Legacy
-
#87 – Richard Dawkins: Evolution, Intelligence, Simulation, and Memes
From 🇺🇸 Lex Fridman Podcast, published at 2020-04-09 22:35
Richard Dawkins is an evolutionary biologist and author of The Selfish Gene, The Blind Watchmaker, The God Delusion, The Magic of Reality, The Greatest Show on Earth, and his latest, Outgrowing God. He is the originator and popularizer of a lot of fascinating ideas in evolutionary biology and science in general, including, funny enough, the word meme, introduced in his 1976 book The Selfish Gene, an exceptionally powerful idea in the context of a gene-centered view of evolution. He is outspoken, bold, and often fearless in his defense of science and reason, and in this way is one of the most influential thinkers of our time.
Support this podcast by signing up with these sponsors:
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Richard’s Website: https://www.richarddawkins.net/
Richard’s Twitter: https://twitter.com/RichardDawkins
Richard’s Books:
– Selfish Gene: https://amzn.to/34tpHQy
– The Magic of Reality: https://amzn.to/3c0aqZQ
– The Blind Watchmaker: https://amzn.to/2RqV5tH
– The God Delusion: https://amzn.to/2JPrxlc
– Outgrowing God: https://amzn.to/3ebFess
– The Greatest Show on Earth: https://amzn.to/2Rp2j1h
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 – Introduction
02:31 – Intelligent life in the universe
05:03 – Engineering intelligence (are there shortcuts?)
07:06 – Is the evolutionary process efficient?
10:39 – Human brain and AGI
15:31 – Memes
26:37 – Does society need religion?
33:10 – Conspiracy theories
39:10 – Where do morals come from in humans?
46:10 – AI began with the ancient wish to forge the gods
49:18 – Simulation
56:58 – Books that influenced you
1:02:53 – Meaning of life
-
#86 – David Silver: AlphaGo, AlphaZero, and Deep Reinforcement Learning
From 🇺🇸 Lex Fridman Podcast, published at 2020-04-03 21:05
David Silver leads the reinforcement learning research group at DeepMind. He was the lead researcher on AlphaGo and AlphaZero, co-lead on AlphaStar and MuZero, and has contributed a lot of important work in reinforcement learning.
Support this podcast by signing up with these sponsors:
– MasterClass: https://masterclass.com/lex
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Reinforcement learning (book): https://amzn.to/2Jwp5zG
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 – Introduction
04:09 – First program
11:11 – AlphaGo
21:42 – Rules of the game of Go
25:37 – Reinforcement learning: personal journey
30:15 – What is reinforcement learning?
43:51 – AlphaGo (continued)
53:40 – Supervised learning and self play in AlphaGo
1:06:12 – Lee Sedol’s retirement from Go
1:08:57 – Garry Kasparov
1:14:10 – AlphaZero and self play
1:31:29 – Creativity in AlphaZero
1:35:21 – AlphaZero applications
1:37:59 – Reward functions
1:40:51 – Meaning of life
-
#85 – Roger Penrose: Physics of Consciousness and the Infinite Universe
From 🇺🇸 Lex Fridman Podcast, published at 2020-03-31 14:33
Roger Penrose is a physicist, mathematician, and philosopher at the University of Oxford. He has made fundamental contributions in many disciplines, from the mathematical physics of general relativity and cosmology to the limitations of a computational view of consciousness.
Support this podcast by signing up with these sponsors:
– ExpressVPN at https://www.expressvpn.com/lexpod
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Cycles of Time (book): https://amzn.to/39tXtpp
The Emperor’s New Mind (book): https://amzn.to/2yfeVkD
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 – Introduction
03:51 – 2001: A Space Odyssey
09:43 – Consciousness and computation
23:45 – What does it mean to “understand”
31:37 – What’s missing in quantum mechanics?
40:09 – Whatever consciousness is, it’s not a computation
44:13 – Source of consciousness in the human brain
1:02:57 – Infinite cycles of big bangs
1:22:05 – Most beautiful idea in mathematics
-
#83 – Nick Bostrom: Simulation and Superintelligence
From 🇺🇸 Lex Fridman Podcast, published at 2020-03-26 00:19
Nick Bostrom is a philosopher at the University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risk, the simulation hypothesis, human enhancement ethics, and the risks of superintelligent AI systems, including in his book Superintelligence. I can see talking to Nick multiple times on this podcast, many hours each time, but we have to start somewhere.
Support this podcast by signing up with these sponsors:
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Nick’s website: https://nickbostrom.com/
Future of Humanity Institute:
– https://twitter.com/fhioxford
– https://www.fhi.ox.ac.uk/
Books:
– Superintelligence: https://amzn.to/2JckX83
Wikipedia:
– https://en.wikipedia.org/wiki/Simulation_hypothesis
– https://en.wikipedia.org/wiki/Principle_of_indifference
– https://en.wikipedia.org/wiki/Doomsday_argument
– https://en.wikipedia.org/wiki/Global_catastrophic_risk
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 – Introduction
02:48 – Simulation hypothesis and simulation argument
12:17 – Technologically mature civilizations
15:30 – Case 1: if something kills all possible civilizations
19:08 – Case 2: if we lose interest in creating simulations
22:03 – Consciousness
26:27 – Immersive worlds
28:50 – Experience machine
41:10 – Intelligence and consciousness
48:58 – Weighing probabilities of the simulation argument
1:01:43 – Elaborating on Joe Rogan conversation
1:05:53 – Doomsday argument and anthropic reasoning
1:23:02 – Elon Musk
1:25:26 – What’s outside the simulation?
1:29:52 – Superintelligence
1:47:27 – AGI utopia
1:52:41 – Meaning of life
-
#82 – Simon Sinek: Leadership, Hard Work, Optimism and the Infinite Game
From 🇺🇸 Lex Fridman Podcast, published at 2020-03-21 18:25
Simon Sinek is the author of several books, including Start With Why, Leaders Eat Last, and his latest, The Infinite Game. He is one of the best communicators of what it takes to be a good leader, to inspire, and to build businesses that solve big, difficult challenges.
Support this podcast by signing up with these sponsors:
– MasterClass: https://masterclass.com/lex
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Simon’s Twitter: https://twitter.com/simonsinek
Simon’s Facebook: https://www.facebook.com/simonsinek
Simon’s Website: https://simonsinek.com/
Books:
– Infinite Game: https://amzn.to/2WxBH1i
– Leaders Eat Last: https://amzn.to/2xf70Ds
– Start with Why: https://amzn.to/2WxBH1i
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
0:00 – Introduction
3:50 – Meaning of life as an infinite game
10:13 – Optimism
13:30 – Mortality
17:52 – Hard work
26:38 – Elon Musk, Steve Jobs, and leadership
-
#81 – Anca Dragan: Human-Robot Interaction and Reward Engineering
From 🇺🇸 Lex Fridman Podcast, published at 2020-03-19 17:33
Anca Dragan is a professor at Berkeley, working on human-robot interaction — algorithms that look beyond the robot’s function in isolation, and generate robot behavior that accounts for interaction and coordination with human beings.
Support this podcast by supporting the sponsors and using the special code:
– Download Cash App on the App Store or Google Play & use code “LexPodcast”
EPISODE LINKS:
Anca’s Twitter: https://twitter.com/ancadianadragan
Anca’s Website: https://people.eecs.berkeley.edu/~anca/
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 – Introduction
02:26 – Interest in robotics
05:32 – Computer science
07:32 – Favorite robot
13:25 – How difficult is human-robot interaction?
32:01 – HRI application domains
34:24 – Optimizing the beliefs of humans
45:59 – Difficulty of driving when humans are involved
1:05:02 – Semi-autonomous driving
1:10:39 – How do we specify good rewards?
1:17:30 – Leaked information from human behavior
1:21:59 – Three laws of robotics
1:26:31 – Book recommendation
1:29:02 – If a doctor gave you 5 years to live…
1:32:48 – Small act of kindness
1:34:31 – Meaning of life
-
#80 – Vitalik Buterin: Ethereum, Cryptocurrency, and the Future of Money
From 🇺🇸 Lex Fridman Podcast, published at 2020-03-16 17:48
Vitalik Buterin is the co-creator of Ethereum and of ether, currently the second-largest digital currency after bitcoin. Ethereum has a lot of interesting technical ideas that are defining the future of blockchain technology, and Vitalik is one of the most brilliant people innovating in this space today.
Support this podcast by supporting the sponsors with a special code:
– Get ExpressVPN at https://www.expressvpn.com/lexpod
– Sign up to MasterClass at https://masterclass.com/lex
EPISODE LINKS:
Vitalik’s blog: https://vitalik.ca
Ethereum whitepaper: http://bit.ly/3cVDTpj
Casper FFG (paper): http://bit.ly/2U6j7dJ
Quadratic funding (paper): http://bit.ly/3aUZ8Wd
Bitcoin whitepaper: https://bitcoin.org/bitcoin.pdf
Mastering Ethereum (book): https://amzn.to/2xEjWmE
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 – Introduction
04:43 – Satoshi Nakamoto
08:40 – Anonymity
11:31 – Open source project leadership
13:04 – What is money?
30:02 – Blockchain and cryptocurrency basics
46:51 – Ethereum
59:23 – Proof of work
1:02:12 – Ethereum 2.0
1:13:09 – Beautiful ideas in Ethereum
1:16:59 – Future of cryptocurrency
1:22:06 – Cryptocurrency resources and people to follow
1:24:28 – Role of governments
1:27:27 – Meeting Putin
1:29:41 – Large number of cryptocurrencies
1:32:49 – Mortality
-
#79 – Lee Smolin: Quantum Gravity and Einstein’s Unfinished Revolution
From 🇺🇸 Lex Fridman Podcast, published at 2020-03-07 20:53
Lee Smolin is a theoretical physicist, co-inventor of loop quantum gravity, and a contributor of many interesting ideas to cosmology, quantum field theory, the foundations of quantum mechanics, theoretical biology, and the philosophy of science. He is the author of several books, including The Trouble with Physics, a critique of the state of physics and string theory, and his latest, Einstein’s Unfinished Revolution: The Search for What Lies Beyond the Quantum.
EPISODE LINKS:
Books mentioned:
– Einstein’s Unfinished Revolution by Lee Smolin: https://amzn.to/2TsF5c3
– The Trouble With Physics by Lee Smolin: https://amzn.to/2v1FMzy
– Against Method by Paul Feyerabend: https://amzn.to/2VOPXCD
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
This episode is presented by Cash App. Download it (App Store, Google Play), use code “LexPodcast”.
Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 – Introduction
03:03 – What is real?
05:03 – Scientific method and scientific progress
24:57 – Eric Weinstein and radical ideas in science
29:32 – Quantum mechanics and general relativity
47:24 – Sean Carroll and many-worlds interpretation of quantum mechanics
55:33 – Principles in science
57:24 – String theory
-
#78 – Ann Druyan: Cosmos, Carl Sagan, Voyager, and the Beauty of Science
From 🇺🇸 Lex Fridman Podcast, published at 2020-03-05 14:37
Ann Druyan is a writer, producer, and director, and one of the most important and impactful communicators of science of our time. She co-wrote the 1980 science documentary series Cosmos, hosted by Carl Sagan, whom she married in 1981. Her love for him was, with the help of NASA, recorded as brain waves on a golden record, along with other things our civilization has to offer, and launched into space on the Voyager 1 and Voyager 2 spacecraft, which are now, 42 years later, still active, reaching farther into deep space than any human-made object ever has. This was a profound and beautiful decision she made as the Creative Director of NASA’s Voyager Interstellar Message Project. In 2014, she went on to create the second season of Cosmos, called Cosmos: A Spacetime Odyssey, and in 2020, the new third season, Cosmos: Possible Worlds, which is being released this upcoming Monday, March 9. It is hosted, once again, by the fun and brilliant Neil deGrasse Tyson.
EPISODE LINKS:
Cosmos Twitter: https://twitter.com/COSMOSonTV
Cosmos Website: https://fox.tv/CosmosOnTV
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
This episode is presented by Cash App. Download it (App Store, Google Play), use code “LexPodcast”.
Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 – Introduction
03:24 – Role of science in society
07:04 – Love and science
09:07 – Skepticism in science
14:15 – Voyager, Carl Sagan, and the Golden Record
36:41 – Cosmos
53:22 – Existential threats
1:00:36 – Origin of life
1:04:22 – Mortality
-
#77 – Alex Garland: Ex Machina, Devs, Annihilation, and the Poetry of Science
From 🇺🇸 Lex Fridman Podcast, published at 2020-03-03 16:07
Alex Garland is a writer and director of many imaginative and philosophical films, from the dreamlike exploration of human self-destruction in the movie Annihilation to the deep questions of consciousness and intelligence raised in the movie Ex Machina, which to me is one of the greatest movies on artificial intelligence ever made. I’m releasing this podcast to coincide with the release of his new series, Devs, which will premiere this Thursday, March 5, on Hulu.
EPISODE LINKS:
Devs: https://hulu.tv/2x35HaH
Annihilation: https://hulu.tv/3ai9Eqk
Ex Machina: https://www.netflix.com/title/80023689
Alex’s IMDb: https://www.imdb.com/name/nm0307497/
Alex’s Wiki: https://en.wikipedia.org/wiki/Alex_Garland
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
This episode is presented by Cash App. Download it (App Store, Google Play), use code “LexPodcast”.
Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 – Introduction
03:42 – Are we living in a dream?
07:15 – Aliens
12:34 – Science fiction: imagination becoming reality
17:29 – Artificial intelligence
22:40 – The new “Devs” series and the veneer of virtue in Silicon Valley
31:50 – Ex Machina and 2001: A Space Odyssey
44:58 – Lone genius
49:34 – Drawing inspiration from Elon Musk
51:24 – Space travel
54:03 – Free will
57:35 – Devs and the poetry of science
1:06:38 – What will you be remembered for?
-
#76 – John Hopfield: Physics View of the Mind and Neurobiology
From 🇺🇸 Lex Fridman Podcast, published at 2020-02-29 16:09
John Hopfield is a professor at Princeton whose life’s work has woven beautifully through biology, chemistry, neuroscience, and physics. Most crucially, he saw the messy world of biology through the piercing eyes of a physicist. He is perhaps best known for his work on associative neural networks, now known as Hopfield networks, which were one of the early ideas that catalyzed the development of the modern field of deep learning.
EPISODE LINKS:
Now What? article: http://bit.ly/3843LeU
John’s Wikipedia: https://en.wikipedia.org/wiki/John_Hopfield
Books mentioned:
– Einstein’s Dreams: https://amzn.to/2PBa96X
– Mind is Flat: https://amzn.to/2I3YB84
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
This episode is presented by Cash App. Download it (App Store, Google Play), use code “LexPodcast”.
Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 – Introduction
02:35 – Difference between biological and artificial neural networks
08:49 – Adaptation
13:45 – Physics view of the mind
23:03 – Hopfield networks and associative memory
35:22 – Boltzmann machines
37:29 – Learning
39:53 – Consciousness
48:45 – Attractor networks and dynamical systems
53:14 – How do we build intelligent systems?
57:11 – Deep thinking as the way to arrive at breakthroughs
59:12 – Brain-computer interfaces
1:06:10 – Mortality
1:08:12 – Meaning of life
-
#75 – Marcus Hutter: Universal Artificial Intelligence, AIXI, and AGI
From 🇺🇸 Lex Fridman Podcast, published at 2020-02-26 17:45
Marcus Hutter is a senior research scientist at DeepMind and a professor at the Australian National University. Throughout his research career, including work with Jürgen Schmidhuber and Shane Legg, he has proposed a lot of interesting ideas in and around the field of artificial general intelligence, including the development of the AIXI model, a mathematical approach to AGI that incorporates ideas from Kolmogorov complexity, Solomonoff induction, and reinforcement learning.
EPISODE LINKS:
Hutter Prize: http://prize.hutter1.net
Marcus’s website: http://www.hutter1.net
Books mentioned:
– Universal AI: https://amzn.to/2waIAuw
– AI: A Modern Approach: https://amzn.to/3camxnY
– Reinforcement Learning: https://amzn.to/2PoANj9
– Theory of Knowledge: https://amzn.to/3a6Vp7x
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
This episode is presented by Cash App. Download it (App Store, Google Play), use code “LexPodcast”.
Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 – Introduction
03:32 – Universe as a computer
05:48 – Occam’s razor
09:26 – Solomonoff induction
15:05 – Kolmogorov complexity
20:06 – Cellular automata
26:03 – What is intelligence?
35:26 – AIXI – Universal Artificial Intelligence
1:05:24 – Where do rewards come from?
1:12:14 – Reward function for human existence
1:13:32 – Bounded rationality
1:16:07 – Approximation in AIXI
1:18:01 – Gödel machines
1:21:51 – Consciousness
1:27:15 – AGI community
1:32:36 – Book recommendations
1:36:07 – Two moments to relive (past and future)
-
#74 – Michael I. Jordan: Machine Learning, Recommender Systems, and the Future of AI
From 🇺🇸 Lex Fridman Podcast, published at 2020-02-24 13:46
Michael I. Jordan is a professor at Berkeley, and one of the most influential people in the history of machine learning, statistics, and artificial intelligence. He has been cited over 170,000 times and has mentored many of the world-class researchers defining the field of AI today, including Andrew Ng, Zoubin Ghahramani, Ben Taskar, and Yoshua Bengio.
EPISODE LINKS:
(Blog post) Artificial Intelligence—The Revolution Hasn’t Happened Yet
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
This episode is presented by Cash App. Download it (App Store, Google Play), use code “LexPodcast”.
Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 – Introduction
03:02 – How far are we in development of AI?
08:25 – Neuralink and brain-computer interfaces
14:49 – The term “artificial intelligence”
19:00 – Does science progress by ideas or personalities?
19:55 – Disagreement with Yann LeCun
23:53 – Recommender systems and distributed decision-making at scale
43:34 – Facebook, privacy, and trust
1:01:11 – Are human beings fundamentally good?
1:02:32 – Can a human life and society be modeled as an optimization problem?
1:04:27 – Is the world deterministic?
1:04:59 – Role of optimization in multi-agent systems
1:09:52 – Optimization of neural networks
1:16:08 – Beautiful idea in optimization: Nesterov acceleration
1:19:02 – What is statistics?
1:29:21 – What is intelligence?
1:37:01 – Advice for students
1:39:57 – Which language is more beautiful: English or French?
-
#73 – Andrew Ng: Deep Learning, Education, and Real-World AI
From 🇺🇸 Lex Fridman Podcast, published at 2020-02-20 17:11
Andrew Ng is one of the most impactful educators, researchers, innovators, and leaders in artificial intelligence and the technology space in general. He co-founded Coursera and Google Brain, launched deeplearning.ai, Landing.ai, and the AI Fund, and was the Chief Scientist at Baidu. As a Stanford professor, and with Coursera and deeplearning.ai, he has helped educate and inspire millions of students, including me.
EPISODE LINKS:
Andrew’s Twitter: https://twitter.com/AndrewYNg
Andrew’s Facebook: https://www.facebook.com/andrew.ng.96
Andrew’s LinkedIn: https://www.linkedin.com/in/andrewyng/
deeplearning.ai: https://www.deeplearning.ai
landing.ai: https://landing.ai
AI Fund: https://aifund.ai/
AI for Everyone: https://www.coursera.org/learn/ai-for-everyone
The Batch newsletter: https://www.deeplearning.ai/thebatch/
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
This episode is presented by Cash App. Download it (App Store, Google Play), use code “LexPodcast”.
This episode is also supported by the Techmeme Ride Home podcast. Get it on Apple Podcasts, on its website, or find it by searching “Ride Home” in your podcast app.
Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 – Introduction
02:23 – First few steps in AI
05:05 – Early days of online education
16:07 – Teaching on a whiteboard
17:46 – Pieter Abbeel and early research at Stanford
23:17 – Early days of deep learning
32:55 – Quick preview: deeplearning.ai, landing.ai, and AI fund
33:23 – deeplearning.ai: how to get started in deep learning
45:55 – Unsupervised learning
49:40 – deeplearning.ai (continued)
56:12 – Career in deep learning
58:56 – Should you get a PhD?
1:03:28 – AI fund – building startups
1:11:14 – Landing.ai – growing AI efforts in established companies
1:20:44 – Artificial general intelligence
-
#72 – Scott Aaronson: Quantum Computing
From 🇺🇸 Lex Fridman Podcast, published at 2020-02-17 21:21
Scott Aaronson is a professor at UT Austin, director of its Quantum Information Center, and previously a professor at MIT. His research interests center around the capabilities and limits of quantum computers and computational complexity theory more generally.
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
This episode is presented by Cash App. Download it (App Store, Google Play), use code “LexPodcast”.
This episode is also supported by the Techmeme Ride Home podcast. Get it on Apple Podcasts, on its website, or find it by searching “Ride Home” in your podcast app.
Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
00:00 – Introduction
05:07 – Role of philosophy in science
29:27 – What is a quantum computer?
41:12 – Quantum decoherence (noise in quantum information)
49:22 – Quantum computer engineering challenges
51:00 – Moore’s Law
56:33 – Quantum supremacy
1:12:18 – Using quantum computers to break cryptography
1:17:11 – Practical application of quantum computers
1:22:18 – Quantum machine learning, questionable claims, and cautious optimism
1:30:53 – Meaning of life
-
Vladimir Vapnik: Predicates, Invariants, and the Essence of Intelligence
From 🇺🇸 Lex Fridman Podcast, published at 2020-02-14 17:22
Vladimir Vapnik is the co-inventor of support vector machines, support vector clustering, VC theory, and many other foundational ideas in statistical learning. He was born in the Soviet Union and worked at the Institute of Control Sciences in Moscow, then moved to the US, where he worked at AT&T, NEC Labs, and Facebook AI Research, and is now a professor at Columbia University. His work has been cited over 200,000 times.
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
This episode is presented by Cash App. Download it (App Store, Google Play), use code “LexPodcast”.
Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
00:00 – Introduction
02:55 – Alan Turing: science and engineering of intelligence
09:09 – What is a predicate?
14:22 – Plato’s world of ideas and world of things
21:06 – Strong and weak convergence
28:37 – Deep learning and the essence of intelligence
50:36 – Symbolic AI and logic-based systems
54:31 – How hard is 2D image understanding?
1:00:23 – Data
1:06:39 – Language
1:14:54 – Beautiful idea in statistical theory of learning
1:19:28 – Intelligence and heuristics
1:22:23 – Reasoning
1:25:11 – Role of philosophy in learning theory
1:31:40 – Music (speaking in Russian)
1:35:08 – Mortality
-
Jim Keller: Moore’s Law, Microprocessors, Abstractions, and First Principles
From 🇺🇸 Lex Fridman Podcast, published at 2020-02-05 20:08
Jim Keller is a legendary microprocessor engineer, having worked at AMD, Apple, Tesla, and now Intel. He is known for his work on the AMD K7, K8, K12, and Zen microarchitectures and the Apple A4 and A5 processors, and for co-authoring the specifications for the x86-64 instruction set and HyperTransport interconnect.
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
This episode is presented by Cash App. Download it (App Store, Google Play), use code “LexPodcast”.
Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
00:00 – Introduction
02:12 – Difference between a computer and a human brain
03:43 – Computer abstraction layers and parallelism
17:53 – If you run a program multiple times, do you always get the same answer?
20:43 – Building computers and teams of people
22:41 – Start from scratch every 5 years
30:05 – Moore’s law is not dead
55:47 – Is superintelligence the next layer of abstraction?
1:00:02 – Is the universe a computer?
1:03:00 – Ray Kurzweil and exponential improvement in technology
1:04:33 – Elon Musk and Tesla Autopilot
1:20:51 – Lessons from working with Elon Musk
1:28:33 – Existential threats from AI
1:32:38 – Happiness and the meaning of life
-
David Chalmers: The Hard Problem of Consciousness
From 🇺🇸 Lex Fridman Podcast, published at 2020-01-29 21:38
David Chalmers is a philosopher and cognitive scientist specializing in philosophy of mind, philosophy of language, and consciousness. He is perhaps best known for formulating the hard problem of consciousness, which could be stated as “why does the feeling which accompanies awareness of sensory information exist at all?”
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
This episode is presented by Cash App. Download it (App Store, Google Play), use code “LexPodcast”.
Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
00:00 – Introduction
02:23 – Nature of reality: Are we living in a simulation?
19:19 – Consciousness in virtual reality
27:46 – Music-color synesthesia
31:40 – What is consciousness?
51:25 – Consciousness and the meaning of life
57:33 – Philosophical zombies
1:01:38 – Creating the illusion of consciousness
1:07:03 – Conversation with a clone
1:11:35 – Free will
1:16:35 – Meta-problem of consciousness
1:18:40 – Is reality an illusion?
1:20:53 – Descartes’ evil demon
1:23:20 – Does AGI need consciousness?
1:33:47 – Exciting future
1:35:32 – Immortality
-
Cristos Goodrow: YouTube Algorithm
From 🇺🇸 Lex Fridman Podcast, published at 2020-01-25 19:33
Cristos Goodrow is VP of Engineering at Google and head of Search and Discovery at YouTube (aka YouTube Algorithm).
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
This episode is presented by Cash App. Download it (App Store, Google Play), use code “LexPodcast”.
Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
00:00 – Introduction
03:26 – Life-long trajectory through YouTube
07:30 – Discovering new ideas on YouTube
13:33 – Managing healthy conversation
23:02 – YouTube Algorithm
38:00 – Analyzing the content of video itself
44:38 – Clickbait thumbnails and titles
47:50 – Feeling like I’m helping the YouTube algorithm get smarter
50:14 – Personalization
51:44 – What does success look like for the algorithm?
54:32 – Effect of YouTube on society
57:24 – Creators
59:33 – Burnout
1:03:27 – YouTube algorithm: heuristics, machine learning, human behavior
1:08:36 – How to make a viral video?
1:10:27 – Veritasium: Why Are 96,000,000 Black Balls on This Reservoir?
1:13:20 – Making clips from long-form podcasts
1:18:07 – Moment-by-moment signal of viewer interest
1:20:04 – Why is video understanding such a difficult AI problem?
1:21:54 – Self-supervised learning on video
1:25:44 – What does YouTube look like 10, 20, 30 years from now?