-
Melanie Mitchell: Concepts, Analogies, Common Sense & Future of AI
From 🇺🇸 Lex Fridman Podcast, published at 2019-12-28 18:42
Melanie Mitchell is a professor of computer science at Portland State University and an external professor at the Santa Fe Institute. She has worked on and written about artificial intelligence from fascinating perspectives, including adaptive complex systems, genetic algorithms, and the Copycat cognitive architecture, which places the process of analogy-making at the core of human cognition. From her doctoral work with her advisors Douglas Hofstadter and John Holland to today, she has contributed many important ideas to the field of AI, most recently in her book, simply called Artificial Intelligence: A Guide for Thinking Humans. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon. This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast". Episode Links: AI: A Guide for Thinking Humans (book) Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time. 00:00 - Introduction 02:33 - The term "artificial intelligence" 06:30 - Line between weak and strong AI 12:46 - Why have people dreamed of creating AI? 15:24 - Complex systems and intelligence 18:38 - Why are we bad at predicting the future with regard to AI? 22:05 - Are fundamental breakthroughs in AI needed? 25:13 - Different AI communities 31:28 - Copycat cognitive architecture 36:51 - Concepts and analogies 55:33 - Deep learning and the formation of concepts 1:09:07 - Autonomous vehicles 1:20:21 - Embodied AI and emotion 1:25:01 - Fear of superintelligent AI 1:36:14 - Good test for intelligence 1:38:09 - What is complexity? 1:43:09 - Santa Fe Institute 1:47:34 - Douglas Hofstadter 1:49:42 - Proudest moment
-
Jim Gates: Supersymmetry, String Theory and Proving Einstein Right
From 🇺🇸 Lex Fridman Podcast, published at 2019-12-25 16:09
Jim Gates (S James Gates Jr.) is a theoretical physicist and professor at Brown University working on supersymmetry, supergravity, and superstring theory. He served on former President Obama's Council of Advisors on Science and Technology. He is the co-author of a new book titled Proving Einstein Right about the scientists who set out to prove Einstein's theory of relativity. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon. This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast". Episode Links: Proving Einstein Right (book) Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time. 00:00 - Introduction 03:13 - Will we ever venture outside our solar system? 05:16 - When will the first human step foot on Mars? 11:14 - Are we alone in the universe? 13:55 - Most beautiful idea in physics 16:29 - Can the mind be digitized? 21:15 - Does the possibility of superintelligence excite you? 22:25 - Role of dreaming in creativity and mathematical thinking 30:51 - Existential threats 31:46 - Basic particles underlying our universe 41:28 - What is supersymmetry? 52:19 - Adinkra symbols 1:00:24 - String theory 1:07:02 - Proving Einstein right and experimental validation of general relativity 1:19:07 - Richard Feynman 1:22:01 - Barack Obama's Council of Advisors on Science and Technology 1:30:20 - Exciting problems in physics that are just within our reach 1:31:26 - Mortality
-
Sebastian Thrun: Flying Cars, Autonomous Vehicles, and Education
From 🇺🇸 Lex Fridman Podcast, published at 2019-12-21 17:48
Sebastian Thrun is one of the greatest roboticists, computer scientists, and educators of our time. He led the development of the autonomous vehicles at Stanford that won the 2005 DARPA Grand Challenge and placed second in the 2007 DARPA Urban Challenge. He then led the Google self-driving car program, which launched the self-driving revolution. He taught the popular Stanford course on Artificial Intelligence in 2011, which was one of the first MOOCs. That experience led him to co-found Udacity, an online education platform. He is also the CEO of Kitty Hawk, a company working on building flying cars, or more technically eVTOLs, which stands for electric vertical take-off and landing aircraft. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast". Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time. 00:00 - Introduction 03:24 - The Matrix 04:39 - Predicting the future 30+ years ago 06:14 - Machine learning and expert systems 09:18 - How to pick what ideas to work on 11:27 - DARPA Grand Challenges 17:33 - What does it take to be a good leader? 23:44 - Autonomous vehicles 38:42 - Waymo and Tesla Autopilot 42:11 - Self-Driving Car Nanodegree 47:29 - Machine learning 51:10 - AI in medical applications 54:06 - AI-related job loss and education 57:51 - Teaching soft skills 1:00:13 - Kitty Hawk and flying cars 1:08:22 - Love and AI 1:13:12 - Life
-
Michael Stevens: Vsauce
From 🇺🇸 Lex Fridman Podcast, published at 2019-12-17 14:11
Michael Stevens is the creator of Vsauce, one of the most popular educational YouTube channels in the world, with over 15 million subscribers and over 1.7 billion views. His videos often ask and answer questions that are both profound and entertaining, spanning topics from physics to psychology. As part of his channel, he created three seasons of Mind Field, a series that explored human behavior. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast". Episode links: Vsauce YouTube: https://www.youtube.com/Vsauce Vsauce Twitter: https://twitter.com/tweetsauce Vsauce Instagram: https://www.instagram.com/electricpants/ Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time. 00:00 - Introduction 02:26 - Psychology 03:59 - Consciousness 06:55 - Free will 07:55 - Perception vs reality 09:59 - Simulation 11:32 - Science 16:24 - Flat earth 27:04 - Artificial Intelligence 30:14 - Existential threats 38:03 - Elon Musk and the responsibility of having a large following 43:05 - YouTube algorithm 52:41 - Mortality and the meaning of life
-
Rohit Prasad: Amazon Alexa and Conversational AI
From 🇺🇸 Lex Fridman Podcast, published at 2019-12-14 15:02
Rohit Prasad is the vice president and head scientist of Amazon Alexa and one of its original creators. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast". The episode is also supported by ZipRecruiter. Try it: http://ziprecruiter.com/lexpod Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time. 00:00 - Introduction 04:34 - Her 06:31 - Human-like aspects of smart assistants 08:39 - Test of intelligence 13:04 - Alexa prize 21:35 - What does it take to win the Alexa prize? 27:24 - Embodiment and the essence of Alexa 34:35 - Personality 36:23 - Personalization 38:49 - Alexa's backstory from her perspective 40:35 - Trust in Human-AI relations 44:00 - Privacy 47:45 - Is Alexa listening? 53:51 - How Alexa started 54:51 - Solving far-field speech recognition and intent understanding 1:11:51 - Alexa main categories of skills 1:13:19 - Conversation intent modeling 1:17:47 - Alexa memory and long-term learning 1:22:50 - Making Alexa sound more natural 1:27:16 - Open problems for Alexa and conversational AI 1:29:26 - Emotion recognition from audio and video 1:30:53 - Deep learning and reasoning 1:36:26 - Future of Alexa 1:41:47 - The big picture of conversational AI
-
Judea Pearl: Causal Reasoning, Counterfactuals, Bayesian Networks, and the Path to AGI
From 🇺🇸 Lex Fridman Podcast, published at 2019-12-11 16:33
Judea Pearl is a professor at UCLA and a winner of the Turing Award, generally recognized as the Nobel Prize of computing. He is one of the seminal figures in the fields of artificial intelligence, computer science, and statistics. He has developed and championed probabilistic approaches to AI, including Bayesian networks, and profound ideas about causality in general. These ideas are important not just for AI, but for our understanding and practice of science. But in the field of AI, the idea of causality, of cause and effect, to many lies at the core of what is currently missing and what must be developed in order to build truly intelligent systems. For this reason, and many others, his work is worth returning to often. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast". Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time. 00:00 - Introduction 03:18 - Descartes and analytic geometry 06:25 - Good way to teach math 07:10 - From math to engineering 09:14 - Does God play dice? 10:47 - Free will 11:59 - Probability 22:21 - Machine learning 23:13 - Causal Networks 27:48 - Intelligent systems that reason with causation 29:29 - Do(x) operator 36:57 - Counterfactuals 44:12 - Reasoning by Metaphor 51:15 - Machine learning and causal reasoning 53:28 - Temporal aspect of causation 56:21 - Machine learning (continued) 59:15 - Human-level artificial intelligence 1:04:08 - Consciousness 1:04:31 - Concerns about AGI 1:09:53 - Religion and robotics 1:12:07 - Daniel Pearl 1:19:09 - Advice for students 1:21:00 - Legacy
-
Whitney Cummings: Comedy, Robotics, Neurology, and Love
From 🇺🇸 Lex Fridman Podcast, published at 2019-12-05 12:41
Whitney Cummings is a stand-up comedian, actor, producer, writer, director, and the host of a new podcast called Good for You. Her most recent Netflix special, "Can I Touch It?", features a robot she affectionately named Bearclaw, designed to be a visual replica of Whitney. It's exciting for me to see one of my favorite comedians explore the social aspects of robotics and AI in our society. She also has some fascinating ideas about human behavior, psychology, and neurology, some of which she explores in her book "I'm Fine...And Other Lies." This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast". The episode is also supported by ZipRecruiter. Try it: http://ziprecruiter.com/lexpod Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time. 00:00 - Introduction 03:51 - Eye contact 04:42 - Robot gender 08:49 - Whitney's robot (Bearclaw) 12:17 - Human reaction to robots 14:09 - Fear of robots 25:15 - Surveillance 29:35 - Animals 35:01 - Compassion from people who own robots 37:55 - Passion 44:57 - Neurology 56:38 - Social media 1:04:35 - Love 1:13:40 - Mortality
-
Ray Dalio: Principles, the Economic Machine, Artificial Intelligence & the Arc of Life
From 🇺🇸 Lex Fridman Podcast, published at 2019-12-02 17:09
Ray Dalio is the founder, Co-Chairman, and Co-Chief Investment Officer of Bridgewater Associates, one of the world's largest and most successful investment firms, famous for the principles of radical truth and transparency that underlie its culture. Ray is one of the wealthiest people in the world, with ideas that extend far beyond the specifics of how he made that wealth. His ideas, applicable to everyone, are brilliantly summarized in his book Principles. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast". Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time. 00:00 - Introduction 02:56 - Doing something that's never been done before 08:39 - Shapers 13:28 - A Players 15:09 - Confidence and disagreement 17:10 - Don't confuse delusion with not knowing 24:38 - Idea meritocracy 27:39 - Is credit good for society? 32:59 - What is money? 37:13 - Bitcoin and digital currency 41:01 - The economic machine is amazing 46:24 - Principle for using AI 58:55 - Human irrationality 1:01:31 - Call for adventure at the edge of principles 1:03:26 - The line between madness and genius 1:04:30 - Automation 1:07:28 - American dream 1:14:02 - Can money buy happiness? 1:19:48 - Work-life balance and the arc of life 1:28:01 - Meaning of life
-
Noam Chomsky: Language, Cognition, and Deep Learning
From 🇺🇸 Lex Fridman Podcast, published at 2019-11-29 15:11
Noam Chomsky is one of the greatest minds of our time and one of the most cited scholars in history. He is a linguist, philosopher, cognitive scientist, historian, social critic, and political activist. He has spent over 60 years at MIT and recently also joined the University of Arizona. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast". Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time. 00:00 - Introduction 03:59 - Common language with an alien species 05:46 - Structure of language 07:18 - Roots of language in our brain 08:51 - Language and thought 09:44 - The limit of human cognition 16:48 - Neuralink 19:32 - Deepest property of language 22:13 - Limits of deep learning 28:01 - Good and evil 29:52 - Memorable experiences 33:29 - Mortality 34:23 - Meaning of life
-
Gilbert Strang: Linear Algebra, Deep Learning, Teaching, and MIT OpenCourseWare
From 🇺🇸 Lex Fridman Podcast, published at 2019-11-25 14:04
Gilbert Strang is a professor of mathematics at MIT and perhaps one of the most famous and impactful teachers of math in the world. His MIT OpenCourseWare lectures on linear algebra have been viewed millions of times. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. This episode is presented by Cash App. Download it, use code LexPodcast. And it is supported by ZipRecruiter. Try it: http://ziprecruiter.com/lexpod Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time. 00:00 - Introduction 03:45 - Math rockstar 05:10 - MIT OpenCourseWare 07:29 - Four Fundamental Subspaces of Linear Algebra 13:11 - Linear Algebra vs Calculus 15:03 - Singular value decomposition 19:47 - Why people like math 23:38 - Teaching by example 25:04 - Andrew Yang 26:46 - Society for Industrial and Applied Mathematics 29:21 - Deep learning 37:28 - Theory vs application 38:54 - Open problems in mathematics 39:00 - Linear algebra as a subfield of mathematics 41:52 - Favorite matrix 46:19 - Advice for students on their journey through math 47:37 - Looking back
-
Dava Newman: Space Exploration, Space Suits, and Life on Mars
From 🇺🇸 Lex Fridman Podcast, published at 2019-11-22 18:14
Dava Newman is the Apollo Program Professor of AeroAstro at MIT and the former Deputy Administrator of NASA, and she has been a principal investigator on four spaceflight missions. Her research interests are in aerospace biomedical engineering, investigating human performance in varying gravity environments. She has developed a space activity suit, the BioSuit, which would provide pressure through compression directly on the skin via the suit's textile weave, patterning, and materials, rather than with pressurized gas. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. This episode is presented by Cash App. Download it, use code LexPodcast. You get $10 and $10 is donated to FIRST, one of my favorite nonprofit organizations that inspires young minds through robotics and STEM education. Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time. 00:00 - Introduction 03:11 - Circumnavigating the globe by boat 05:11 - Exploration 07:17 - Life on Mars 11:07 - Intelligent life in the universe 12:25 - Advanced propulsion technology 13:32 - The Moon and NASA's Artemis program 19:17 - SpaceX 21:45 - Science on a CubeSat 23:45 - Reusable rockets 25:23 - Spacesuit of the future 32:01 - AI in Space 35:31 - Interplanetary species 36:57 - Future of space exploration
-
Michael Kearns: Algorithmic Fairness, Bias, Privacy, and Ethics in Machine Learning
From 🇺🇸 Lex Fridman Podcast, published at 2019-11-19 17:52
Michael Kearns is a professor at the University of Pennsylvania and a co-author of the new book The Ethical Algorithm, which is the focus of much of our conversation, including algorithmic fairness, bias, privacy, and ethics in general. But that is just one of many fields in which Michael is a world-class researcher; we also briefly touch on learning theory (the theoretical foundations of machine learning), game theory, algorithmic trading, quantitative finance, computational social science, and more. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. This episode is sponsored by Pessimists Archive podcast. Here's the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode): 00:00 - Introduction 02:45 - Influence from literature and journalism 07:39 - Are most people good? 13:05 - Ethical algorithm 24:28 - Algorithmic fairness of groups vs individuals 33:36 - Fairness tradeoffs 46:29 - Facebook, social networks, and algorithmic ethics 58:04 - Machine learning 59:19 - Algorithm that determines what is fair 1:01:25 - Computer scientists should think about ethics 1:05:59 - Algorithmic privacy 1:11:50 - Differential privacy 1:19:10 - Privacy by misinformation 1:22:31 - Privacy of data in society 1:27:49 - Game theory 1:29:40 - Nash equilibrium 1:30:35 - Machine learning and game theory 1:34:52 - Mutual assured destruction 1:36:56 - Algorithmic trading 1:44:09 - Pivotal moment in graduate school
-
Elon Musk: Neuralink, AI, Autopilot, and the Pale Blue Dot
From 🇺🇸 Lex Fridman Podcast, published at 2019-11-12 17:31
Elon Musk is the CEO of Tesla, SpaceX, Neuralink, and a co-founder of several other companies. This is the second time Elon has been on the podcast. You can watch the first time on YouTube or listen to the first time on its episode page. You can read the transcript (PDF) here. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. Here's the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode): 00:00 - Introduction 01:57 - Consciousness 05:58 - Regulation of AI Safety 09:39 - Neuralink - understanding the human brain 11:53 - Neuralink - expanding the capacity of the human mind 17:51 - Neuralink - future challenges, solutions, and impact 24:59 - Smart Summon 27:18 - Tesla Autopilot and Full Self-Driving 31:16 - Carl Sagan and the Pale Blue Dot
-
Bjarne Stroustrup: C++
From 🇺🇸 Lex Fridman Podcast, published at 2019-11-07 17:47
Bjarne Stroustrup is the creator of C++, a programming language that, after 40 years, is still one of the most popular and powerful languages in the world. Its focus on fast, stable, robust code underlies many of the biggest systems in the world that we have come to rely on as a society. If you're watching this on YouTube, many of the critical back-end components of YouTube are written in C++. The same goes for Google, Facebook, Amazon, Twitter, most Microsoft applications, Adobe applications, most database systems, and most physical systems that operate in the real world, like cars, robots, and rockets that launch us into space and will one day land us on Mars. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. Here's the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode): 00:00 - Introduction 01:40 - First program 02:18 - Journey to C++ 16:45 - Learning multiple languages 23:20 - JavaScript 25:08 - Efficiency and reliability in C++ 31:53 - What does good code look like? 36:45 - Static checkers 41:16 - Zero-overhead principle in C++ 50:00 - Different implementations of C++ 54:46 - Key features of C++ 1:08:02 - C++ Concepts 1:18:06 - C++ Standards Process 1:28:05 - Constructors and destructors 1:31:52 - Unified theory of programming 1:44:20 - Proudest moment
-
Sean Carroll: Quantum Mechanics and the Many-Worlds Interpretation
From 🇺🇸 Lex Fridman Podcast, published at 2019-11-01 16:50
Sean Carroll is a theoretical physicist at Caltech and the Santa Fe Institute specializing in quantum mechanics, the arrow of time, cosmology, and gravitation. He is the author of Something Deeply Hidden and several other popular books, and he is the host of a great podcast called Mindscape. This is the second time Sean has been on the podcast. You can watch the first time on YouTube or listen to the first time on its episode page. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. Here's the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode): 00:00 - Introduction 01:23 - Capacity of human mind to understand physics 10:49 - Perception vs reality 12:29 - Conservation of momentum 17:20 - Difference between math and physics 20:10 - Why is our world so compressible 22:53 - What would Newton think of quantum mechanics 25:44 - What is quantum mechanics? 27:54 - What is an atom? 30:34 - What is the wave function? 32:30 - What is quantum entanglement? 35:19 - What is Hilbert space? 37:32 - What is entropy? 39:31 - Infinity 42:43 - Many-worlds interpretation of quantum mechanics 1:01:13 - Quantum gravity and the emergence of spacetime 1:08:34 - Our branch of reality in many-worlds interpretation 1:10:40 - Time travel 1:12:54 - Arrow of time 1:16:18 - What is fundamental in physics 1:16:58 - Quantum computers 1:17:42 - Experimental validation of many-worlds and emergent spacetime 1:19:53 - Quantum mechanics and the human mind 1:21:51 - Mindscape podcast
-
Garry Kasparov: Chess, Deep Blue, AI, and Putin
From 🇺🇸 Lex Fridman Podcast, published at 2019-10-27 17:49
Garry Kasparov is considered by many to be the greatest chess player of all time. From 1986 until his retirement in 2005, he dominated the chess world, ranked world number 1 for most of those 19 years. While he played many historic matches against human chess players, in the long arc of history he may be remembered for his matches against a machine, IBM's Deep Blue. His initial victories and eventual loss to Deep Blue captivated the world's imagination about what role artificial intelligence systems may play in our civilization's future. That excitement inspired an entire generation of AI researchers, including myself, to get into the field. Garry is also a pro-democracy political thinker and leader, a fearless human-rights activist, and the author of several books, including How Life Imitates Chess, a book on strategy and decision-making; Winter Is Coming, a book articulating his opposition to the Putin regime; and Deep Thinking, a book on the role of both artificial intelligence and human intelligence in defining our future. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. Here's the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode): 00:00 - Introduction 01:33 - Love of winning and hatred of losing 04:54 - Psychological elements 09:03 - Favorite games 16:48 - Magnus Carlsen 23:06 - IBM Deep Blue 37:39 - Morality 38:59 - Autonomous vehicles 42:03 - Fall of the Soviet Union 45:50 - Putin 52:25 - Life
-
Michio Kaku: Future of Humans, Aliens, Space Travel & Physics
From 🇺🇸 Lex Fridman Podcast, published at 2019-10-22 14:26
Michio Kaku is a theoretical physicist, futurist, and professor at the City College of New York. He is the author of many fascinating books on the nature of our reality and the future of our civilization. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. Here's the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode): 00:00 - Introduction 01:14 - Contact with Aliens in the 21st century 06:36 - Multiverse and Nirvana 09:46 - String Theory 11:07 - Einstein's God 15:01 - Would aliens hurt us? 17:34 - What would aliens look like? 22:13 - Brain-machine interfaces 27:35 - Existential risk from AI 30:22 - Digital immortality 34:02 - Biological immortality 37:42 - Does mortality give meaning? 43:42 - String theory 47:16 - Universe as a computer and a simulation 53:16 - First human on Mars
-
David Ferrucci: IBM Watson, Jeopardy & Deep Conversations with AI
From 🇺🇸 Lex Fridman Podcast, published at 2019-10-11 16:46
David Ferrucci led the team that built Watson, the IBM question-answering system that beat the top humans in the world at the game of Jeopardy. He is also the Founder, CEO, and Chief Scientist of Elemental Cognition, a company working to engineer AI systems that understand the world the way people do. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on iTunes or support it on Patreon. Here's the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode): 00:00 - Introduction 01:06 - Biological vs computer systems 08:03 - What is intelligence? 31:49 - Knowledge frameworks 52:02 - IBM Watson winning Jeopardy 1:24:21 - Watson vs human difference in approach 1:27:52 - Q&A vs dialogue 1:35:22 - Humor 1:41:33 - Good test of intelligence 1:46:36 - AlphaZero, AlphaStar accomplishments 1:51:29 - Explainability, induction, deduction in medical diagnosis 1:59:34 - Grand challenges 2:04:03 - Consciousness 2:08:26 - Timeline for AGI 2:13:55 - Embodied AI 2:17:07 - Love and companionship 2:18:06 - Concerns about AI 2:21:56 - Discussion with AGI
-
Gary Marcus: Toward a Hybrid of Deep Learning and Symbolic AI
From 🇺🇸 Lex Fridman Podcast, published at 2019-10-03 11:26
Gary Marcus is a professor emeritus at NYU and the founder of Robust.AI and Geometric Intelligence, the latter a machine learning company acquired by Uber in 2016. He is the author of several books on natural and artificial intelligence, including his new book Rebooting AI: Building Machines We Can Trust. Gary has been a critical voice highlighting the limits of deep learning and discussing the challenges the AI community must solve in order to achieve artificial general intelligence. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on iTunes or support it on Patreon. Here's the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode): 00:00 - Introduction 01:37 - Singularity 05:48 - Physical and psychological knowledge 10:52 - Chess 14:32 - Language vs physical world 17:37 - What does AI look like 100 years from now 21:28 - Flaws of the human mind 25:27 - General intelligence 28:25 - Limits of deep learning 44:41 - Expert systems and symbol manipulation 48:37 - Knowledge representation 52:52 - Increasing compute power 56:27 - How human children learn 57:23 - Innate knowledge and learned knowledge 1:06:43 - Good test of intelligence 1:12:32 - Deep learning and symbol manipulation 1:23:35 - Guitar
-
Peter Norvig: Artificial Intelligence: A Modern Approach
From 🇺🇸 Lex Fridman Podcast, published at 2019-09-30 17:44
Peter Norvig is a research director at Google and the co-author, with Stuart Russell, of the book Artificial Intelligence: A Modern Approach, which educated and inspired a whole generation of researchers, including myself, to get into the field. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on iTunes or support it on Patreon. Here's the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode): 00:00 - Introduction 00:37 - Artificial Intelligence: A Modern Approach 09:11 - Covering the entire field of AI 15:42 - Expert systems and knowledge representation 18:31 - Explainable AI 23:15 - Trust 25:47 - Education - Intro to AI - MOOC 32:43 - Learning to program in 10 years 37:12 - Changing nature of mastery 40:01 - Code review 41:17 - How have you changed as a programmer 43:05 - LISP 47:41 - Python 48:32 - Early days of Google Search 53:24 - What does it take to build human-level intelligence 55:14 - Her 57:00 - Test of intelligence 58:41 - Future threats from AI 1:00:58 - Exciting open problems in AI