
Lex Fridman Podcast (Lex Fridman)
Explore every episode of Lex Fridman Podcast
Dive into the complete episode list for Lex Fridman Podcast. Each episode is cataloged with detailed descriptions, making it easy to find and explore specific topics. Keep track of all episodes from your favorite podcast and never miss a moment of insightful content.
Pub. Date | Title | Duration
---|---|---
26 Aug 2018 | Max Tegmark: Life 3.0 | 01:22:38
A conversation with Max Tegmark as part of the MIT course on Artificial General Intelligence. Video version is available on YouTube. He is a physics professor at MIT, co-founder of the Future of Life Institute, and author of "Life 3.0: Being Human in the Age of Artificial Intelligence." If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, or YouTube, where you can watch the video versions of these conversations.
02 Sep 2018 | Christof Koch: Consciousness | 00:59:37
A conversation with Christof Koch as part of the MIT course on Artificial General Intelligence. Video version is available on YouTube. He is the President and Chief Scientific Officer of the Allen Institute for Brain Science in Seattle. From 1986 until 2013, he was a professor at Caltech. His work has been cited more than 105,000 times. He is the author of several books, including "Consciousness: Confessions of a Romantic Reductionist."
17 Oct 2018 | Steven Pinker: AI in the Age of Reason | 00:38:23
Steven Pinker is a professor at Harvard and before that was a professor at MIT. He is the author of many books, several of which have had a big impact on the way I see the world for the better. In particular, The Better Angels of Our Nature and Enlightenment Now have instilled in me a sense of optimism grounded in data, science, and reason.
20 Oct 2018 | Yoshua Bengio: Deep Learning | 00:42:54
Yoshua Bengio, along with Geoffrey Hinton and Yann LeCun, is considered one of the three people most responsible for the advancement of deep learning during the 1990s, 2000s, and today. Cited over 139,000 times, he has been integral to some of the biggest breakthroughs in AI over the past three decades.
16 Nov 2018 | Vladimir Vapnik: Statistical Learning | 00:54:12
Vladimir Vapnik is the co-inventor of support vector machines, support vector clustering, VC theory, and many foundational ideas in statistical learning. His work has been cited over 170,000 times. He has some very interesting ideas about artificial intelligence and the nature of learning, especially on the limits of our current approaches and the open problems in the field.
22 Nov 2018 | Guido van Rossum: Python | 01:26:51
Guido van Rossum is the creator of Python, one of the most popular and impactful programming languages in the world.
29 Nov 2018 | Jeff Atwood: Stack Overflow and Coding Horror | 01:20:15
Jeff Atwood is a co-founder of Stack Overflow and Stack Exchange, websites that are visited by millions of people every day. Much like with Wikipedia, it is difficult to overstate the impact this network of sites has had on global knowledge and productivity. Jeff is also the author of the famed Coding Horror blog and the founder of Discourse, an open-source software project that seeks to improve the quality of online community discussions.
04 Dec 2018 | Eric Schmidt: Google | 00:33:18
Eric Schmidt was the CEO of Google from 2001 to 2011, and its executive chairman from 2011 to 2017, guiding the company through a period of incredible growth and a series of world-changing innovations.
09 Dec 2018 | Stuart Russell: Long-Term Future of AI | 01:26:27
Stuart Russell is a professor of computer science at UC Berkeley and a co-author of the book that introduced me and millions of other people to AI, called Artificial Intelligence: A Modern Approach.
16 Dec 2018 | Pieter Abbeel: Deep Reinforcement Learning | 00:42:56
Pieter Abbeel is a professor at UC Berkeley, director of the Berkeley Robot Learning Lab, and one of the top researchers in the world working on how to make robots understand and interact with the world around them, especially through imitation and deep reinforcement learning.
23 Dec 2018 | Juergen Schmidhuber: Gödel Machines, Meta-Learning, and LSTMs | 01:20:06
Juergen Schmidhuber is the co-creator of long short-term memory networks (LSTMs), which are used in billions of devices today for speech recognition, translation, and much more. Over the past 30 years, he has proposed many interesting, out-of-the-box ideas in artificial intelligence, including a formal theory of creativity.
28 Dec 2018 | Tuomas Sandholm: Poker and Game Theory | 01:06:26
Tuomas Sandholm is a professor at CMU and co-creator of Libratus, the first AI system to beat top human players at the game of Heads-Up No-Limit Texas Hold'em. He has published over 450 papers on game theory and machine learning, including a best-paper award at NIPS (now NeurIPS) in 2017. His research and companies have had wide-reaching impact, especially because he and his group not only propose new ideas but also build systems to prove that those ideas work in the real world.
19 Jan 2019 | Tomaso Poggio: Brains, Minds, and Machines | 01:20:29
Tomaso Poggio is a professor at MIT and the director of the Center for Brains, Minds, and Machines. Cited over 100,000 times, his work has had a profound impact on our understanding of the nature of intelligence, in both biological neural networks and artificial ones. He has been an advisor to many highly impactful researchers and entrepreneurs in AI, including Demis Hassabis of DeepMind, Amnon Shashua of Mobileye, and Christof Koch of the Allen Institute for Brain Science.
07 Feb 2019 | Kyle Vogt: Cruise Automation | 00:55:34
Kyle Vogt is the President and CTO of Cruise Automation, leading an effort to solve one of the biggest robotics challenges of our time: vehicle autonomy. He is the co-founder of two successful companies (Cruise and Twitch) that were each acquired for around a billion dollars.
12 Mar 2019 | Leslie Kaelbling: Reinforcement Learning, Planning, and Robotics | 01:01:34
Leslie Kaelbling is a roboticist and professor at MIT. She is recognized for her work in reinforcement learning, planning, robot navigation, and several other topics in AI. She won the IJCAI Computers and Thought Award and was the editor-in-chief of the prestigious Journal of Machine Learning Research.
20 Mar 2019 | Eric Weinstein: Revolutionary Ideas in Science, Math, and Society | 01:21:45
Eric Weinstein is a mathematician, economist, physicist, and managing director of Thiel Capital. He coined the term "intellectual dark web" for a loosely assembled group of public intellectuals that includes Sam Harris, Jordan Peterson, Steven Pinker, Joe Rogan, Michael Shermer, and a few others.
03 Apr 2019 | Greg Brockman: OpenAI and AGI | 01:25:15
Greg Brockman is the co-founder and CTO of OpenAI, a research organization developing ideas in AI with the goal of eventually arriving at a safe and friendly artificial general intelligence that benefits and empowers humanity.
12 Apr 2019 | Elon Musk: Tesla Autopilot | 00:32:58
Elon Musk is the CEO of Tesla, SpaceX, and Neuralink, and a co-founder of several other companies.
18 Apr 2019 | Ian Goodfellow: Generative Adversarial Networks (GANs) | 01:08:47
Ian Goodfellow is the author of the popular textbook on deep learning (simply titled "Deep Learning"). He coined the term Generative Adversarial Networks (GANs) and with his 2014 paper is responsible for launching the incredible growth of research on GANs.
29 Apr 2019 | Oriol Vinyals: DeepMind AlphaStar, StarCraft, Language, and Sequences | 01:46:07
Oriol Vinyals is a senior research scientist at Google DeepMind. Before that, he was at Google Brain and Berkeley. His research has been cited over 39,000 times. He is one of the most brilliant and impactful minds in the field of deep learning, and he is behind some of the biggest papers and ideas in AI, including sequence-to-sequence learning, audio generation, image captioning, neural machine translation, and reinforcement learning. He is a co-lead (with David Silver) of the AlphaStar project, creating an agent that defeated a top professional player at the game of StarCraft.
13 May 2019 | Chris Lattner: Compilers, LLVM, Swift, TPU, and ML Accelerators | 01:13:15
Chris Lattner is a senior director at Google working on several projects, including CPU, GPU, and TPU accelerators for TensorFlow, Swift for TensorFlow, and all kinds of machine learning compiler magic going on behind the scenes. He is one of the top experts in the world on compiler technologies, which means he deeply understands the intricacies of how hardware and software come together to create efficient code. He created the LLVM compiler infrastructure project and the Clang compiler. He led major engineering efforts at Apple, including the creation of the Swift programming language. He also briefly spent time at Tesla as VP of Autopilot Software during the transition from Autopilot hardware 1 to hardware 2, when Tesla essentially started from scratch to build an in-house software infrastructure for Autopilot.
03 Jun 2019 | Rajat Monga: TensorFlow | 01:11:07
Rajat Monga is an Engineering Director at Google, leading the TensorFlow team.
10 Jun 2019 | Gavin Miller: Adobe Research | 01:09:20
Gavin Miller is the Head of Adobe Research. For over 30 years, Adobe has empowered artists, designers, and creative minds from all professions working in the digital medium, with software such as Photoshop, Illustrator, Premiere, After Effects, InDesign, and Audition for working with images, video, and audio. Adobe Research is working to define the future evolution of these products in a way that makes the lives of creatives easier, automates tedious tasks, and frees more and more time to operate in idea space instead of pixel space. This is where the cutting-edge deep learning methods of the past decade can shine more than in perhaps any other application. Gavin is the embodiment of combining technology and creativity. Outside of Adobe Research, he writes poetry and builds robots.
17 Jun 2019 | Rosalind Picard: Affective Computing, Emotion, Privacy, and Health | 01:00:21
Rosalind Picard is a professor at MIT, director of the Affective Computing Research Group at the MIT Media Lab, and co-founder of two companies, Affectiva and Empatica. Over two decades ago she launched the field of affective computing with her book of the same name, which described the importance of emotion in artificial and natural intelligence and the vital role emotional communication plays in relationships between people in general and in human-robot interaction.
01 Jul 2019 | Jeff Hawkins: Thousand Brains Theory of Intelligence | 02:09:45
Jeff Hawkins founded the Redwood Center for Theoretical Neuroscience in 2002 and Numenta in 2005. In his 2004 book On Intelligence, and in his research before and after, he and his team have worked to reverse-engineer the neocortex and to propose artificial intelligence architectures, approaches, and ideas inspired by the human brain. These ideas include Hierarchical Temporal Memory (HTM) from 2004 and the Thousand Brains Theory of Intelligence from 2017.
10 Jul 2019 | Sean Carroll: The Nature of the Universe, Life, and Intelligence | 00:35:02
Sean Carroll is a theoretical physicist at Caltech, specializing in quantum mechanics, gravity, and cosmology. He is the author of several popular books: one on the arrow of time called From Eternity to Here, one on the Higgs boson called The Particle at the End of the Universe, and one on science and philosophy called The Big Picture: On the Origins of Life, Meaning, and the Universe Itself. He has an upcoming book on quantum mechanics, Something Deeply Hidden, that you can preorder now. Finally, and perhaps most famously, he is the host of a podcast called Mindscape that you should subscribe to and support on Patreon.
15 Jul 2019 | Kai-Fu Lee: AI Superpowers – China and Silicon Valley | 01:26:35
Kai-Fu Lee is the Chairman and CEO of Sinovation Ventures, which manages a 2 billion dollar dual-currency investment fund with a focus on developing the next generation of Chinese high-tech companies. He is the former President of Google China and the founder of what is now called Microsoft Research Asia, an institute that trained many of the AI leaders in China, including CTOs and AI execs at Baidu, Tencent, Alibaba, Lenovo, and Huawei. He was named one of the 100 most influential people in the world by TIME Magazine. He is the author of seven best-selling books in Chinese and, most recently, the New York Times best seller AI Superpowers: China, Silicon Valley, and the New World Order. This conversation is part of the Artificial Intelligence podcast. If you enjoy the podcast, please rate it 5 stars on iTunes or support it on Patreon.
22 Jul 2019 | Chris Urmson: Self-Driving Cars at Aurora, Google, CMU, and DARPA | 00:44:59
Chris Urmson was the CTO of the Google self-driving car team, a key engineer and leader behind the Carnegie Mellon autonomous vehicle entries in the DARPA Grand Challenges, and the winner of the DARPA Urban Challenge. Today he is the CEO of Aurora Innovation, an autonomous vehicle software company he started with Sterling Anderson, the former director of Tesla Autopilot, and Drew Bagnell, Uber's former autonomy and perception lead.
29 Jul 2019 | Gustav Soderstrom: Spotify | 01:47:10
Gustav Soderstrom is the Chief Research & Development Officer at Spotify, leading Product, Design, Data, Technology & Engineering teams.
01 Aug 2019 | Kevin Scott: Microsoft CTO | 00:57:55
Kevin Scott is the CTO of Microsoft. Before that, he was the Senior Vice President of Engineering and Operations at LinkedIn. And before that, he oversaw mobile ads engineering at Google.
05 Aug 2019 | George Hotz: Comma.ai, OpenPilot, and Autonomous Vehicles | 01:59:41
George Hotz is the founder of Comma.ai, a machine-learning-based vehicle automation company. He is an outspoken personality in the field of AI and technology in general. He first gained recognition for being the first person to carrier-unlock an iPhone, and since then has done quite a few interesting things at the intersection of hardware and software.
12 Aug 2019 | Paola Arlotta: Brain Development from Stem Cell to Organoid | 00:57:52
Paola Arlotta is a professor of stem cell and regenerative biology at Harvard University. She is interested in understanding the molecular laws that govern the birth, differentiation and assembly of the human brain’s cerebral cortex. She explores the complexity of the brain by studying and engineering elements of how the brain develops.
19 Aug 2019 | Keoki Jackson: Lockheed Martin | 01:13:15
Keoki Jackson is the CTO of Lockheed Martin, a company that through its long history has created some of the most incredible engineering marvels human beings have ever built, including planes that fly fast and undetected, defense systems that intercept threats that could take the lives of millions, as in the case of nuclear weapons, and spacecraft systems that venture out into space, to the moon, Mars, and beyond, with and without humans on board.
23 Aug 2019 | Pamela McCorduck: Machines Who Think and the Early Days of AI | 01:00:17
Pamela McCorduck is an author who has written on the history and philosophical significance of artificial intelligence, the future of engineering, and the role of women in technology. Her books include Machines Who Think in 1979, The Fifth Generation in 1983 (with Ed Feigenbaum, who is considered the father of expert systems), The Edge of Chaos, The Futures of Women, and more. Through her literary work, she has spent a lot of time with the seminal figures of artificial intelligence, including the founding fathers of AI from the 1956 Dartmouth summer workshop, where the field was launched.
27 Aug 2019 | Jeremy Howard: fast.ai Deep Learning Courses and Research | 01:44:17
Jeremy Howard is the founder of fast.ai, a research institute dedicated to making deep learning more accessible. He is also a Distinguished Research Scientist at the University of San Francisco, a former president of Kaggle as well as a top-ranking competitor there, and, in general, a successful entrepreneur, educator, researcher, and inspiring personality in the AI community.
31 Aug 2019 | Yann LeCun: Deep Learning, Convolutional Neural Networks, and Self-Supervised Learning | 01:16:07
Yann LeCun is one of the fathers of deep learning, the recent revolution in AI that has captivated the world with the possibility of what machines can learn from data. He is a professor at New York University, Vice President & Chief AI Scientist at Facebook, and a co-recipient of the Turing Award for his work on deep learning. He is probably best known as a founding father of convolutional neural networks, in particular for their early application to optical character recognition.
08 Sep 2019 | Vijay Kumar: Flying Robots | 00:56:57
Vijay Kumar is one of the top roboticists in the world: a professor at the University of Pennsylvania, Dean of Penn Engineering, and former director of the GRASP Lab, the General Robotics, Automation, Sensing and Perception Laboratory, established at Penn back in 1979, 40 years ago. Vijay is perhaps best known for his work on multi-robot systems (robot swarms) and micro aerial vehicles: robots that elegantly cooperate in flight under all the uncertainty and challenges that real-world conditions present.
14 Sep 2019 | François Chollet: Keras, Deep Learning, and the Progress of AI | 00:56:57
François Chollet is the creator of Keras, an open-source deep learning library designed to enable fast, user-friendly experimentation with deep neural networks. It serves as an interface to several deep learning libraries, the most popular of which is TensorFlow, and it was integrated into the TensorFlow main codebase a while back. Aside from creating an exceptionally useful and popular library, François is also a world-class AI researcher and software engineer at Google, and he is definitely an outspoken, if not controversial, personality in the AI world, especially in the realm of ideas around the future of artificial intelligence.
19 Sep 2019 | Colin Angle: iRobot | 00:37:51
Colin Angle is the CEO and co-founder of iRobot, a robotics company that for 29 years has been creating robots that operate successfully in the real world, not as a demo or on a scale of dozens, but on a scale of thousands and millions. As of this year, iRobot has sold more than 25 million robots to consumers, including the Roomba vacuum cleaning robot, the Braava floor mopping robot, and soon the Terra lawn mowing robot. To me, 25 million robots successfully operating autonomously in people's homes is an incredible accomplishment of science, engineering, logistics, and all kinds of entrepreneurial innovation.
23 Sep 2019 | Regina Barzilay: Deep Learning for Cancer Diagnosis and Treatment | 01:17:38
Regina Barzilay is a professor at MIT and a world-class researcher in natural language processing and applications of deep learning to chemistry and oncology, including the use of deep learning for early diagnosis, prevention, and treatment of cancer. She has also been recognized for her teaching of several successful AI-related courses at MIT, including the popular Introduction to Machine Learning course.
26 Sep 2019 | Leonard Susskind: Quantum Mechanics, String Theory, and Black Holes | 00:57:40
Leonard Susskind is a professor of theoretical physics at Stanford University and founding director of the Stanford Institute for Theoretical Physics. He is widely regarded as one of the fathers of string theory and, in general, as one of the greatest physicists of our time, both as a researcher and an educator. Here's the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode):
00:00 - Introduction
01:02 - Richard Feynman
02:09 - Visualization and intuition
06:45 - Ego in Science
09:27 - Academia
11:18 - Developing ideas
12:12 - Quantum computers
21:37 - Universe as an information processing system
26:35 - Machine learning
29:47 - Predicting the future
30:48 - String theory
37:03 - Free will
39:26 - Arrow of time
46:39 - Universe as a computer
49:45 - Big bang
50:50 - Infinity
51:35 - First image of a black hole
54:08 - Questions within the reach of science
55:55 - Questions out of reach of science | |||
30 Sep 2019 | Peter Norvig: Artificial Intelligence: A Modern Approach | 01:03:22 | |
Peter Norvig is a research director at Google and the co-author, with Stuart Russell, of the book Artificial Intelligence: A Modern Approach, which educated and inspired a whole generation of researchers, including myself, to get into the field. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on iTunes or support it on Patreon. Here's the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode):
00:00 - Introduction
00:37 - Artificial Intelligence: A Modern Approach
09:11 - Covering the entire field of AI
15:42 - Expert systems and knowledge representation
18:31 - Explainable AI
23:15 - Trust
25:47 - Education - Intro to AI - MOOC
32:43 - Learning to program in 10 years
37:12 - Changing nature of mastery
40:01 - Code review
41:17 - How have you changed as a programmer
43:05 - LISP
47:41 - Python
48:32 - Early days of Google Search
53:24 - What does it take to build human-level intelligence
55:14 - Her
57:00 - Test of intelligence
58:41 - Future threats from AI
1:00:58 - Exciting open problems in AI | |||
03 Oct 2019 | Gary Marcus: Toward a Hybrid of Deep Learning and Symbolic AI | 01:25:09 | |
Gary Marcus is a professor emeritus at NYU and the founder of Robust.AI and Geometric Intelligence, the latter a machine learning company acquired by Uber in 2016. He is the author of several books on natural and artificial intelligence, including his new book Rebooting AI: Building Machines We Can Trust. Gary has been a critical voice highlighting the limits of deep learning and discussing the challenges before the AI community that must be solved in order to achieve artificial general intelligence. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on iTunes or support it on Patreon. Here's the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode):
00:00 - Introduction
01:37 - Singularity
05:48 - Physical and psychological knowledge
10:52 - Chess
14:32 - Language vs physical world
17:37 - What does AI look like 100 years from now
21:28 - Flaws of the human mind
25:27 - General intelligence
28:25 - Limits of deep learning
44:41 - Expert systems and symbol manipulation
48:37 - Knowledge representation
52:52 - Increasing compute power
56:27 - How human children learn
57:23 - Innate knowledge and learned knowledge
1:06:43 - Good test of intelligence
1:12:32 - Deep learning and symbol manipulation
1:23:35 - Guitar | |||
11 Oct 2019 | David Ferrucci: IBM Watson, Jeopardy & Deep Conversations with AI | 02:24:33 | |
David Ferrucci led the team that built Watson, the IBM question-answering system that beat the top humans in the world at the game of Jeopardy. He is also the Founder, CEO, and Chief Scientist of Elemental Cognition, a company working to engineer AI systems that understand the world the way people do. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on iTunes or support it on Patreon. Here's the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode):
00:00 - Introduction
01:06 - Biological vs computer systems
08:03 - What is intelligence?
31:49 - Knowledge frameworks
52:02 - IBM Watson winning Jeopardy
1:24:21 - Watson vs human difference in approach
1:27:52 - Q&A vs dialogue
1:35:22 - Humor
1:41:33 - Good test of intelligence
1:46:36 - AlphaZero, AlphaStar accomplishments
1:51:29 - Explainability, induction, deduction in medical diagnosis
1:59:34 - Grand challenges
2:04:03 - Consciousness
2:08:26 - Timeline for AGI
2:13:55 - Embodied AI
2:17:07 - Love and companionship
2:18:06 - Concerns about AI
2:21:56 - Discussion with AGI | |||
22 Oct 2019 | Michio Kaku: Future of Humans, Aliens, Space Travel & Physics | 01:01:10 | |
Michio Kaku is a theoretical physicist, futurist, and professor at the City College of New York. He is the author of many fascinating books on the nature of our reality and the future of our civilization. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. Here's the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode):
00:00 - Introduction
01:14 - Contact with Aliens in the 21st century
06:36 - Multiverse and Nirvana
09:46 - String Theory
11:07 - Einstein's God
15:01 - Would aliens hurt us?
17:34 - What would aliens look like?
22:13 - Brain-machine interfaces
27:35 - Existential risk from AI
30:22 - Digital immortality
34:02 - Biological immortality
37:42 - Does mortality give meaning?
43:42 - String theory
47:16 - Universe as a computer and a simulation
53:16 - First human on Mars | |||
27 Oct 2019 | Garry Kasparov: Chess, Deep Blue, AI, and Putin | 00:55:34 | |
Garry Kasparov is considered by many to be the greatest chess player of all time. From 1986 until his retirement in 2005, he dominated the chess world, ranking world number 1 for most of those 19 years. While he has played many historic matches against human chess players, in the long arc of history he may be remembered most for his matches against a machine, IBM's Deep Blue. His initial victories over and eventual loss to Deep Blue captivated the world's imagination about what role Artificial Intelligence systems may play in our civilization's future. That excitement inspired an entire generation of AI researchers, including myself, to get into the field. Garry is also a pro-democracy political thinker and leader, a fearless human-rights activist, and the author of several books, including How Life Imitates Chess, a book on strategy and decision-making; Winter Is Coming, a book articulating his opposition to the Putin regime; and Deep Thinking, a book on the role of both artificial intelligence and human intelligence in defining our future. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. Here's the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode):
00:00 - Introduction
01:33 - Love of winning and hatred of losing
04:54 - Psychological elements
09:03 - Favorite games
16:48 - Magnus Carlsen
23:06 - IBM Deep Blue
37:39 - Morality
38:59 - Autonomous vehicles
42:03 - Fall of the Soviet Union
45:50 - Putin
52:25 - Life | |||
01 Nov 2019 | Sean Carroll: Quantum Mechanics and the Many-Worlds Interpretation | 01:30:06 | |
Sean Carroll is a theoretical physicist at Caltech and Santa Fe Institute specializing in quantum mechanics, arrow of time, cosmology, and gravitation. He is the author of Something Deeply Hidden and several popular books and he is the host of a great podcast called Mindscape. This is the second time Sean has been on the podcast. You can watch the first time on YouTube or listen to the first time on its episode page. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. Here's the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode):
00:00 - Introduction
01:23 - Capacity of human mind to understand physics
10:49 - Perception vs reality
12:29 - Conservation of momentum
17:20 - Difference between math and physics
20:10 - Why is our world so compressible
22:53 - What would Newton think of quantum mechanics
25:44 - What is quantum mechanics?
27:54 - What is an atom?
30:34 - What is the wave function?
32:30 - What is quantum entanglement?
35:19 - What is Hilbert space?
37:32 - What is entropy?
39:31 - Infinity
42:43 - Many-worlds interpretation of quantum mechanics
1:01:13 - Quantum gravity and the emergence of spacetime
1:08:34 - Our branch of reality in many-worlds interpretation
1:10:40 - Time travel
1:12:54 - Arrow of time
1:16:18 - What is fundamental in physics
1:16:58 - Quantum computers
1:17:42 - Experimental validation of many-worlds and emergent spacetime
1:19:53 - Quantum mechanics and the human mind
1:21:51 - Mindscape podcast | |||
07 Nov 2019 | Bjarne Stroustrup: C++ | 01:47:19 | |
Bjarne Stroustrup is the creator of C++, a programming language that after 40 years is still one of the most popular and powerful languages in the world. Its focus on fast, stable, robust code underlies many of the biggest systems in the world that we have come to rely on as a society. If you're watching this on YouTube, many of the critical back-end components of YouTube are written in C++. The same goes for Google, Facebook, Amazon, Twitter, most Microsoft applications, Adobe applications, most database systems, and most physical systems that operate in the real world, like cars, robots, and rockets that launch us into space and one day will land us on Mars.
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. Here's the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode):
00:00 - Introduction
01:40 - First program
02:18 - Journey to C++
16:45 - Learning multiple languages
23:20 - Javascript
25:08 - Efficiency and reliability in C++
31:53 - What does good code look like?
36:45 - Static checkers
41:16 - Zero-overhead principle in C++
50:00 - Different implementations of C++
54:46 - Key features of C++
1:08:02 - C++ Concepts
1:18:06 - C++ Standards Process
1:28:05 - Constructors and destructors
1:31:52 - Unified theory of programming
1:44:20 - Proudest moment | |||
12 Nov 2019 | Elon Musk: Neuralink, AI, Autopilot, and the Pale Blue Dot | 00:36:22 | |
Elon Musk is the CEO of Tesla, SpaceX, Neuralink, and a co-founder of several other companies. This is the second time Elon has been on the podcast. You can watch the first time on YouTube or listen to the first time on its episode page. You can read the transcript (PDF) here. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. Here's the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode):
00:00 - Introduction
01:57 - Consciousness
05:58 - Regulation of AI Safety
09:39 - Neuralink - understanding the human brain
11:53 - Neuralink - expanding the capacity of the human mind
17:51 - Neuralink - future challenges, solutions, and impact
24:59 - Smart Summon
27:18 - Tesla Autopilot and Full Self-Driving
31:16 - Carl Sagan and the Pale Blue Dot | |||
19 Nov 2019 | Michael Kearns: Algorithmic Fairness, Bias, Privacy, and Ethics in Machine Learning | 01:49:01 | |
Michael Kearns is a professor at the University of Pennsylvania and a co-author of the new book The Ethical Algorithm, which is the focus of much of our conversation, including algorithmic fairness, bias, privacy, and ethics in general. But that is just one of many fields in which Michael is a world-class researcher; we also quickly touch on several others, including learning theory (the theoretical foundations of machine learning), game theory, algorithmic trading, quantitative finance, computational social science, and more.
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. This episode is sponsored by Pessimists Archive podcast. Here's the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode):
00:00 - Introduction
02:45 - Influence from literature and journalism
07:39 - Are most people good?
13:05 - Ethical algorithm
24:28 - Algorithmic fairness of groups vs individuals
33:36 - Fairness tradeoffs
46:29 - Facebook, social networks, and algorithmic ethics
58:04 - Machine learning
59:19 - Algorithm that determines what is fair
1:01:25 - Computer scientists should think about ethics
1:05:59 - Algorithmic privacy
1:11:50 - Differential privacy
1:19:10 - Privacy by misinformation
1:22:31 - Privacy of data in society
1:27:49 - Game theory
1:29:40 - Nash equilibrium
1:30:35 - Machine learning and game theory
1:34:52 - Mutual assured destruction
1:36:56 - Algorithmic trading
1:44:09 - Pivotal moment in graduate school | |||
22 Nov 2019 | Dava Newman: Space Exploration, Space Suits, and Life on Mars | 00:39:45 | |
Dava Newman is the Apollo Program Professor of AeroAstro at MIT, the former Deputy Administrator of NASA, and a principal investigator on four spaceflight missions. Her research interests are in aerospace biomedical engineering, investigating human performance in varying gravity environments. She has developed a space activity suit, the BioSuit, which would provide pressure through compression directly on the skin via the suit's textile weave, patterning, and materials rather than with pressurized gas.
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon.
This episode is presented by Cash App. Download it, use code LexPodcast. You get $10 and $10 is donated to FIRST, one of my favorite nonprofit organizations that inspires young minds through robotics and STEM education.
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
00:00 - Introduction
03:11 - Circumnavigating the globe by boat
05:11 - Exploration
07:17 - Life on Mars
11:07 - Intelligent life in the universe
12:25 - Advanced propulsion technology
13:32 - The Moon and NASA's Artemis program
19:17 - SpaceX
21:45 - Science on a CubeSat
23:45 - Reusable rockets
25:23 - Spacesuit of the future
32:01 - AI in Space
35:31 - Interplanetary species
36:57 - Future of space exploration | |||
25 Nov 2019 | Gilbert Strang: Linear Algebra, Deep Learning, Teaching, and MIT OpenCourseWare | 00:50:16 | |
Gilbert Strang is a professor of mathematics at MIT and perhaps one of the most famous and impactful teachers of math in the world. His MIT OpenCourseWare lectures on linear algebra have been viewed millions of times.
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon.
This episode is presented by Cash App. Download it, use code LexPodcast.
And it is supported by ZipRecruiter. Try it: http://ziprecruiter.com/lexpod
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
00:00 - Introduction
03:45 - Math rockstar
05:10 - MIT OpenCourseWare
07:29 - Four Fundamental Subspaces of Linear Algebra
13:11 - Linear Algebra vs Calculus
15:03 - Singular value decomposition
19:47 - Why people like math
23:38 - Teaching by example
25:04 - Andrew Yang
26:46 - Society for Industrial and Applied Mathematics
29:21 - Deep learning
37:28 - Theory vs application
38:54 - Open problems in mathematics
39:00 - Linear algebra as a subfield of mathematics
41:52 - Favorite matrix
46:19 - Advice for students on their journey through math
47:37 - Looking back | |||
29 Nov 2019 | Noam Chomsky: Language, Cognition, and Deep Learning | 00:36:10 | |
Noam Chomsky is one of the greatest minds of our time and is one of the most cited scholars in history. He is a linguist, philosopher, cognitive scientist, historian, social critic, and political activist. He has spent over 60 years at MIT and recently also joined the University of Arizona.
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon.
This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast".
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
00:00 - Introduction
03:59 - Common language with an alien species
05:46 - Structure of language
07:18 - Roots of language in our brain
08:51 - Language and thought
09:44 - The limit of human cognition
16:48 - Neuralink
19:32 - Deepest property of language
22:13 - Limits of deep learning
28:01 - Good and evil
29:52 - Memorable experiences
33:29 - Mortality
34:23 - Meaning of life | |||
02 Dec 2019 | Ray Dalio: Principles, the Economic Machine, Artificial Intelligence & the Arc of Life | 01:30:39 | |
Ray Dalio is the founder, Co-Chairman and Co-Chief Investment Officer of Bridgewater Associates, one of the world's largest and most successful investment firms that is famous for the principles of radical truth and transparency that underlie its culture. Ray is one of the wealthiest people in the world, with ideas that extend far beyond the specifics of how he made that wealth. His ideas, applicable to everyone, are brilliantly summarized in his book Principles.
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon.
This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast".
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
00:00 - Introduction
02:56 - Doing something that's never been done before
08:39 - Shapers
13:28 - A Players
15:09 - Confidence and disagreement
17:10 - Don't confuse delusion with not knowing
24:38 - Idea meritocracy
27:39 - Is credit good for society?
32:59 - What is money?
37:13 - Bitcoin and digital currency
41:01 - The economic machine is amazing
46:24 - Principle for using AI
58:55 - Human irrationality
1:01:31 - Call for adventure at the edge of principles
1:03:26 - The line between madness and genius
1:04:30 - Automation
1:07:28 - American dream
1:14:02 - Can money buy happiness?
1:19:48 - Work-life balance and the arc of life
1:28:01 - Meaning of life | |||
05 Dec 2019 | Whitney Cummings: Comedy, Robotics, Neurology, and Love | 01:17:18 | |
Whitney Cummings is a stand-up comedian, actor, producer, writer, director, and the host of a new podcast called Good for You. Her most recent Netflix special, "Can I Touch It?", features in part a robot she affectionately named Bearclaw, designed to be a visual replica of Whitney. It's exciting for me to see one of my favorite comedians explore the social aspects of robotics and AI in our society. She also has some fascinating ideas about human behavior, psychology, and neurology, some of which she explores in her book "I'm Fine...And Other Lies."
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon.
This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast".
The episode is also supported by ZipRecruiter. Try it: http://ziprecruiter.com/lexpod
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
00:00 - Introduction
03:51 - Eye contact
04:42 - Robot gender
08:49 - Whitney's robot (Bearclaw)
12:17 - Human reaction to robots
14:09 - Fear of robots
25:15 - Surveillance
29:35 - Animals
35:01 - Compassion from people who own robots
37:55 - Passion
44:57 - Neurology
56:38 - Social media
1:04:35 - Love
1:13:40 - Mortality | |||
11 Dec 2019 | Judea Pearl: Causal Reasoning, Counterfactuals, Bayesian Networks, and the Path to AGI | 01:23:21 | |
Judea Pearl is a professor at UCLA and a winner of the Turing Award, generally recognized as the Nobel Prize of computing. He is one of the seminal figures in the fields of artificial intelligence, computer science, and statistics. He has developed and championed probabilistic approaches to AI, including Bayesian networks, and profound ideas in causality in general. These ideas are important not just for AI, but also to our understanding and practice of science. But in the field of AI, the idea of causality, cause and effect, to many lies at the core of what is currently missing and what must be developed in order to build truly intelligent systems. For this reason, and many others, his work is worth returning to often.
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon.
This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast".
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
00:00 - Introduction
03:18 - Descartes and analytic geometry
06:25 - Good way to teach math
07:10 - From math to engineering
09:14 - Does God play dice?
10:47 - Free will
11:59 - Probability
22:21 - Machine learning
23:13 - Causal Networks
27:48 - Intelligent systems that reason with causation
29:29 - Do(x) operator
36:57 - Counterfactuals
44:12 - Reasoning by Metaphor
51:15 - Machine learning and causal reasoning
53:28 - Temporal aspect of causation
56:21 - Machine learning (continued)
59:15 - Human-level artificial intelligence
1:04:08 - Consciousness
1:04:31 - Concerns about AGI
1:09:53 - Religion and robotics
1:12:07 - Daniel Pearl
1:19:09 - Advice for students
1:21:00 - Legacy | |||
14 Dec 2019 | Rohit Prasad: Amazon Alexa and Conversational AI | 01:46:15 | |
Rohit Prasad is the vice president and head scientist of Amazon Alexa and one of its original creators.
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon.
This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast".
The episode is also supported by ZipRecruiter. Try it: http://ziprecruiter.com/lexpod
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
00:00 - Introduction
04:34 - Her
06:31 - Human-like aspects of smart assistants
08:39 - Test of intelligence
13:04 - Alexa prize
21:35 - What does it take to win the Alexa prize?
27:24 - Embodiment and the essence of Alexa
34:35 - Personality
36:23 - Personalization
38:49 - Alexa's backstory from her perspective
40:35 - Trust in Human-AI relations
44:00 - Privacy
47:45 - Is Alexa listening?
53:51 - How Alexa started
54:51 - Solving far-field speech recognition and intent understanding
1:11:51 - Alexa main categories of skills
1:13:19 - Conversation intent modeling
1:17:47 - Alexa memory and long-term learning
1:22:50 - Making Alexa sound more natural
1:27:16 - Open problems for Alexa and conversational AI
1:29:26 - Emotion recognition from audio and video
1:30:53 - Deep learning and reasoning
1:36:26 - Future of Alexa
1:41:47 - The big picture of conversational AI | |||
17 Dec 2019 | Michael Stevens: Vsauce | 00:58:55 | |
Michael Stevens is the creator of Vsauce, one of the most popular educational YouTube channels in the world, with over 15 million subscribers and over 1.7 billion views. His videos often ask and answer questions that are both profound and entertaining, spanning topics from physics to psychology. As part of his channel he created three seasons of Mind Field, a series that explored human behavior.
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon.
This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast".
Episode links:
Vsauce YouTube: https://www.youtube.com/Vsauce
Vsauce Twitter: https://twitter.com/tweetsauce
Vsauce Instagram: https://www.instagram.com/electricpants/
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
00:00 - Introduction
02:26 - Psychology
03:59 - Consciousness
06:55 - Free will
07:55 - Perception vs reality
09:59 - Simulation
11:32 - Science
16:24 - Flat earth
27:04 - Artificial Intelligence
30:14 - Existential threats
38:03 - Elon Musk and the responsibility of having a large following
43:05 - YouTube algorithm
52:41 - Mortality and the meaning of life | |||
21 Dec 2019 | Sebastian Thrun: Flying Cars, Autonomous Vehicles, and Education | 01:19:01 | |
Sebastian Thrun is one of the greatest roboticists, computer scientists, and educators of our time. He led the development of the autonomous vehicles at Stanford that won the 2005 DARPA Grand Challenge and placed second in the 2007 DARPA Urban Challenge. He then led the Google self-driving car program, which launched the self-driving revolution. He taught the popular Stanford course on Artificial Intelligence in 2011, which was one of the first MOOCs. That experience led him to co-found Udacity, an online education platform. He is also the CEO of Kitty Hawk, a company building flying cars, or more technically eVTOLs, which stands for electric vertical take-off and landing aircraft.
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon.
This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast".
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
00:00 - Introduction
03:24 - The Matrix
04:39 - Predicting the future 30+ years ago
06:14 - Machine learning and expert systems
09:18 - How to pick what ideas to work on
11:27 - DARPA Grand Challenges
17:33 - What does it take to be a good leader?
23:44 - Autonomous vehicles
38:42 - Waymo and Tesla Autopilot
42:11 - Self-Driving Car Nanodegree
47:29 - Machine learning
51:10 - AI in medical applications
54:06 - AI-related job loss and education
57:51 - Teaching soft skills
1:00:13 - Kitty Hawk and flying cars
1:08:22 - Love and AI
1:13:12 - Life | |||
25 Dec 2019 | Jim Gates: Supersymmetry, String Theory and Proving Einstein Right | 01:35:28 | |
Jim Gates (S James Gates Jr.) is a theoretical physicist and professor at Brown University working on supersymmetry, supergravity, and superstring theory. He served on former President Obama's Council of Advisors on Science and Technology. He is the co-author of a new book titled Proving Einstein Right about the scientists who set out to prove Einstein's theory of relativity.
Episode Links:
Proving Einstein Right (book)
00:00 - Introduction
03:13 - Will we ever venture outside our solar system?
05:16 - When will the first human step foot on Mars?
11:14 - Are we alone in the universe?
13:55 - Most beautiful idea in physics
16:29 - Can the mind be digitized?
21:15 - Does the possibility of superintelligence excite you?
22:25 - Role of dreaming in creativity and mathematical thinking
30:51 - Existential threats
31:46 - Basic particles underlying our universe
41:28 - What is supersymmetry?
52:19 - Adinkra symbols
1:00:24 - String theory
1:07:02 - Proving Einstein right and experimental validation of general relativity
1:19:07 - Richard Feynman
1:22:01 - Barack Obama's Council of Advisors on Science and Technology
1:30:20 - Exciting problems in physics that are just within our reach
1:31:26 - Mortality | |||
28 Dec 2019 | Melanie Mitchell: Concepts, Analogies, Common Sense & Future of AI | 01:53:07 | |
Melanie Mitchell is a professor of computer science at Portland State University and an external professor at the Santa Fe Institute. She has worked on and written about artificial intelligence from fascinating perspectives, including adaptive complex systems, genetic algorithms, and the Copycat cognitive architecture, which places the process of analogy-making at the core of human cognition. From her doctoral work with her advisors Douglas Hofstadter and John Holland to today, she has contributed many important ideas to the field of AI, including her recent book, simply called Artificial Intelligence: A Guide for Thinking Humans.
Episode Links:
AI: A Guide for Thinking Humans (book)
00:00 - Introduction
02:33 - The term "artificial intelligence"
06:30 - Line between weak and strong AI
12:46 - Why have people dreamed of creating AI?
15:24 - Complex systems and intelligence
18:38 - Why are we bad at predicting the future with regard to AI?
22:05 - Are fundamental breakthroughs in AI needed?
25:13 - Different AI communities
31:28 - Copycat cognitive architecture
36:51 - Concepts and analogies
55:33 - Deep learning and the formation of concepts
1:09:07 - Autonomous vehicles
1:20:21 - Embodied AI and emotion
1:25:01 - Fear of superintelligent AI
1:36:14 - Good test for intelligence
1:38:09 - What is complexity?
1:43:09 - Santa Fe Institute
1:47:34 - Douglas Hofstadter
1:49:42 - Proudest moment | |||
30 Dec 2019 | Donald Knuth: Algorithms, TeX, Life, and The Art of Computer Programming | 01:46:13 | |
Donald Knuth is one of the greatest and most impactful computer scientists and mathematicians ever. He received the 1974 Turing Award, considered the Nobel Prize of computing. He is the author of the multi-volume magnum opus The Art of Computer Programming. He made several key contributions to the rigorous analysis of the computational complexity of algorithms and popularized the asymptotic notation that we all affectionately know as big-O notation. He also created the TeX typesetting system, which most computer scientists, mathematicians, physicists, and engineers use to write technical papers and make them look beautiful.
Episode Links:
The Art of Computer Programming (book set)
00:00 - Introduction
03:45 - IBM 650
07:51 - Geeks
12:29 - Alan Turing
14:26 - My life is a convex combination of English and mathematics
24:00 - Japanese arrow puzzle example
25:42 - Neural networks and machine learning
27:59 - The Art of Computer Programming
36:49 - Combinatorics
39:16 - Writing process
42:10 - Are some days harder than others?
48:36 - What's the "Art" in The Art of Computer Programming?
50:21 - Binary (boolean) decision diagram
55:06 - Big-O notation
58:02 - P=NP
1:10:05 - Artificial intelligence
1:13:26 - Ant colonies and human cognition
1:17:11 - God and the Bible
1:24:28 - Reflection on life
1:28:25 - Facing mortality
1:33:40 - TeX and beautiful typography
1:39:23 - How much of the world do we understand?
1:44:17 - Question for God | |||
03 Jan 2020 | Stephen Kotkin: Stalin, Putin, and the Nature of Power | 01:37:49 | |
Stephen Kotkin is a professor of history at Princeton University and one of the great historians of our time, specializing in Russian and Soviet history. He has written many books on Stalin and the Soviet Union, including the first two volumes of a three-volume work on Stalin, and he is currently working on volume three.
Episode Links:
Stalin (book, vol 1): https://amzn.to/2FjdLF2
Stalin (book, vol 2): https://amzn.to/2tqyjc3
00:00 - Introduction
03:10 - Do all human beings crave power?
11:29 - Russian people and authoritarian power
15:06 - Putin and the Russian people
23:23 - Corruption in Russia
31:30 - Russia's future
41:07 - Individuals and institutions
44:42 - Stalin's rise to power
1:05:20 - What is the ideal political system?
1:21:10 - Questions for Putin
1:29:41 - Questions for Stalin
1:33:25 - Will there always be evil in the world? | |||
07 Jan 2020 | Grant Sanderson: 3Blue1Brown and the Beauty of Mathematics | 01:03:12 | |
Grant Sanderson is a math educator and the creator of 3Blue1Brown, a popular YouTube channel that uses programmatically animated visualizations to explain concepts in linear algebra, calculus, and other fields of mathematics.
00:00 - Introduction
01:56 - What kind of math would aliens have?
03:48 - Euler's identity and the least favorite piece of notation
10:31 - Is math discovered or invented?
14:30 - Difference between physics and math
17:24 - Why is reality compressible into simple equations?
21:44 - Are we living in a simulation?
26:27 - Infinity and abstractions
35:48 - Most beautiful idea in mathematics
41:32 - Favorite video to create
45:04 - Video creation process
50:04 - Euler identity
51:47 - Mortality and meaning
55:16 - How do you know when a video is done?
56:18 - What is the best way to learn math for beginners?
59:17 - Happy moment | |||
14 Jan 2020 | Daniel Kahneman: Thinking Fast and Slow, Deep Learning, and AI | 01:19:09 | |
Daniel Kahneman is the winner of the Nobel Prize in economics for his integration of economic science with the psychology of human behavior, judgment, and decision-making. He is the author of the popular book "Thinking, Fast and Slow," which summarizes in an accessible way his decades of research, often in collaboration with Amos Tversky, on cognitive biases, prospect theory, and happiness. The central thesis of this work is a dichotomy between two modes of thought: "System 1" is fast, instinctive, and emotional; "System 2" is slower, more deliberative, and more logical. The book delineates the cognitive biases associated with each type of thinking.
00:00 - Introduction
02:36 - Lessons about human behavior from WWII
08:19 - System 1 and system 2: thinking fast and slow
15:17 - Deep learning
30:01 - How hard is autonomous driving?
35:59 - Explainability in AI and humans
40:08 - Experiencing self and the remembering self
51:58 - Man's Search for Meaning by Viktor Frankl
54:46 - How much of human behavior can we study in the lab?
57:57 - Collaboration
1:01:09 - Replication crisis in psychology
1:09:28 - Disagreements and controversies in psychology
1:13:01 - Test for AGI
1:16:17 - Meaning of life | |||
17 Jan 2020 | Ayanna Howard: Human-Robot Interaction and Ethics of Safety-Critical Systems | 01:40:24 | |
Ayanna Howard is a roboticist and professor at Georgia Tech and director of the Human-Automation Systems Lab, with research interests in human-robot interaction, assistive robots in the home, therapy gaming apps, and remote robotic exploration of extreme environments.
00:00 - Introduction
02:09 - Favorite robot
05:05 - Autonomous vehicles
08:43 - Tesla Autopilot
20:03 - Ethical responsibility of safety-critical algorithms
28:11 - Bias in robotics
38:20 - AI in politics and law
40:35 - Solutions to bias in algorithms
47:44 - HAL 9000
49:57 - Memories from working at NASA
51:53 - SpotMini and Bionic Woman
54:27 - Future of robots in space
57:11 - Human-robot interaction
1:02:38 - Trust
1:09:26 - AI in education
1:15:06 - Andrew Yang, automation, and job loss
1:17:17 - Love, AI, and the movie Her
1:25:01 - Why do so many robotics companies fail?
1:32:22 - Fear of robots
1:34:17 - Existential threats of AI
1:35:57 - Matrix
1:37:37 - Hang out for a day with a robot | |||
21 Jan 2020 | Paul Krugman: Economics of Innovation, Automation, Safety Nets & Universal Basic Income | 01:03:39 | |
Paul Krugman is a Nobel Prize winner in economics, a professor at CUNY, and a columnist at the New York Times. His academic work centers on international economics, economic geography, liquidity traps, and currency crises.
00:00 - Introduction
03:44 - Utopia from an economics perspective
04:51 - Competition
06:33 - Well-informed citizen
07:52 - Disagreements in economics
09:57 - Metrics of outcomes
13:00 - Safety nets
15:54 - Invisible hand of the market
21:43 - Regulation of tech sector
22:48 - Automation
25:51 - Metric of productivity
30:35 - Interaction of the economy and politics
33:48 - Universal basic income
36:40 - Divisiveness of political discourse
42:53 - Economic theories
52:25 - Starting a system on Mars from scratch
55:11 - International trade
59:08 - Writing in a time of radicalization and Twitter mobs | |||
25 Jan 2020 | Cristos Goodrow: YouTube Algorithm | 01:31:19 | |
Cristos Goodrow is VP of Engineering at Google and head of Search and Discovery at YouTube (aka YouTube Algorithm).
00:00 - Introduction
03:26 - Life-long trajectory through YouTube
07:30 - Discovering new ideas on YouTube
13:33 - Managing healthy conversation
23:02 - YouTube Algorithm
38:00 - Analyzing the content of video itself
44:38 - Clickbait thumbnails and titles
47:50 - Feeling like I'm helping the YouTube algorithm get smarter
50:14 - Personalization
51:44 - What does success look like for the algorithm?
54:32 - Effect of YouTube on society
57:24 - Creators
59:33 - Burnout
1:03:27 - YouTube algorithm: heuristics, machine learning, human behavior
1:08:36 - How to make a viral video?
1:10:27 - Veritasium: Why Are 96,000,000 Black Balls on This Reservoir?
1:13:20 - Making clips from long-form podcasts
1:18:07 - Moment-by-moment signal of viewer interest
1:20:04 - Why is video understanding such a difficult AI problem?
1:21:54 - Self-supervised learning on video
1:25:44 - What does YouTube look like 10, 20, 30 years from now? | |||
29 Jan 2020 | David Chalmers: The Hard Problem of Consciousness | 01:39:06 | |
David Chalmers is a philosopher and cognitive scientist specializing in philosophy of mind, philosophy of language, and consciousness. He is perhaps best known for formulating the hard problem of consciousness which could be stated as "why does the feeling which accompanies awareness of sensory information exist at all?"
00:00 - Introduction
02:23 - Nature of reality: Are we living in a simulation?
19:19 - Consciousness in virtual reality
27:46 - Music-color synesthesia
31:40 - What is consciousness?
51:25 - Consciousness and the meaning of life
57:33 - Philosophical zombies
1:01:38 - Creating the illusion of consciousness
1:07:03 - Conversation with a clone
1:11:35 - Free will
1:16:35 - Meta-problem of consciousness
1:18:40 - Is reality an illusion?
1:20:53 - Descartes' evil demon
1:23:20 - Does AGI need consciousness?
1:33:47 - Exciting future
1:35:32 - Immortality | |||
05 Feb 2020 | Jim Keller: Moore’s Law, Microprocessors, Abstractions, and First Principles | 01:35:11 | |
Jim Keller is a legendary microprocessor engineer who has worked at AMD, Apple, Tesla, and now Intel. He is known for his work on the AMD K7, K8, K12, and Zen microarchitectures and the Apple A4 and A5 processors, and he is a co-author of the specifications for the x86-64 instruction set and the HyperTransport interconnect.
00:00 - Introduction
02:12 - Difference between a computer and a human brain
03:43 - Computer abstraction layers and parallelism
17:53 - If you run a program multiple times, do you always get the same answer?
20:43 - Building computers and teams of people
22:41 - Start from scratch every 5 years
30:05 - Moore's law is not dead
55:47 - Is superintelligence the next layer of abstraction?
1:00:02 - Is the universe a computer?
1:03:00 - Ray Kurzweil and exponential improvement in technology
1:04:33 - Elon Musk and Tesla Autopilot
1:20:51 - Lessons from working with Elon Musk
1:28:33 - Existential threats from AI
1:32:38 - Happiness and the meaning of life | |||
14 Feb 2020 | Vladimir Vapnik: Predicates, Invariants, and the Essence of Intelligence | 01:45:23 | |
Vladimir Vapnik is the co-inventor of support vector machines, support vector clustering, VC theory, and many foundational ideas in statistical learning. Born in the Soviet Union, he worked at the Institute of Control Sciences in Moscow, then moved to the US, where he worked at AT&T, NEC Labs, and Facebook AI Research; he is now a professor at Columbia University. His work has been cited over 200,000 times.
00:00 - Introduction
02:55 - Alan Turing: science and engineering of intelligence
09:09 - What is a predicate?
14:22 - Plato's world of ideas and world of things
21:06 - Strong and weak convergence
28:37 - Deep learning and the essence of intelligence
50:36 - Symbolic AI and logic-based systems
54:31 - How hard is 2D image understanding?
1:00:23 - Data
1:06:39 - Language
1:14:54 - Beautiful idea in statistical theory of learning
1:19:28 - Intelligence and heuristics
1:22:23 - Reasoning
1:25:11 - Role of philosophy in learning theory
1:31:40 - Music (speaking in Russian)
1:35:08 - Mortality | |||
17 Feb 2020 | #72 – Scott Aaronson: Quantum Computing | 01:34:09 | |
Scott Aaronson is a professor at UT Austin, director of its Quantum Information Center, and previously a professor at MIT. His research interests center around the capabilities and limits of quantum computers and computational complexity theory more generally.
This episode is also supported by the Techmeme Ride Home podcast. Get it on Apple Podcasts, on its website, or find it by searching "Ride Home" in your podcast app.
00:00 - Introduction
05:07 - Role of philosophy in science
29:27 - What is a quantum computer?
41:12 - Quantum decoherence (noise in quantum information)
49:22 - Quantum computer engineering challenges
51:00 - Moore's Law
56:33 - Quantum supremacy
1:12:18 - Using quantum computers to break cryptography
1:17:11 - Practical application of quantum computers
1:22:18 - Quantum machine learning, questionable claims, and cautious optimism
1:30:53 - Meaning of life | |||
20 Feb 2020 | #73 – Andrew Ng: Deep Learning, Education, and Real-World AI | 01:29:29 | |
Andrew Ng is one of the most impactful educators, researchers, innovators, and leaders in artificial intelligence and the technology space in general. He co-founded Coursera and Google Brain, launched deeplearning.ai, Landing.ai, and the AI Fund, and was the Chief Scientist at Baidu. As a Stanford professor, and with Coursera and deeplearning.ai, he has helped educate and inspire millions of students, including me.
EPISODE LINKS:
Andrew Twitter: https://twitter.com/AndrewYNg
Andrew Facebook: https://www.facebook.com/andrew.ng.96
Andrew LinkedIn: https://www.linkedin.com/in/andrewyng/
deeplearning.ai: https://www.deeplearning.ai
landing.ai: https://landing.ai
AI Fund: https://aifund.ai/
AI for Everyone: https://www.coursera.org/learn/ai-for-everyone
The Batch newsletter: https://www.deeplearning.ai/thebatch/
OUTLINE:
00:00 - Introduction
02:23 - First few steps in AI
05:05 - Early days of online education
16:07 - Teaching on a whiteboard
17:46 - Pieter Abbeel and early research at Stanford
23:17 - Early days of deep learning
32:55 - Quick preview: deeplearning.ai, landing.ai, and AI fund
33:23 - deeplearning.ai: how to get started in deep learning
45:55 - Unsupervised learning
49:40 - deeplearning.ai (continued)
56:12 - Career in deep learning
58:56 - Should you get a PhD?
1:03:28 - AI fund - building startups
1:11:14 - Landing.ai - growing AI efforts in established companies
1:20:44 - Artificial general intelligence | |||
24 Feb 2020 | #74 – Michael I. Jordan: Machine Learning, Recommender Systems, and the Future of AI | 01:46:17 | |
Michael I. Jordan is a professor at Berkeley, and one of the most influential people in the history of machine learning, statistics, and artificial intelligence. He has been cited over 170,000 times and has mentored many of the world-class researchers defining the field of AI today, including Andrew Ng, Zoubin Ghahramani, Ben Taskar, and Yoshua Bengio.
EPISODE LINKS:
(Blog post) Artificial Intelligence—The Revolution Hasn’t Happened Yet
OUTLINE:
00:00 - Introduction
03:02 - How far are we in development of AI?
08:25 - Neuralink and brain-computer interfaces
14:49 - The term "artificial intelligence"
19:00 - Does science progress by ideas or personalities?
19:55 - Disagreement with Yann LeCun
23:53 - Recommender systems and distributed decision-making at scale
43:34 - Facebook, privacy, and trust
1:01:11 - Are human beings fundamentally good?
1:02:32 - Can a human life and society be modeled as an optimization problem?
1:04:27 - Is the world deterministic?
1:04:59 - Role of optimization in multi-agent systems
1:09:52 - Optimization of neural networks
1:16:08 - Beautiful idea in optimization: Nesterov acceleration
1:19:02 - What is statistics?
1:29:21 - What is intelligence?
1:37:01 - Advice for students
1:39:57 - Which language is more beautiful: English or French? | |||
26 Feb 2020 | #75 – Marcus Hutter: Universal Artificial Intelligence, AIXI, and AGI | 01:40:23 | |
Marcus Hutter is a senior research scientist at DeepMind and a professor at the Australian National University. Over his research career, including work with Jürgen Schmidhuber and Shane Legg, he has proposed many interesting ideas in and around the field of artificial general intelligence, most notably the AIXI model, a mathematical approach to AGI that incorporates ideas from Kolmogorov complexity, Solomonoff induction, and reinforcement learning.
EPISODE LINKS:
Hutter Prize: http://prize.hutter1.net
Marcus web: http://www.hutter1.net
Books mentioned:
- Universal AI: https://amzn.to/2waIAuw
- AI: A Modern Approach: https://amzn.to/3camxnY
- Reinforcement Learning: https://amzn.to/2PoANj9
- Theory of Knowledge: https://amzn.to/3a6Vp7x
OUTLINE:
00:00 - Introduction
03:32 - Universe as a computer
05:48 - Occam's razor
09:26 - Solomonoff induction
15:05 - Kolmogorov complexity
20:06 - Cellular automata
26:03 - What is intelligence?
35:26 - AIXI - Universal Artificial Intelligence
1:05:24 - Where do rewards come from?
1:12:14 - Reward function for human existence
1:13:32 - Bounded rationality
1:16:07 - Approximation in AIXI
1:18:01 - Gödel machines
1:21:51 - Consciousness
1:27:15 - AGI community
1:32:36 - Book recommendations
1:36:07 - Two moments to relive (past and future) | |||
29 Feb 2020 | #76 – John Hopfield: Physics View of the Mind and Neurobiology | 01:13:16 | |
John Hopfield is a professor at Princeton whose life's work has woven beautifully through biology, chemistry, neuroscience, and physics. Most crucially, he saw the messy world of biology through the piercing eyes of a physicist. He is perhaps best known for his work on associative neural networks, now known as Hopfield networks, one of the early ideas that catalyzed the development of the modern field of deep learning.
EPISODE LINKS:
Now What? article: http://bit.ly/3843LeU
John Wikipedia: https://en.wikipedia.org/wiki/John_Hopfield
Books mentioned:
- Einstein's Dreams: https://amzn.to/2PBa96X
- Mind is Flat: https://amzn.to/2I3YB84
OUTLINE:
00:00 - Introduction
02:35 - Difference between biological and artificial neural networks
08:49 - Adaptation
13:45 - Physics view of the mind
23:03 - Hopfield networks and associative memory
35:22 - Boltzmann machines
37:29 - Learning
39:53 - Consciousness
48:45 - Attractor networks and dynamical systems
53:14 - How do we build intelligent systems?
57:11 - Deep thinking as the way to arrive at breakthroughs
59:12 - Brain-computer interfaces
1:06:10 - Mortality
1:08:12 - Meaning of life | |||
03 Mar 2020 | #77 – Alex Garland: Ex Machina, Devs, Annihilation, and the Poetry of Science | 01:11:37 | |
Alex Garland is a writer and director of many imaginative and philosophical films, from the dreamlike exploration of human self-destruction in Annihilation to the deep questions of consciousness and intelligence raised in Ex Machina, which to me is one of the greatest movies on artificial intelligence ever made. I'm releasing this podcast to coincide with the release of his new series, Devs, which premieres this Thursday, March 5, on Hulu.
EPISODE LINKS:
Devs: https://hulu.tv/2x35HaH
Annihilation: https://hulu.tv/3ai9Eqk
Ex Machina: https://www.netflix.com/title/80023689
Alex IMDb: https://www.imdb.com/name/nm0307497/
Alex Wiki: https://en.wikipedia.org/wiki/Alex_Garland
OUTLINE:
00:00 - Introduction
03:42 - Are we living in a dream?
07:15 - Aliens
12:34 - Science fiction: imagination becoming reality
17:29 - Artificial intelligence
22:40 - The new "Devs" series and the veneer of virtue in Silicon Valley
31:50 - Ex Machina and 2001: A Space Odyssey
44:58 - Lone genius
49:34 - Drawing inspiration from Elon Musk
51:24 - Space travel
54:03 - Free will
57:35 - Devs and the poetry of science
1:06:38 - What will you be remembered for? | |||
05 Mar 2020 | #78 – Ann Druyan: Cosmos, Carl Sagan, Voyager, and the Beauty of Science | 01:09:40 | |
Ann Druyan is a writer, producer, director, and one of the most important and impactful communicators of science in our time. She co-wrote the 1980 science documentary series Cosmos, hosted by Carl Sagan, whom she married in 1981. Her love for him, with the help of NASA, was recorded as brain waves on a golden record, along with other things our civilization has to offer, and launched into space on the Voyager 1 and Voyager 2 spacecraft, which are now, 42 years later, still active, reaching farther into deep space than any human-made object ever has. This was a profound and beautiful decision she made as Creative Director of NASA's Voyager Interstellar Message Project. In 2014, she went on to create the second season of Cosmos, called Cosmos: A Spacetime Odyssey, and in 2020, the new third season, Cosmos: Possible Worlds, which is being released this upcoming Monday, March 9. It is hosted, once again, by the fun and brilliant Neil deGrasse Tyson.
EPISODE LINKS:
Cosmos Twitter: https://twitter.com/COSMOSonTV
Cosmos Website: https://fox.tv/CosmosOnTV
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast".
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
03:24 - Role of science in society
07:04 - Love and science
09:07 - Skepticism in science
14:15 - Voyager, Carl Sagan, and the Golden Record
36:41 - Cosmos
53:22 - Existential threats
1:00:36 - Origin of life
1:04:22 - Mortality | |||
07 Mar 2020 | #79 – Lee Smolin: Quantum Gravity and Einstein’s Unfinished Revolution | 01:10:19 | |
Lee Smolin is a theoretical physicist, co-inventor of loop quantum gravity, and a contributor of many interesting ideas to cosmology, quantum field theory, the foundations of quantum mechanics, theoretical biology, and the philosophy of science. He is the author of several books including one that critiques the state of physics and string theory called The Trouble with Physics, and his latest book, Einstein's Unfinished Revolution: The Search for What Lies Beyond the Quantum.
EPISODE LINKS:
Books mentioned:
- Einstein's Unfinished Revolution by Lee Smolin: https://amzn.to/2TsF5c3
- The Trouble With Physics by Lee Smolin: https://amzn.to/2v1FMzy
- Against Method by Paul Feyerabend: https://amzn.to/2VOPXCD
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast".
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
03:03 - What is real?
05:03 - Scientific method and scientific progress
24:57 - Eric Weinstein and radical ideas in science
29:32 - Quantum mechanics and general relativity
47:24 - Sean Carroll and many-worlds interpretation of quantum mechanics
55:33 - Principles in science
57:24 - String theory | |||
16 Mar 2020 | #80 – Vitalik Buterin: Ethereum, Cryptocurrency, and the Future of Money | 01:35:31 | |
Vitalik Buterin is co-creator of Ethereum and ether, currently the second-largest cryptocurrency after bitcoin. Ethereum has a lot of interesting technical ideas that are defining the future of blockchain technology, and Vitalik is one of the most brilliant people innovating in this space today.
Support this podcast by supporting the sponsors with a special code:
- Get ExpressVPN at https://www.expressvpn.com/lexpod
- Sign up to MasterClass at https://masterclass.com/lex
EPISODE LINKS:
Vitalik blog: https://vitalik.ca
Ethereum whitepaper: http://bit.ly/3cVDTpj
Casper FFG (paper): http://bit.ly/2U6j7dJ
Quadratic funding (paper): http://bit.ly/3aUZ8Wd
Bitcoin whitepaper: https://bitcoin.org/bitcoin.pdf
Mastering Ethereum (book): https://amzn.to/2xEjWmE
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
04:43 - Satoshi Nakamoto
08:40 - Anonymity
11:31 - Open source project leadership
13:04 - What is money?
30:02 - Blockchain and cryptocurrency basics
46:51 - Ethereum
59:23 - Proof of work
1:02:12 - Ethereum 2.0
1:13:09 - Beautiful ideas in Ethereum
1:16:59 - Future of cryptocurrency
1:22:06 - Cryptocurrency resources and people to follow
1:24:28 - Role of governments
1:27:27 - Meeting Putin
1:29:41 - Large number of cryptocurrencies
1:32:49 - Mortality | |||
19 Mar 2020 | #81 – Anca Dragan: Human-Robot Interaction and Reward Engineering | 01:39:01 | |
Anca Dragan is a professor at Berkeley, working on human-robot interaction -- algorithms that look beyond the robot's function in isolation, and generate robot behavior that accounts for interaction and coordination with human beings.
Support this podcast by supporting the sponsors and using the special code:
- Download Cash App on the App Store or Google Play & use code "LexPodcast"
EPISODE LINKS:
Anca's Twitter: https://twitter.com/ancadianadragan
Anca's Website: https://people.eecs.berkeley.edu/~anca/
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
02:26 - Interest in robotics
05:32 - Computer science
07:32 - Favorite robot
13:25 - How difficult is human-robot interaction?
32:01 - HRI application domains
34:24 - Optimizing the beliefs of humans
45:59 - Difficulty of driving when humans are involved
1:05:02 - Semi-autonomous driving
1:10:39 - How do we specify good rewards?
1:17:30 - Leaked information from human behavior
1:21:59 - Three laws of robotics
1:26:31 - Book recommendation
1:29:02 - If a doctor gave you 5 years to live...
1:32:48 - Small act of kindness
1:34:31 - Meaning of life | |||
21 Mar 2020 | #82 – Simon Sinek: Leadership, Hard Work, Optimism and the Infinite Game | 00:38:16 | |
Simon Sinek is the author of several books, including Start With Why, Leaders Eat Last, and his latest, The Infinite Game. He is one of the best communicators of what it takes to be a good leader, to inspire, and to build businesses that solve big, difficult challenges.
Support this podcast by signing up with these sponsors:
- MasterClass: https://masterclass.com/lex
- Cash App - use code "LexPodcast" and download:
- Cash App (App Store): https://apple.co/2sPrUHe
- Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Simon twitter: https://twitter.com/simonsinek
Simon facebook: https://www.facebook.com/simonsinek
Simon website: https://simonsinek.com/
Books:
- Infinite Game: https://amzn.to/2WxBH1i
- Leaders Eat Last: https://amzn.to/2xf70Ds
- Start with Why: https://amzn.to/2WxBH1i
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
0:00 - Introduction
3:50 - Meaning of life as an infinite game
10:13 - Optimism
13:30 - Mortality
17:52 - Hard work
26:38 - Elon Musk, Steve Jobs, and leadership | |||
26 Mar 2020 | #83 – Nick Bostrom: Simulation and Superintelligence | 01:57:06 | |
Nick Bostrom is a philosopher at the University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risk, the simulation hypothesis, human enhancement ethics, and the risks of superintelligent AI systems, including in his book Superintelligence. I can see talking to Nick multiple times on this podcast, many hours each time, but we have to start somewhere.
Support this podcast by signing up with these sponsors:
- Cash App - use code "LexPodcast" and download:
- Cash App (App Store): https://apple.co/2sPrUHe
- Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Nick's website: https://nickbostrom.com/
Future of Humanity Institute:
- https://twitter.com/fhioxford
- https://www.fhi.ox.ac.uk/
Books:
- Superintelligence: https://amzn.to/2JckX83
Wikipedia:
- https://en.wikipedia.org/wiki/Simulation_hypothesis
- https://en.wikipedia.org/wiki/Principle_of_indifference
- https://en.wikipedia.org/wiki/Doomsday_argument
- https://en.wikipedia.org/wiki/Global_catastrophic_risk
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
02:48 - Simulation hypothesis and simulation argument
12:17 - Technologically mature civilizations
15:30 - Case 1: if something kills all possible civilizations
19:08 - Case 2: if we lose interest in creating simulations
22:03 - Consciousness
26:27 - Immersive worlds
28:50 - Experience machine
41:10 - Intelligence and consciousness
48:58 - Weighing probabilities of the simulation argument
1:01:43 - Elaborating on Joe Rogan conversation
1:05:53 - Doomsday argument and anthropic reasoning
1:23:02 - Elon Musk
1:25:26 - What's outside the simulation?
1:29:52 - Superintelligence
1:47:27 - AGI utopia
1:52:41 - Meaning of life | |||
31 Mar 2020 | #85 – Roger Penrose: Physics of Consciousness and the Infinite Universe | 01:28:25 | |
Roger Penrose is a physicist, mathematician, and philosopher at the University of Oxford. He has made fundamental contributions in many disciplines, from the mathematical physics of general relativity and cosmology to the limitations of a computational view of consciousness.
Support this podcast by signing up with these sponsors:
- ExpressVPN at https://www.expressvpn.com/lexpod
- Cash App - use code "LexPodcast" and download:
- Cash App (App Store): https://apple.co/2sPrUHe
- Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Cycles of Time (book): https://amzn.to/39tXtpp
The Emperor's New Mind (book): https://amzn.to/2yfeVkD
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
03:51 - 2001: A Space Odyssey
09:43 - Consciousness and computation
23:45 - What does it mean to "understand"
31:37 - What's missing in quantum mechanics?
40:09 - Whatever consciousness is, it's not a computation
44:13 - Source of consciousness in the human brain
1:02:57 - Infinite cycles of big bangs
1:22:05 - Most beautiful idea in mathematics | |||
03 Apr 2020 | #86 – David Silver: AlphaGo, AlphaZero, and Deep Reinforcement Learning | 01:48:28 | |
David Silver leads the reinforcement learning research group at DeepMind. He was the lead researcher on AlphaGo and AlphaZero, co-lead on AlphaStar and MuZero, and has contributed a lot of important work in reinforcement learning.
Support this podcast by signing up with these sponsors:
- MasterClass: https://masterclass.com/lex
- Cash App - use code "LexPodcast" and download:
- Cash App (App Store): https://apple.co/2sPrUHe
- Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Reinforcement learning (book): https://amzn.to/2Jwp5zG
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
04:09 - First program
11:11 - AlphaGo
21:42 - Rules of the game of Go
25:37 - Reinforcement learning: personal journey
30:15 - What is reinforcement learning?
43:51 - AlphaGo (continued)
53:40 - Supervised learning and self play in AlphaGo
1:06:12 - Lee Sedol's retirement from Go
1:08:57 - Garry Kasparov
1:14:10 - AlphaZero and self play
1:31:29 - Creativity in AlphaZero
1:35:21 - AlphaZero applications
1:37:59 - Reward functions
1:40:51 - Meaning of life | |||
09 Apr 2020 | #87 – Richard Dawkins: Evolution, Intelligence, Simulation, and Memes | 01:07:48 | |
Richard Dawkins is an evolutionary biologist and author of The Selfish Gene, The Blind Watchmaker, The God Delusion, The Magic of Reality, The Greatest Show on Earth, and his latest, Outgrowing God. He is the originator and popularizer of a lot of fascinating ideas in evolutionary biology and science in general, including, funnily enough, the introduction of the word meme in his 1976 book The Selfish Gene, which in the context of a gene-centered view of evolution is an exceptionally powerful idea. He is outspoken, bold, and often fearless in his defense of science and reason, and in this way is one of the most influential thinkers of our time.
Support this podcast by signing up with these sponsors:
- Cash App - use code "LexPodcast" and download:
- Cash App (App Store): https://apple.co/2sPrUHe
- Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Richard's Website: https://www.richarddawkins.net/
Richard's Twitter: https://twitter.com/RichardDawkins
Richard's Books:
- Selfish Gene: https://amzn.to/34tpHQy
- The Magic of Reality: https://amzn.to/3c0aqZQ
- The Blind Watchmaker: https://amzn.to/2RqV5tH
- The God Delusion: https://amzn.to/2JPrxlc
- Outgrowing God: https://amzn.to/3ebFess
- The Greatest Show on Earth: https://amzn.to/2Rp2j1h
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
02:31 - Intelligent life in the universe
05:03 - Engineering intelligence (are there shortcuts?)
07:06 - Is the evolutionary process efficient?
10:39 - Human brain and AGI
15:31 - Memes
26:37 - Does society need religion?
33:10 - Conspiracy theories
39:10 - Where do morals come from in humans?
46:10 - AI began with the ancient wish to forge the gods
49:18 - Simulation
56:58 - Books that influenced you
1:02:53 - Meaning of life | |||
13 Apr 2020 | #88 – Eric Weinstein: Geometric Unity and the Call for New Ideas, Leaders & Institutions | 02:47:04 | |
Eric Weinstein is a mathematician with a bold and piercing intelligence, unafraid to explore the biggest questions in the universe and shine a light on the darkest corners of our society. He is the host of The Portal podcast, as part of which he recently released his 2013 Oxford lecture on his theory of Geometric Unity, which is at the center of his lifelong effort to arrive at a theory of everything that unifies the fundamental laws of physics.
Support this podcast by signing up with these sponsors:
- Cash App - use code "LexPodcast" and download:
- Cash App (App Store): https://apple.co/2sPrUHe
- Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Eric's Twitter: https://twitter.com/EricRWeinstein
Eric's YouTube: https://www.youtube.com/ericweinsteinphd
The Portal podcast: https://podcasts.apple.com/us/podcast/the-portal/id1469999563
Graph, Wall, Tome wiki: https://theportal.wiki/wiki/Graph,_Wall,_Tome
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
02:08 - World War II and the Coronavirus Pandemic
14:03 - New leaders
31:18 - Hope for our time
34:23 - WHO
44:19 - Geometric unity
1:38:55 - We need to get off this planet
1:40:47 - Elon Musk
1:46:58 - Take Back MIT
2:15:31 - The time at Harvard
2:37:01 - The Portal
2:42:58 - Legacy | |||
18 Apr 2020 | #89 – Stephen Wolfram: Cellular Automata, Computation, and Physics | 03:11:36 | |
Stephen Wolfram is a computer scientist, mathematician, and theoretical physicist who is the founder and CEO of Wolfram Research, the company behind Mathematica, Wolfram Alpha, Wolfram Language, and the new Wolfram Physics Project. He is the author of several books including A New Kind of Science, which on a personal note was one of the most influential books in my journey in computer science and artificial intelligence.
Support this podcast by signing up with these sponsors:
- ExpressVPN at https://www.expressvpn.com/lexpod
- Cash App - use code "LexPodcast" and download:
- Cash App (App Store): https://apple.co/2sPrUHe
- Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Stephen's Twitter: https://twitter.com/stephen_wolfram
Stephen's Website: https://www.stephenwolfram.com/
Wolfram Research Twitter: https://twitter.com/WolframResearch
Wolfram Research YouTube: https://www.youtube.com/user/WolframResearch
Wolfram Research Website: https://www.wolfram.com/
Wolfram Alpha: https://www.wolframalpha.com/
A New Kind of Science (book): https://amzn.to/34JruB2
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
04:16 - Communicating with an alien intelligence
12:11 - Monolith in 2001: A Space Odyssey
29:06 - What is computation?
44:54 - Physics emerging from computation
1:14:10 - Simulation
1:19:23 - Fundamental theory of physics
1:28:01 - Richard Feynman
1:39:57 - Role of ego in science
1:47:21 - Cellular automata
2:15:08 - Wolfram language
2:55:14 - What is intelligence?
2:57:47 - Consciousness
3:02:36 - Mortality
3:05:47 - Meaning of life | |||
22 Apr 2020 | #90 – Dmitry Korkin: Computational Biology of Coronavirus | 02:09:30 | |
Dmitry Korkin is a professor of bioinformatics and computational biology at Worcester Polytechnic Institute, where he specializes in bioinformatics of complex disease, computational genomics, systems biology, and biomedical data analytics. I came across Dmitry's work when, in February, his group used the viral genome of COVID-19 to reconstruct the 3D structure of its major viral proteins and their interactions with human proteins, in effect creating a structural genomics map of the coronavirus and making this data open and available to researchers everywhere. We talked about the biology of COVID-19, SARS, and viruses in general, and how computational methods can help us understand their structure and function in order to develop antiviral drugs and vaccines.
Support this podcast by signing up with these sponsors:
- Cash App - use code "LexPodcast" and download:
- Cash App (App Store): https://apple.co/2sPrUHe
- Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Dmitry's Website: http://korkinlab.org/
Dmitry's Twitter: https://twitter.com/dmkorkin
Dmitry's Paper that we discuss: https://bit.ly/3eKghEM
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
02:33 - Viruses are terrifying and fascinating
06:02 - How hard is it to engineer a virus?
10:48 - What makes a virus contagious?
29:52 - Figuring out the function of a protein
53:27 - Functional regions of viral proteins
1:19:09 - Biology of a coronavirus treatment
1:34:46 - Is a virus alive?
1:37:05 - Epidemiological modeling
1:55:27 - Russia
2:02:31 - Science bobbleheads
2:06:31 - Meaning of life | |||
24 Apr 2020 | #91 – Jack Dorsey: Square, Cryptocurrency, and Artificial Intelligence | 00:51:42 | |
Jack Dorsey is the co-founder and CEO of Twitter and the founder and CEO of Square.
Support this podcast by signing up with these sponsors:
- MasterClass: https://masterclass.com/lex
EPISODE LINKS:
Jack's Twitter: https://twitter.com/jack
Start Small Tracker: https://bit.ly/2KxdiBL
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
02:48 - Engineering at scale
08:36 - Increasing access to the economy
13:09 - Machine learning at Square
15:18 - Future of the digital economy
17:17 - Cryptocurrency
25:31 - Artificial intelligence
27:49 - Her
29:12 - Exchange with Elon Musk about bots
32:05 - Concerns about artificial intelligence
35:40 - Andrew Yang
40:57 - Eating one meal a day
45:49 - Mortality
47:50 - Meaning of life
48:59 - Simulation | |||
29 Apr 2020 | #92 – Harry Cliff: Particle Physics and the Large Hadron Collider | 01:38:47 | |
Harry Cliff is a particle physicist at the University of Cambridge working on the Large Hadron Collider beauty experiment, which specializes in searching for hints of new particles and forces by studying a type of particle called the "beauty quark", or "b quark". In this way, he is part of the group of physicists who are searching for answers to some of the biggest questions in modern physics. He is also an exceptional communicator of science, with some of the clearest and most captivating explanations of basic concepts in particle physics I've ever heard.
Support this podcast by signing up with these sponsors:
- ExpressVPN at https://www.expressvpn.com/lexpod
- Cash App - use code "LexPodcast" and download:
- Cash App (App Store): https://apple.co/2sPrUHe
- Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Harry's Website: https://www.harrycliff.co.uk/
Harry's Twitter: https://twitter.com/harryvcliff
Beyond the Higgs Lecture: https://www.youtube.com/watch?v=edvdzh9Pggg
Harry's stand-up: https://www.youtube.com/watch?v=dnediKM_Sts
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
03:51 - LHC and particle physics
13:55 - History of particle physics
38:59 - Higgs particle
57:55 - Unknowns yet to be discovered
59:48 - Beauty quarks
1:07:38 - Matter and antimatter
1:10:22 - Human side of the Large Hadron Collider
1:17:27 - Future of large particle colliders
1:24:09 - Data science with particle physics
1:27:17 - Science communication
1:33:36 - Most beautiful idea in physics | |||
05 May 2020 | #93 – Daphne Koller: Biomedicine and Machine Learning | 01:12:31 | |
Daphne Koller is a professor of computer science at Stanford University, a co-founder of Coursera with Andrew Ng, and founder and CEO of insitro, a company at the intersection of machine learning and biomedicine.
Support this podcast by signing up with these sponsors:
- Cash App - use code "LexPodcast" and download:
- Cash App (App Store): https://apple.co/2sPrUHe
- Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Daphne's Twitter: https://twitter.com/daphnekoller
Daphne's Website: https://ai.stanford.edu/users/koller/index.html
Insitro: http://insitro.com
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
02:22 - Will we one day cure all disease?
06:31 - Longevity
10:16 - Role of machine learning in treating diseases
13:05 - A personal journey to medicine
16:25 - Insitro and disease-in-a-dish models
33:25 - What diseases can be helped with disease-in-a-dish approaches?
36:43 - Coursera and education
49:04 - Advice to people interested in AI
50:52 - Beautiful idea in deep learning
55:10 - Uncertainty in AI
58:29 - AGI and AI safety
1:06:52 - Are most people good?
1:09:04 - Meaning of life | |||
08 May 2020 | #94 – Ilya Sutskever: Deep Learning | 01:37:55 | |
Ilya Sutskever is a co-founder of OpenAI and one of the most cited computer scientists in history, with over 165,000 citations, and to me is one of the most brilliant and insightful minds ever in the field of deep learning. There are very few people in this world who I would rather talk to and brainstorm with about deep learning, intelligence, and life than Ilya, on and off the mic.
Support this podcast by signing up with these sponsors:
- Cash App - use code "LexPodcast" and download:
- Cash App (App Store): https://apple.co/2sPrUHe
- Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Ilya's Twitter: https://twitter.com/ilyasut
Ilya's Website: https://www.cs.toronto.edu/~ilya/
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
02:23 - AlexNet paper and the ImageNet moment
08:33 - Cost functions
13:39 - Recurrent neural networks
16:19 - Key ideas that led to success of deep learning
19:57 - What's harder to solve: language or vision?
29:35 - We're massively underestimating deep learning
36:04 - Deep double descent
41:20 - Backpropagation
42:42 - Can neural networks be made to reason?
50:35 - Long-term memory
56:37 - Language models
1:00:35 - GPT-2
1:07:14 - Active learning
1:08:52 - Staged release of AI systems
1:13:41 - How to build AGI?
1:25:00 - Question to AGI
1:32:07 - Meaning of life | |||
12 May 2020 | #95 – Dawn Song: Adversarial Machine Learning and Computer Security | 02:13:04 | |
Dawn Song is a professor of computer science at UC Berkeley with research interests in security, most recently with a focus on the intersection between computer security and machine learning.
Support this podcast by signing up with these sponsors:
- Cash App - use code "LexPodcast" and download:
- Cash App (App Store): https://apple.co/2sPrUHe
- Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Dawn's Twitter: https://twitter.com/dawnsongtweets
Dawn's Website: https://people.eecs.berkeley.edu/~dawnsong/
Oasis Labs: https://www.oasislabs.com
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
01:53 - Will software always have security vulnerabilities?
09:06 - Humans are the weakest link in security
16:50 - Adversarial machine learning
51:27 - Adversarial attacks on Tesla Autopilot and self-driving cars
57:33 - Privacy attacks
1:05:47 - Ownership of data
1:22:13 - Blockchain and cryptocurrency
1:32:13 - Program synthesis
1:44:57 - A journey from physics to computer science
1:56:03 - US and China
1:58:19 - Transformative moment
2:00:02 - Meaning of life | |||
15 May 2020 | #96 – Stephen Schwarzman: Going Big in Business, Investing, and AI | 01:10:44 | |
Stephen Schwarzman is the CEO and Co-Founder of Blackstone, one of the world's leading investment firms with over 530 billion dollars of assets under management. He is one of the most successful business leaders in history, all from humble beginnings back in Philly. I recommend his recent book called What It Takes that tells stories and lessons from this personal journey.
Support this podcast by signing up with these sponsors:
- ExpressVPN at https://www.expressvpn.com/lexpod
- MasterClass: https://masterclass.com/lex
EPISODE LINKS:
What It Takes (book): https://amzn.to/2WX9cZu
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
04:17 - Going big in business
07:34 - How to recognize an opportunity
16:00 - Solving problems that people have
25:26 - Philanthropy
32:51 - Hope for the new College of Computing at MIT
37:32 - Unintended consequences of technological innovation
42:24 - Education systems in China and United States
50:22 - American AI Initiative
59:53 - Starting a business is a rough ride
1:04:26 - Love and family | |||
20 May 2020 | #97 – Sertac Karaman: Robots That Fly and Robots That Drive | 01:23:18 | |
Sertac Karaman is a professor at MIT, co-founder of the autonomous vehicle company Optimus Ride, and one of the top roboticists in the world, whose work spans robots that drive and robots that fly.
Support this podcast by signing up with these sponsors:
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Sertac's Website: http://sertac.scripts.mit.edu/web/
Sertac's Twitter: https://twitter.com/sertackaraman
Optimus Ride: https://www.optimusride.com/
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
01:44 - Autonomous flying vs autonomous driving
06:37 - Flying cars
10:27 - Role of simulation in robotics
17:35 - Game theory and robotics
24:30 - Autonomous vehicle company strategies
29:46 - Optimus Ride
47:08 - Waymo, Tesla, Optimus Ride timelines
53:22 - Achieving the impossible
53:50 - Iterative learning
58:39 - Is Lidar a crutch?
1:03:21 - Fast autonomous flight
1:18:06 - Most beautiful idea in robotics | |||
28 May 2020 | #99 – Karl Friston: Neuroscience and the Free Energy Principle | 01:29:29 | |
Karl Friston is one of the greatest neuroscientists in history, cited over 245,000 times, known for many influential ideas in brain imaging, neuroscience, and theoretical neurobiology, including the fascinating idea of the free-energy principle for action and perception.
Support this podcast by signing up with these sponsors:
– Cash App – use code "LexPodcast" and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w
EPISODE LINKS:
Karl's Website: https://www.fil.ion.ucl.ac.uk/~karl/
Karl's Wiki: https://en.wikipedia.org/wiki/Karl_J._Friston
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
01:50 - How much of the human brain do we understand?
05:53 - Most beautiful characteristic of the human brain
10:43 - Brain imaging
20:38 - Deep structure
21:23 - History of brain imaging
32:31 - Neuralink and brain-computer interfaces
43:05 - Free energy principle
1:24:29 - Meaning of life | |||
13 Jun 2020 | #101 – Joscha Bach: Artificial Consciousness and the Nature of Reality | 03:00:45 | |
Joscha Bach is the VP of Research at the AI Foundation, previously doing research at MIT and Harvard. Joscha's work explores the workings of the human mind, intelligence, consciousness, life on Earth, and the possibly-simulated fabric of our universe.
Support this podcast by supporting these sponsors:
- ExpressVPN at https://www.expressvpn.com/lexpod
- Cash App – use code "LexPodcast" and download:
- Cash App (App Store): https://apple.co/2sPrUHe
- Cash App (Google Play): https://bit.ly/2MlvP5w
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
03:14 - Reverse engineering Joscha Bach
10:38 - Nature of truth
18:47 - Original thinking
23:14 - Sentience vs intelligence
31:45 - Mind vs Reality
46:51 - Hard problem of consciousness
51:09 - Connection between the mind and the universe
56:29 - What is consciousness
1:02:32 - Language and concepts
1:09:02 - Meta-learning
1:16:35 - Spirit
1:18:10 - Our civilization may not exist for long
1:37:48 - Twitter and social media
1:44:52 - What systems of government might work well?
1:47:12 - The way out of self-destruction with AI
1:55:18 - AI simulating humans to understand its own nature
2:04:32 - Reinforcement learning
2:09:12 - Commonsense reasoning
2:15:47 - Would AGI need to have a body?
2:22:34 - Neuralink
2:27:01 - Reasoning at the scale of neurons and societies
2:37:16 - Role of emotion
2:48:03 - Happiness is a cookie that your brain bakes for itself | |||
20 Jun 2020 | #102 – Steven Pressfield: The War of Art | 01:27:56 | |
Steven Pressfield is a historian and author of The War of Art, a book that had a big impact on my life and the lives of millions whose passion is to create in art, science, business, sport, and everywhere else. I highly recommend it, along with his other books on this topic, including Turning Pro, Do the Work, Nobody Wants to Read Your Shit, and The Warrior Ethos. His historical novels, including Gates of Fire about the Spartans and the battle of Thermopylae, The Lion's Gate, and Tides of War, are some of the best historical fiction ever written.
Support this podcast by supporting these sponsors:
- Jordan Harbinger Show: https://jordanharbinger.com/lex/
- Cash App – use code "LexPodcast" and download:
- Cash App (App Store): https://apple.co/2sPrUHe
- Cash App (Google Play): https://bit.ly/2MlvP5w
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
05:00 - Nature of war
11:43 - The struggle within
17:11 - Love and hate in a time of war
25:17 - Future of warfare
28:31 - Technology in war
30:10 - What it takes to kill a person
32:22 - Mortality
37:30 - The muse
46:09 - Editing
52:19 - Resistance
1:10:41 - Loneliness
1:12:24 - Is a warrior born or trained?
1:13:53 - Hard work and health
1:18:41 - Daily ritual | |||
22 Jun 2020 | #103 – Ben Goertzel: Artificial General Intelligence | 04:09:25 | |
Ben Goertzel is one of the most interesting minds in the artificial intelligence community. He is the founder of SingularityNET, designer of the OpenCog AI framework, formerly a director of the Machine Intelligence Research Institute, and Chief Scientist of Hanson Robotics, the company that created the Sophia robot. He has been a central figure in the AGI community for many years, including in the Conference on Artificial General Intelligence.
Support this podcast by supporting these sponsors:
- Jordan Harbinger Show: https://jordanharbinger.com/lex/
- MasterClass: https://masterclass.com/lex
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.
Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
03:20 - Books that inspired you
06:38 - Are there intelligent beings all around us?
13:13 - Dostoevsky
15:56 - Russian roots
20:19 - When did you fall in love with AI?
31:30 - Are humans good or evil?
42:04 - Colonizing Mars
46:53 - Origin of the term AGI
55:56 - AGI community
1:12:36 - How to build AGI?
1:36:47 - OpenCog
2:25:32 - SingularityNET
2:49:33 - Sophia
3:16:02 - Coronavirus
3:24:14 - Decentralized mechanisms of power
3:40:16 - Life and death
3:42:44 - Would you live forever?
3:50:26 - Meaning of life
3:58:03 - Hat
3:58:46 - Question for AGI |