
80,000 Hours Podcast (Rob, Luisa, and the 80,000 Hours team)
Explore all episodes of the 80,000 Hours Podcast
Date | Title | Duration | |
---|---|---|---|
16 Dec 2019 | #67 – David Chalmers on the nature and ethics of consciousness | 04:41:50 | |
What is it like to be you right now? You're seeing this text on the screen, smelling the coffee next to you, and feeling the warmth of the cup. There’s a lot going on in your head — your conscious experience. Now imagine beings that are identical to humans, but for one thing: they lack this conscious experience. If you spill your coffee on them, they’ll jump like anyone else, but inside they'll feel no pain and have no thoughts: the lights are off. The concept of these so-called 'philosophical zombies' was popularised by today’s guest — celebrated philosophy professor David Chalmers — in order to explore the nature of consciousness. In a forthcoming book he poses a classic 'trolley problem': "Suppose you have a conscious human on one train track, and five non-conscious humanoid zombies on another. If you do nothing, a trolley will hit and kill the conscious human. If you flip a switch to redirect the trolley, you can save the conscious human, but in so doing kill the five non-conscious humanoid zombies. What should you do?" Many people think you should divert the trolley, precisely because the lack of conscious experience means the moral status of the zombies is much reduced or absent entirely. So, which features of consciousness qualify someone for moral consideration? One view is that the only conscious states that matter are those that have a positive or negative quality, like pleasure and suffering. But Dave’s intuitions are quite different. • Links to learn more, summary and full transcript. Instead of zombies he asks us to consider 'Vulcans', who can see and hear and reflect on the world around them, but are incapable of experiencing pleasure or pain. Now imagine a further trolley problem: suppose you have a normal human on one track, and five Vulcans on the other. Should you divert the trolley to kill the five Vulcans in order to save the human? Dave firmly believes the answer is no, and if he's right, pleasure and suffering can’t be the only things required for moral status. The fact that Vulcans are conscious in other ways must matter in itself. Dave is one of the world's top experts on the philosophy of consciousness. He helped return the question 'what is consciousness?' to the centre stage of philosophy with his 1996 book 'The Conscious Mind', which argued against then-dominant materialist theories of consciousness. This comprehensive interview, at over four hours long, outlines each contemporary theory of consciousness, what they have going for them, and their likely ethical implications. Those theories span the full range from illusionism, the idea that consciousness is in some sense an 'illusion', to panpsychism, according to which it's a fundamental physical property present in all matter. These questions are absolutely central for anyone who wants to build a positive future. If insects were conscious our treatment of them could already be an atrocity. If computer simulations of people will one day be conscious, how will we know, and how should we treat them? And what is it about consciousness that matters, if anything? Dave Chalmers is probably the best person on the planet to ask these questions, and Rob & Arden cover this and much more over the course of what is both our longest ever episode, and our personal favourite so far.
| |||
17 Jun 2019 | #59 – Cass Sunstein on how change happens, and why it's so often abrupt & unpredictable | 01:43:24 | |
It can often feel hopeless to be an activist seeking social change on an obscure issue where most people seem opposed or at best indifferent to you. But according to a new book by Professor Cass Sunstein, they shouldn't despair. Large social changes are often abrupt and unexpected, arising in an environment of seeming public opposition. The Communist Revolution in Russia spread so swiftly it confounded even Lenin. Seventy years later the Soviet Union collapsed just as quickly and unpredictably. In the modern era we have gay marriage, #metoo and the Arab Spring, as well as nativism, Euroskepticism and Hindu nationalism. How can a society that so recently seemed to support the status quo bring about change in years, months, or even weeks? Sunstein — coauthor of Nudge, Obama White House official, and by far the most cited legal scholar of the late 2000s — aims to unravel the mystery and figure out the implications in his new book How Change Happens. He pulls together three phenomena which social scientists have studied in recent decades: preference falsification, variable thresholds for action, and group polarisation. If Sunstein is to be believed, together these are a cocktail for social shifts that are chaotic and fundamentally unpredictable. • Links to learn more, summary and full transcript. In brief, people constantly misrepresent their true views, even to close friends and family. They themselves aren't quite sure how socially acceptable their feelings would have to become, before they revealed them, or joined a campaign for social change. And a chance meeting between a few strangers can be the spark that radicalises a handful of people, who then find a message that can spread their views to millions. According to Sunstein, it's "much, much easier" to create social change when large numbers of people secretly or latently agree with you. But 'preference falsification' is so pervasive that it's no simple matter to figure out when that's the case. In today's interview, we debate with Sunstein whether this model of cultural change is accurate, and if so, what lessons it has for those who would like to shift the world in a more humane direction. We discuss: • How much people misrepresent their views in democratic countries. Chapters: The 80,000 Hours Podcast is produced by Keiran Harris. | |||
21 Oct 2020 | #86 – Hilary Greaves on Pascal's mugging, strong longtermism, and whether existing can be good for us | 02:24:54 | |
Had World War 1 never happened, you might never have existed. It’s very unlikely that the exact chain of events that led to your conception would have happened otherwise — so perhaps you wouldn't have been born. Would that mean that it's better for you that World War 1 happened (regardless of whether it was better for the world overall)? On the one hand, if you're living a pretty good life, you might think the answer is yes – you get to live rather than not. On the other hand, it sounds strange to say that it's better for you to be alive, because if you'd never existed there'd be no you to be worse off. But if you wouldn't be worse off if you hadn't existed, can you be better off because you do? In this episode, philosophy professor Hilary Greaves – Director of Oxford University’s Global Priorities Institute – helps untangle this puzzle for us and walks me and Rob through the space of possible answers. She argues that philosophers have been too quick to conclude what she calls existence non-comparativism – i.e, that it can't be better for someone to exist vs. not. Links to learn more, summary and full transcript. Where we come down on this issue matters. If people are not made better off by existing and having good lives, you might conclude that bringing more people into existence isn't better for them, and thus, perhaps, that it's not better at all. This would imply that bringing about a world in which more people live happy lives might not actually be a good thing (if the people wouldn't otherwise have existed) — which would affect how we try to make the world a better place. Those wanting to have children in order to give them the pleasure of a good life would in some sense be mistaken. And if humanity stopped bothering to have kids and just gradually died out we would have no particular reason to be concerned. Furthermore it might mean we should deprioritise issues that primarily affect future generations, like climate change or the risk of humanity accidentally wiping itself out. This is our second episode with Professor Greaves. The first one was a big hit, so we thought we'd come back and dive into even more complex ethical issues. We discuss: • The case for different types of ‘strong longtermism’ — the idea that we ought morally to try to make the very long run future go as well as possible Chapters:
Producer: Keiran Harris. | |||
20 Mar 2021 | #94 – Ezra Klein on aligning journalism, politics, and what matters most | 01:45:21 | |
How many words in U.S. newspapers have been spilled on tax policy in the past five years? And how many words on CRISPR? Or meat alternatives? Or how AI may soon automate the majority of jobs? When people look back on this era, is the interesting thing going to have been fights over whether or not the top marginal tax rate was 39.5% or 35.4%, or is it going to be that human beings started to take control of human evolution; that we stood on the brink of eliminating immeasurable levels of suffering on factory farms; and that for the first time the average American might become financially comfortable and unemployed simultaneously? Today’s guest is Ezra Klein, one of the most prominent journalists in the world. Ezra thinks that pressing issues are neglected largely because there's little pre-existing infrastructure to push them. Links to learn more, summary and full transcript. He points out that for a long time taxes have been considered hugely important in D.C. political circles — and maybe once they were. But either way, the result is that there are a lot of congressional committees, think tanks, and experts that have focused on taxes for decades and continue to produce a steady stream of papers, articles, and opinions for journalists they know to cover (often these are journalists hired to write specifically about tax policy). To Ezra (and to us, and to many others) AI seems obviously more important than marginal changes in taxation over the next 10 or 15 years — yet there's very little infrastructure for thinking about it. There isn't a committee in Congress that primarily deals with AI, and no one has a dedicated AI position in the executive branch of the U.S. Government; nor are big AI think tanks in D.C. producing weekly articles for journalists they know to report on. All of this generates a strong 'path dependence' that can lock the media in to covering less important topics despite having no intention to do so. According to Ezra, the hardest thing to do in journalism — as the leader of a publication, or even to some degree just as a writer — is to maintain your own sense of what’s important, and not just be swept along in the tide of what “the industry / the narrative / the conversation has decided is important." One reason Ezra created the Future Perfect vertical at Vox is that as he began to learn about effective altruism, he thought: "This is a framework for thinking about importance that could offer a different lens that we could use in journalism. It could help us order things differently.” Ezra says there is an audience for the stuff that we’d consider most important here at 80,000 Hours. It’s broadly believed that nobody will read articles on animal suffering, but Ezra says that his experience at Vox shows these stories actually do really well — and that many of the things that the effective altruist community cares a lot about are “...like catnip for readers.” Ezra’s bottom line for fellow journalists is that if something important is happening in the world and you can't make the audience interested in it, that is your failure — never the audience's failure. But is that really true? In today’s episode we explore that claim, as well as:
• How many hours of news the average person should consume
Producer: Keiran Harris. | |||
28 May 2021 | #101 – Robert Wright on using cognitive empathy to save the world | 01:36:00 | |
In 2003, Saddam Hussein refused to let Iraqi weapons scientists leave the country to be interrogated. Given the overwhelming domestic support for an invasion at the time, most key figures in the U.S. took that as confirmation that he had something to hide — probably an active WMD program. But what about alternative explanations? Maybe those scientists knew about past crimes. Or maybe they’d defect. Or maybe giving in to that kind of demand would have humiliated Hussein in the eyes of enemies like Iran and Saudi Arabia. According to today’s guest Robert Wright, host of the popular podcast The Wright Show, these are the kinds of things that might have come up if people were willing to look at things from Saddam Hussein’s perspective. Links to learn more, summary and full transcript. He calls this ‘cognitive empathy’. It's not feeling-your-pain-type empathy — it's just trying to understand how another person thinks. He says if you pitched this kind of thing back in 2003 you’d be shouted down as a 'Saddam apologist' — and he thinks the same is true today when it comes to regimes in China, Russia, Iran, and North Korea. The two Roberts in today’s episode — Bob Wright and Rob Wiblin — agree that removing this taboo against perspective taking, even with people you consider truly evil, could significantly improve discourse around international relations. They feel that if we could spread the idea that it's valuable to understand what dictators are thinking and calculating, based on their country’s history and interests, we’d be less likely to make terrible foreign policy errors. But how do you actually do that? Bob’s new ‘Apocalypse Aversion Project’ is focused on creating the necessary conditions for solving non-zero-sum global coordination problems, something most people are already on board with. And in particular he thinks that might come from enough individuals “transcending the psychology of tribalism”. He doesn’t just mean rage and hatred and violence; he’s also talking about cognitive biases. Bob makes the striking claim that if enough people in the U.S. had been able to combine perspective taking with mindfulness — the ability to notice and identify thoughts as they arise — then the U.S. might even have been able to avoid the invasion of Iraq. Rob pushes back on how realistic this approach really is, asking questions like:
• Haven’t people been trying to do this since the beginning of time? But despite the differences in approaches, Bob has a lot of common ground with 80,000 Hours — and the result is a fun back-and-forth about the best ways to achieve shared goals. Bob starts by questioning Rob about effective altruism, and they go on to cover a bunch of other topics, such as:
• Specific risks like climate change and new technologies If you're interested to hear more of Bob's interviews you can subscribe to The Wright Show anywhere you're getting this one. You can also watch videos of this and all his other episodes on Bloggingheads.tv.
Producer: Keiran Harris. | |||
08 Nov 2024 | Parenting insights from Rob and 8 past guests | 01:35:39 | |
With kids very much on the team's mind we thought it would be fun to review some comments about parenting featured on the show over the years, then have hosts Luisa Rodriguez and Rob Wiblin react to them. Links to learn more and full transcript. After hearing 8 former guests’ insights, Luisa and Rob chat about:
This bonus episode includes excerpts from:
Chapters:
Producer: Keiran Harris | |||
21 Nov 2024 | #208 – Elizabeth Cox on the case that TV shows, movies, and novels can improve the world | 02:22:03 | |
"I think stories are the way we shift the Overton window — so widen the range of things that are acceptable for policy and palatable to the public. Almost by definition, a lot of things that are going to be really important and shape the future are not in the Overton window, because they sound weird and off-putting and very futuristic. But I think stories are the best way to bring them in." — Elizabeth Cox In today’s episode, Keiran Harris speaks with Elizabeth Cox — founder of the independent production company Should We Studio — about the case that storytelling can improve the world. Links to learn more, highlights, and full transcript. They cover:
Material you might want to check out before listening:
Chapters:
Producer: Keiran Harris | |||
03 Jan 2022 | #67 Classic episode – David Chalmers on the nature and ethics of consciousness | 04:42:05 | |
Rebroadcast: this episode was originally released in December 2019. What is it like to be you right now? You're seeing this text on the screen, smelling the coffee next to you, and feeling the warmth of the cup. There’s a lot going on in your head — your conscious experience. Now imagine beings that are identical to humans, but for one thing: they lack this conscious experience. If you spill your coffee on them, they’ll jump like anyone else, but inside they'll feel no pain and have no thoughts: the lights are off. The concept of these so-called 'philosophical zombies' was popularised by today’s guest — celebrated philosophy professor David Chalmers — in order to explore the nature of consciousness. In a forthcoming book he poses a classic 'trolley problem': "Suppose you have a conscious human on one train track, and five non-conscious humanoid zombies on another. If you do nothing, a trolley will hit and kill the conscious human. If you flip a switch to redirect the trolley, you can save the conscious human, but in so doing kill the five non-conscious humanoid zombies. What should you do?" Many people think you should divert the trolley, precisely because the lack of conscious experience means the moral status of the zombies is much reduced or absent entirely. So, which features of consciousness qualify someone for moral consideration? One view is that the only conscious states that matter are those that have a positive or negative quality, like pleasure and suffering. But Dave’s intuitions are quite different. Links to learn more, summary and full transcript. Instead of zombies he asks us to consider 'Vulcans', who can see and hear and reflect on the world around them, but are incapable of experiencing pleasure or pain. Now imagine a further trolley problem: suppose you have a normal human on one track, and five Vulcans on the other. Should you divert the trolley to kill the five Vulcans in order to save the human? Dave firmly believes the answer is no, and if he's right, pleasure and suffering can’t be the only things required for moral status. The fact that Vulcans are conscious in other ways must matter in itself. Dave is one of the world's top experts on the philosophy of consciousness. He helped return the question 'what is consciousness?' to the centre stage of philosophy with his 1996 book 'The Conscious Mind', which argued against then-dominant materialist theories of consciousness. This comprehensive interview, at over four hours long, outlines each contemporary theory of consciousness, what they have going for them, and their likely ethical implications. Those theories span the full range from illusionism, the idea that consciousness is in some sense an 'illusion', to panpsychism, according to which it's a fundamental physical property present in all matter. These questions are absolutely central for anyone who wants to build a positive future. If insects were conscious our treatment of them could already be an atrocity. If computer simulations of people will one day be conscious, how will we know, and how should we treat them? And what is it about consciousness that matters, if anything? Dave Chalmers is probably the best person on the planet to ask these questions, and Rob & Arden cover this and much more over the course of what is both our longest ever episode, and our personal favourite so far. Get this episode by subscribing to our show on the world’s most pressing problems and how to solve them: search for 80,000 Hours in your podcasting app. 
The 80,000 Hours Podcast is produced by Keiran Harris. | |||
14 Aug 2023 | #160 – Hannah Ritchie on why it makes sense to be optimistic about the environment | 02:36:42 | |
"There's no money to invest in education elsewhere, so they almost get trapped in the cycle where they don't get a lot from crop production, but everyone in the family has to work there to just stay afloat. Basically, you get locked in. There's almost no opportunities externally to go elsewhere. So one of my core arguments is that if you're going to address global poverty, you have to increase agricultural productivity in sub-Saharan Africa. There's almost no way of avoiding that." — Hannah Ritchie In today’s episode, host Luisa Rodriguez interviews the head of research at Our World in Data — Hannah Ritchie — on the case for environmental optimism. Links to learn more, summary and full transcript. They cover:
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript. Producer and editor: Keiran Harris | |||
07 Jan 2021 | #73 - Phil Trammell on patient philanthropy and waiting to do good [re-release] | 02:41:06 | |
Rebroadcast: this episode was originally released in March 2020. To do good, most of us look to use our time and money to affect the world around us today. But perhaps that's all wrong. If you took $1,000 you were going to donate and instead put it in the stock market — where it grew on average 5% a year — in 100 years you'd have $125,000 to give away instead. And in 200 years you'd have $17 million. This astonishing fact has driven today's guest, economics researcher Philip Trammell at Oxford's Global Priorities Institute, to investigate the case for and against so-called 'patient philanthropy' in depth. If the case for patient philanthropy is as strong as Phil believes, many of us should be trying to improve the world in a very different way than we are now. He points out that on top of being able to dispense vastly more, whenever your trustees decide to use your gift to improve the world, they'll also be able to rely on the much broader knowledge available to future generations. A donor two hundred years ago couldn't have known distributing anti-malarial bed nets was a good idea. Not only did bed nets not exist — we didn't even know about germs, and almost nothing in medicine was justified by science. Does the COVID-19 emergency mean we should actually use resources right now? See Phil's first thoughts on this question here.
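To make the compounding arithmetic above concrete, here's a minimal sketch. It assumes a flat 5% annual return and ignores inflation, fees, and taxes; at exactly 5% the figures come out close to, though not identical to, the rounded numbers quoted above.

```python
def future_value(principal: float, annual_return: float, years: int) -> float:
    """Value of a lump sum compounded once a year at a constant rate."""
    return principal * (1 + annual_return) ** years

# $1,000 invested at an average 5% per year, as in the example above.
for years in (100, 200):
    print(f"${future_value(1_000, 0.05, years):,.0f} after {years} years")

# Prints roughly $131,501 after 100 years and $17,292,581 after 200 years --
# the same ballpark as the ~$125,000 and ~$17 million figures quoted above.
```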
• Links to learn more, summary and full transcript. What similar leaps will our descendants have made in 200 years, allowing your now vast foundation to benefit more people in even greater ways? And there's a third reason to wait as well. What are the odds that we today live at the most critical point in history, when resources happen to have the greatest ability to do good? It's possible. But the future may be very long, so there has to be a good chance that some moment in the future will be both more pivotal and more malleable than our own. Of course, there are many objections to this proposal. If you start a foundation you hope will wait around for centuries, might it not be destroyed in a war, revolution, or financial collapse? Or might it not drift from its original goals, eventually just serving the interest of its distant future trustees, rather than the noble pursuits you originally intended? Or perhaps it could fail for the reverse reason, by staying true to your original vision — if that vision turns out to be as deeply morally mistaken as the Rhodes Scholarship's initial charter, which limited it to 'white Christian men'. Alternatively, maybe the world will change in the meantime, making your gift useless. At one end, humanity might destroy itself before your trust tries to do anything with the money. Or perhaps everyone in the future will be so fabulously wealthy, or the problems of the world already so overcome, that your philanthropy will no longer be able to do much good. Are these concerns, all of them legitimate, enough to overcome the case in favour of patient philanthropy? In today's conversation with researcher Phil Trammell and my colleague Howie Lempel, we try to answer that, and also discuss:
• Historical attempts at patient philanthropy Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the transcript linked above.
Producer: Keiran Harris. | |||
29 Jun 2021 | #104 – Pardis Sabeti on the Sentinel system for detecting and stopping pandemics | 02:20:58 | |
When the first person with COVID-19 went to see a doctor in Wuhan, nobody could tell that it wasn’t a familiar disease like the flu — that we were dealing with something new. How much death and destruction could we have avoided if we'd had a hero who could? That's what the last Assistant Secretary of Defense Andy Weber asked on the show back in March. Today’s guest Pardis Sabeti is a professor at Harvard, fought Ebola on the ground in Africa during the 2014 outbreak, runs her own lab, co-founded a company that produces next-level testing, and is even the lead singer of a rock band. If anyone is going to be that hero in the next pandemic — it just might be her. Links to learn more, summary and full transcript. She is a co-author of the SENTINEL proposal, a practical system for detecting new diseases quickly, using an escalating series of three novel diagnostic techniques. The first method, called SHERLOCK, uses CRISPR gene editing to detect familiar viruses in a simple, inexpensive filter paper test, using non-invasive samples. If SHERLOCK draws a blank, we escalate to the second step, CARMEN, an advanced version of SHERLOCK that uses microfluidics and CRISPR to simultaneously detect hundreds of viruses and viral strains. More expensive, but far more comprehensive. If neither SHERLOCK nor CARMEN detects a known pathogen, it's time to pull out the big gun: metagenomic sequencing. More expensive still, but sequencing all the DNA in a patient sample lets you identify and track every virus — known and unknown — in a sample. If Pardis and her team succeeds, our future pandemic potential patient zero may: 1. Go to the hospital with flu-like symptoms, and immediately be tested using SHERLOCK — which will come back negative It's a wonderful vision, and one humanity is ready to test out. But there are all sorts of practical questions, such as: • How do you scale these technologies, including to remote and rural areas? In this conversation Pardis and Rob address all those questions, as well as: • Pardis’ history with trying to control emerging contagious diseases Chapters:
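The three-step escalation just described amounts to a simple triage flow: start with the cheapest, most targeted test, and only escalate when it comes back negative. Here's a minimal sketch of that logic; the function names, mock virus lists, and return values are purely illustrative stand-ins, not part of the real SENTINEL system.

```python
# Each "test" is a toy stand-in: it just checks the sample against a mock
# reference set, standing in for the real assay chemistry.
COMMON_VIRUSES = {"influenza A", "SARS-CoV-2", "lassa"}        # illustrative
EXTENDED_PANEL = COMMON_VIRUSES | {"nipah", "zika", "mers"}    # illustrative

def sherlock_test(sample: set[str]) -> str | None:
    """Step 1: cheap, targeted CRISPR paper test for a handful of familiar viruses."""
    return next(iter(sample & COMMON_VIRUSES), None)

def carmen_panel(sample: set[str]) -> str | None:
    """Step 2: microfluidic CRISPR panel covering hundreds of known viruses and strains."""
    return next(iter(sample & EXTENDED_PANEL), None)

def metagenomic_sequencing(sample: set[str]) -> str:
    """Step 3: sequence everything in the sample, known or unknown."""
    return ", ".join(sorted(sample)) or "no pathogen sequences found"

def diagnose(sample: set[str]) -> str:
    """Escalate from cheapest to most comprehensive, stopping at the first hit."""
    if (virus := sherlock_test(sample)):
        return f"SHERLOCK: {virus}"
    if (virus := carmen_panel(sample)):
        return f"CARMEN: {virus}"
    return f"metagenomic sequencing: {metagenomic_sequencing(sample)}"

print(diagnose({"influenza A"}))      # caught at step 1
print(diagnose({"nipah"}))            # caught at step 2
print(diagnose({"novel virus X"}))    # only step 3 characterises it
```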
Producer: Keiran Harris. | |||
29 May 2024 | #189 – Rachel Glennerster on how “market shaping” could help solve climate change, pandemics, and other global problems | 02:48:51 | |
"You can’t charge what something is worth during a pandemic. So we estimated that the value of one course of COVID vaccine in January 2021 was over $5,000. They were selling for between $6 and $40. So nothing like their social value. Now, don’t get me wrong. I don’t think that they should have charged $5,000 or $6,000. That’s not ethical. It’s also not economically efficient, because they didn’t cost $5,000 at the marginal cost. So you actually want low price, getting out to lots of people. "But it shows you that the market is not going to reward people who do the investment in preparation for a pandemic — because when a pandemic hits, they’re not going to get the reward in line with the social value. They may even have to charge less than they would in a non-pandemic time. So prepping for a pandemic is not an efficient market strategy if I’m a firm, but it’s a very efficient strategy for society, and so we’ve got to bridge that gap." —Rachel Glennerster In today’s episode, host Luisa Rodriguez speaks to Rachel Glennerster — associate professor of economics at the University of Chicago and a pioneer in the field of development economics — about how her team’s new Market Shaping Accelerator aims to leverage market forces to drive innovations that can solve pressing world problems. Links to learn more, highlights, and full transcript. They cover:
Chapters:
Producer and editor: Keiran Harris | |||
01 Mar 2022 | Introducing 80k After Hours | 00:13:31 | |
Today we're launching a new podcast called 80k After Hours. Like this show it’ll mostly still explore the best ways to do good — and some episodes will be even more laser-focused on careers than most original episodes. But we’re also going to widen our scope, including things like how to solve pressing problems while also living a happy and fulfilling life, as well as releases that are just fun, entertaining or experimental. It’ll feature:
| |||
18 Apr 2024 | #185 – Lewis Bollard on the 7 most promising ways to end factory farming, and whether AI is going to be good or bad for animals | 02:33:12 | |
"The constraint right now on factory farming is how far can you push the biology of these animals? But AI could remove that constraint. It could say, 'Actually, we can push them further in these ways and these ways, and they still stay alive. And we’ve modelled out every possibility and we’ve found that it works.' I think another possibility, which I don’t understand as well, is that AI could lock in current moral values. And I think in particular there’s a risk that if AI is learning from what we do as humans today, the lesson it’s going to learn is that it’s OK to tolerate mass cruelty, so long as it occurs behind closed doors. I think there’s a risk that if it learns that, then it perpetuates that value, and perhaps slows human moral progress on this issue." —Lewis Bollard In today’s episode, host Luisa Rodriguez speaks to Lewis Bollard — director of the Farm Animal Welfare programme at Open Philanthropy — about the promising progress and future interventions to end the worst factory farming practices still around today. Links to learn more, highlights, and full transcript. They cover:
Chapters:
Producer and editor: Keiran Harris | |||
25 Feb 2025 | #139 Classic episode – Alan Hájek on puzzles and paradoxes in probability and expected value | 03:41:31 | |
A casino offers you a game. A coin will be tossed. If it comes up heads on the first flip you win $2. If it first comes up heads on the second flip you win $4, on the third you win $8, on the fourth you win $16, and so on. How much should you be willing to pay to play? The standard way of analysing gambling problems, ‘expected value’ — in which you multiply probabilities by the value of each outcome and then sum them up — says your expected earnings are infinite. You have a 50% chance of winning $2, for '0.5 * $2 = $1' in expected earnings. A 25% chance of winning $4, for '0.25 * $4 = $1' in expected earnings, and on and on. A never-ending series of $1s added together comes to infinity. And that's despite the fact that you know with certainty you can only ever win a finite amount! Today's guest — philosopher Alan Hájek of the Australian National University — thinks of much of philosophy as “the demolition of common sense followed by damage control” and is an expert on paradoxes related to probability and decision-making rules like “maximise expected value.” Rebroadcast: this episode was originally released in October 2022. Links to learn more, highlights, and full transcript. The problem described above, known as the St. Petersburg paradox, has been a staple of the field since the 18th century, with many proposed solutions. In the interview, Alan explains how very natural attempts to resolve the paradox — such as factoring in the low likelihood that the casino can pay out very large sums, or the fact that money becomes less and less valuable the more of it you already have — fail to work as hoped. We might reject the setup as a hypothetical that could never exist in the real world, and therefore of mere intellectual curiosity. But Alan doesn't find that objection persuasive. If expected value fails in extreme cases, that should make us worry that something could be rotten at the heart of the standard procedure we use to make decisions in government, business, and nonprofits. These issues regularly show up in 80,000 Hours' efforts to try to find the best ways to improve the world, as the best approach will arguably involve long-shot attempts to do very large amounts of good. Consider which is better: saving one life for sure, or three lives with 50% probability? Expected value says the second, which will probably strike you as reasonable enough. But what if we repeat this process and evaluate the chance to save nine lives with 25% probability, or 27 lives with 12.5% probability, or, after 17 more iterations, 3,486,784,401 lives with a roughly 0.0001% chance? Expected value says this final offer is better than all the others — more than 3,000 times better than the sure thing, in fact. Ultimately Alan leans towards the view that our best choice is to “bite the bullet” and stick with expected value, even with its sometimes counterintuitive implications. Where we want to do damage control, we're better off looking for ways our probability estimates might be wrong. In this conversation, originally released in October 2022, Alan and Rob explore these issues and many others:
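Both calculations above are easy to check numerically. Here's a minimal sketch; the truncation points for the St. Petersburg sum are arbitrary choices, made only to show that the partial sums grow without bound.

```python
# Truncated St. Petersburg sum: every term contributes 0.5**n * 2**n = $1 to the
# expected value, so the partial sum equals the number of flips considered and
# grows without bound as more flips are allowed.
def st_petersburg_ev(max_flips: int) -> float:
    return sum((0.5 ** n) * (2 ** n) for n in range(1, max_flips + 1))

for flips in (10, 100, 1_000):
    print(f"EV of the game truncated at {flips} flips: ${st_petersburg_ev(flips):,.0f}")

# The lives gamble: each step triples the lives saved and halves the probability,
# so the expected number of lives saved grows by a factor of 1.5 per step.
lives, prob = 1, 1.0
for step in range(21):
    if step in (0, 1, 2, 3, 20):
        print(f"step {step:2d}: {lives:>13,} lives at {prob:.7%} -> expected {lives * prob:,.1f} lives")
    lives *= 3
    prob /= 2
# Step 20 is 3,486,784,401 lives at about a 0.0000954% chance,
# for an expected ~3,325 lives saved -- over 3,000 times the sure thing.
```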
Chapters:
Producer: Keiran Harris | |||
22 Jun 2020 | #80 – Stuart Russell on why our approach to AI is broken and how to fix it | 02:13:17 | |
Stuart Russell, Professor at UC Berkeley and co-author of the most popular AI textbook, thinks the way we approach machine learning today is fundamentally flawed. In his new book, Human Compatible, he outlines the 'standard model' of AI development, in which intelligence is measured as the ability to achieve some definite, completely-known objective that we've stated explicitly. This is so obvious it almost doesn't even seem like a design choice, but it is. Unfortunately there's a big problem with this approach: it's incredibly hard to say exactly what you want. AI today lacks common sense, and simply does whatever we've asked it to. That's true even if the goal isn't what we really want, or the methods it's choosing are ones we would never accept. We already see AIs misbehaving for this reason. Stuart points to the example of YouTube's recommender algorithm, which reportedly nudged users towards extreme political views because that made it easier to keep them on the site. This isn't something we wanted, but it helped achieve the algorithm's objective: maximise viewing time. Like King Midas, who asked to be able to turn everything into gold but ended up unable to eat, we get too much of what we've asked for. Links to learn more, summary and full transcript. This 'alignment' problem will get more and more severe as machine learning is embedded in more and more places: recommending us news, operating power grids, deciding prison sentences, doing surgery, and fighting wars. If we're ever to hand over much of the economy to thinking machines, we can't count on ourselves correctly saying exactly what we want the AI to do every time. Stuart isn't just dissatisfied with the current model though, he has a specific solution. According to him we need to redesign AI around 3 principles: 1. The AI system's objective is to achieve what humans want. For instance, a machine built on these principles would be happy to be turned off if that's what its owner thought was best, while one built on the standard model should resist being turned off because being deactivated prevents it from achieving its goal. As Stuart says, "you can't fetch the coffee if you're dead." These principles lend themselves towards machines that are modest and cautious, and check in when they aren't confident they're truly achieving what we want. We've made progress toward putting these principles into practice, but the remaining engineering problems are substantial. Among other things, the resulting AIs need to be able to interpret what people really mean to say based on the context of a situation. And they need to guess when we've rejected an option because we've considered it and decided it's a bad idea, and when we simply haven't thought about it at all. Stuart thinks all of these problems are surmountable, if we put in the work. The harder problems may end up being social and political. When each of us can have an AI of our own — one smarter than any person — how do we resolve conflicts between people and their AI agents? And if AIs end up doing most work that people do today, how can humans avoid becoming enfeebled, like lazy children tended to by machines, but not intellectually developed enough to know what they really want? Chapters:
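To illustrate why a machine that is uncertain about human preferences would be 'happy to be turned off', here's a toy expected-utility comparison. It's loosely in the spirit of the off-switch arguments discussed in Human Compatible rather than a model taken from the book, and the robot's belief distribution below is made up for illustration.

```python
import random

random.seed(0)

# The robot is unsure how much the human actually values its proposed action:
# utility samples drawn from the robot's belief (made-up numbers for illustration).
belief_samples = [random.gauss(mu=0.5, sigma=1.0) for _ in range(100_000)]

# Option 1 (standard model): act immediately, never allowing itself to be switched off.
ev_act = sum(belief_samples) / len(belief_samples)

# Option 2 (uncertain objective): propose the action and defer to the human,
# who switches the robot off (utility 0) whenever the action would be bad for them.
ev_defer = sum(max(u, 0.0) for u in belief_samples) / len(belief_samples)

print(f"Expected utility if the robot just acts:           {ev_act:.3f}")
print(f"Expected utility if it defers and may be shut off: {ev_defer:.3f}")
# Deferring is never worse by the robot's own lights, and is strictly better
# whenever it thinks there's some chance the human would prefer it switched off --
# so it has no incentive to disable its off switch.
```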
Producer: Keiran Harris. | |||
26 Jul 2024 | #194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government | 03:04:18 | |
"If you’re a power that is an island and that goes by sea, then you’re more likely to do things like valuing freedom, being democratic, being pro-foreigner, being open-minded, being interested in trade. If you are on the Mongolian steppes, then your entire mindset is kill or be killed, conquer or be conquered … the breeding ground for basically everything that all of us consider to be dystopian governance. If you want more utopian governance and less dystopian governance, then find ways to basically change the landscape, to try to make the world look more like mountains and rivers and less like the Mongolian steppes." —Vitalik Buterin Can ‘effective accelerationists’ and AI ‘doomers’ agree on a common philosophy of technology? Common sense says no. But programmer and Ethereum cofounder Vitalik Buterin showed otherwise with his essay “My techno-optimism,” which both camps agreed was basically reasonable. Links to learn more, highlights, video, and full transcript. Seeing his social circle divided and fighting, Vitalik hoped to write a careful synthesis of the best ideas from both the optimists and the apprehensive. Accelerationists are right: most technologies leave us better off, the human cost of delaying further advances can be dreadful, and centralising control in government hands often ends disastrously. But the fearful are also right: some technologies are important exceptions, AGI has an unusually high chance of being one of those, and there are options to advance AI in safer directions. The upshot? Defensive acceleration: humanity should run boldly but also intelligently into the future — speeding up technology to get its benefits, but preferentially developing ‘defensive’ technologies that lower systemic risks, permit safe decentralisation of power, and help both individuals and countries defend themselves against aggression and domination. Entrepreneur First is running a defensive acceleration incubation programme with $250,000 of investment. If these ideas resonate with you, learn about the programme and apply by August 2, 2024. You don’t need a business idea yet — just the hustle to start a technology company. In addition to all of that, host Rob Wiblin and Vitalik discuss:
Chapters:
Producer and editor: Keiran Harris | |||
02 Jun 2023 | #153 – Elie Hassenfeld on 2 big picture critiques of GiveWell's approach, and 6 lessons from their recent work | 02:56:10 | |
GiveWell is one of the world's best-known charity evaluators, with the goal of "searching for the charities that save or improve lives the most per dollar." It mostly recommends projects that help the world's poorest people avoid easily prevented diseases, like intestinal worms or vitamin A deficiency. But should GiveWell, as some critics argue, take a totally different approach to its search, focusing instead on directly increasing subjective wellbeing, or alternatively, raising economic growth? Today's guest — cofounder and CEO of GiveWell, Elie Hassenfeld — is proud of how much GiveWell has grown in the last five years. Its 'money moved' has quadrupled to around $600 million a year. Its research team has also more than doubled, enabling them to investigate a far broader range of interventions that could plausibly help people an enormous amount for each dollar spent. That work has led GiveWell to support dozens of new organisations, such as Kangaroo Mother Care, MiracleFeet, and Dispensers for Safe Water. But some other researchers focused on figuring out the best ways to help the world's poorest people say GiveWell shouldn't just do more of the same thing, but rather ought to look at the problem differently. Links to learn more, summary and full transcript. Currently, GiveWell uses a range of metrics to track the impact of the organisations it considers recommending — such as 'lives saved,' 'household incomes doubled,' and for health improvements, the 'quality-adjusted life year.' The Happier Lives Institute (HLI) has argued that instead, GiveWell should try to cash out the impact of all interventions in terms of improvements in subjective wellbeing. This philosophy has led HLI to be more sceptical of interventions that have been demonstrated to improve health, but whose impact on wellbeing has not been measured, and to give a high priority to improving lives relative to extending them. An alternative high-level critique is that really all that matters in the long run is getting the economies of poor countries to grow. On this view, GiveWell should focus on figuring out what causes some countries to experience explosive economic growth while others fail to, or even go backwards. Even modest improvements in the chances of such a 'growth miracle' will likely offer a bigger bang-for-buck than funding the incremental delivery of deworming tablets or vitamin A supplements, or anything else. Elie sees where both of these critiques are coming from, and notes that they've influenced GiveWell's work in some ways. But as he explains, he thinks they underestimate the practical difficulty of successfully pulling off either approach and finding better opportunities than what GiveWell funds today. In today's in-depth conversation, Elie and host Rob Wiblin cover the above, as well as:
Chapters:
Producer: Keiran Harris Audio mastering: Simon Monsour and Ben Cordell Transcriptions: Katy Moore | |||
21 Jan 2021 | #90 – Ajeya Cotra on worldview diversification and how big the future could be | 02:59:05 | |
You wake up in a mysterious box, and hear the booming voice of God: “I just flipped a coin. If it came up heads, I made ten boxes, labeled 1 through 10 — each of which has a human in it. If it came up tails, I made ten billion boxes, labeled 1 through 10 billion — also with one human in each box. To get into heaven, you have to answer this correctly: Which way did the coin land?” You think briefly, and decide you should bet your eternal soul on tails. The fact that you woke up at all seems like pretty good evidence that you’re in the big world — if the coin landed tails, way more people should be having an experience just like yours. But then you get up, walk outside, and look at the number on your box. ‘3’. Huh. Now you don’t know what to believe. If God made 10 billion boxes, surely it's much more likely that you would have seen a number like 7,346,678,928? In today's interview, Ajeya Cotra — a senior research analyst at Open Philanthropy — explains why this thought experiment from the niche of philosophy known as 'anthropic reasoning' could be relevant for figuring out where we should direct our charitable giving. Links to learn more, summary and full transcript. Some thinkers both inside and outside Open Philanthropy believe that philanthropic giving should be guided by 'longtermism' — the idea that we can do the most good if we focus primarily on the impact our actions will have on the long-term future. Ajeya thinks that for that notion to make sense, there needs to be a good chance we can settle other planets and solar systems and build a society that's both very large relative to what's possible on Earth and, by virtue of being so spread out, able to protect itself from extinction for a very long time. But imagine that humanity has two possible futures ahead of it: Either we’re going to have a huge future like that, in which trillions of people ultimately exist, or we’re going to wipe ourselves out quite soon, thereby ensuring that only around 100 billion people ever get to live. If there are eventually going to be 1,000 trillion humans, what should we think of the fact that we seemingly find ourselves so early in history? Being among the first 100 billion humans, as we are, is equivalent to walking outside and seeing a three on your box. Suspicious! If the future will have many trillions of people, the odds of us appearing so strangely early are very low indeed. If we accept the analogy, maybe we can be confident that humanity is at a high risk of extinction based on this so-called 'doomsday argument' alone. If that’s true, maybe we should put more of our resources into avoiding apparent extinction threats like nuclear war and pandemics. But on the other hand, maybe the argument shows we're incredibly unlikely to achieve a long and stable future no matter what we do, and we should forget the long term and just focus on the here and now instead. There are many critics of this theoretical ‘doomsday argument’, and it may be the case that it logically doesn't work. This is why Ajeya spent time investigating it, with the goal of ultimately making better philanthropic grants. In this conversation, Ajeya and Rob discuss both the doomsday argument and the challenge Open Phil faces striking a balance between taking big ideas seriously, and not going all in on philosophical arguments that may turn out to be barking up the wrong tree entirely. They also discuss:
• Which worldviews Open Phil finds most plausible, and how it balances them
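For readers who want to see the arithmetic behind the boxes thought experiment at the top of this summary, here's a minimal sketch. It just mechanises Bayes' rule under two rival anthropic assumptions, roughly the 'self-sampling' and 'self-indication' approaches; which assumption is the right one is exactly the kind of contested question the episode explores, so the numbers below aren't a verdict on the doomsday argument.

```python
def normalise(weights: dict[str, float]) -> dict[str, float]:
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

N = {"heads": 10, "tails": 10_000_000_000}   # people created under each coin outcome
prior = {"heads": 0.5, "tails": 0.5}

# Probability that a randomly placed person in each world sees the label '3'.
p_label_3 = {world: 1 / N[world] for world in N}

# Self-sampling-style update: condition only on your label, no bonus for bigger worlds.
ssa = normalise({w: prior[w] * p_label_3[w] for w in N})

# Self-indication-style update: first weight worlds by how many people exist in them,
# then condition on the label. The two factors cancel, leaving you back at 50/50.
sia = normalise({w: prior[w] * N[w] * p_label_3[w] for w in N})

print(f"Self-sampling posterior:   {ssa}")   # heads ~0.999999999, tails ~1e-09
print(f"Self-indication posterior: {sia}")   # heads 0.5, tails 0.5
```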
Producer: Keiran Harris. | |||
12 Oct 2023 | #166 – Tantum Collins on what he’s learned as an AI policy insider at the White House, DeepMind and elsewhere | 03:08:49 | |
"If you and I and 100 other people were on the first ship that was going to go settle Mars, and were going to build a human civilisation, and we have to decide what that government looks like, and we have all of the technology available today, how do we think about choosing a subset of that design space? That space is huge and it includes absolutely awful things, and mixed-bag things, and maybe some things that almost everyone would agree are really wonderful, or at least an improvement on the way that things work today. But that raises all kinds of tricky questions. My concern is that if we don't approach the evolution of collective decision making and government in a deliberate way, we may end up inadvertently backing ourselves into a corner, where we have ended up on some slippery slope -- and all of a sudden we have, let's say, autocracies on the global stage are strengthened relative to democracies." — Tantum Collins In today’s episode, host Rob Wiblin gets the rare chance to interview someone with insider AI policy experience at the White House and DeepMind who’s willing to speak openly — Tantum Collins. Links to learn more, highlights, and full transcript. They cover:
Chapters:
Producer and editor: Keiran Harris | |||
14 Feb 2025 | #212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway | 02:44:07 | |
Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through. That’s how today’s guest Allan Dafoe — director of frontier safety and governance at Google DeepMind — explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don’t. Those who resist too much can find themselves taken over or rendered irrelevant. Links to learn more, highlights, video, and full transcript. This dynamic played out dramatically in 1853 when US Commodore Perry sailed into Tokyo Bay with steam-powered warships that seemed magical to the Japanese, who had spent centuries deliberately limiting their technological development. With far greater military power, the US was able to force Japan to open itself to trade. Within 15 years, Japan had undergone the Meiji Restoration and transformed itself in a desperate scramble to catch up. Today we see hints of similar pressure around artificial intelligence. Even companies, countries, and researchers deeply concerned about where AI could take us feel compelled to push ahead — worried that if they don’t, less careful actors will develop transformative AI capabilities at around the same time anyway. But Allan argues this technological determinism isn’t absolute. While broad patterns may be inevitable, history shows we do have some ability to steer how technologies are developed, by who, and what they’re used for first. As part of that approach, Allan has been promoting efforts to make AI more capable of sophisticated cooperation, and improving the tests Google uses to measure how well its models could do things like mislead people, hack and take control of their own servers, or spread autonomously in the wild. As of mid-2024 they didn’t seem dangerous at all, but we’ve learned that our ability to measure these capabilities is good, but imperfect. If we don’t find the right way to ‘elicit’ an ability we can miss that it’s there. Subsequent research from Anthropic and Redwood Research suggests there’s even a risk that future models may play dumb to avoid their goals being altered. That has led DeepMind to a “defence in depth” approach: carefully staged deployment starting with internal testing, then trusted external testers, then limited release, then watching how models are used in the real world. By not releasing model weights, DeepMind is able to back up and add additional safeguards if experience shows they’re necessary. But with much more powerful and general models on the way, individual company policies won’t be sufficient by themselves. Drawing on his academic research into how societies handle transformative technologies, Allan argues we need coordinated international governance that balances safety with our desire to get the massive potential benefits of AI in areas like healthcare and education as quickly as possible. Host Rob and Allan also cover:
Chapters:
Video editing: Simon Monsour | |||
19 Mar 2019 | #54 – OpenAI on publication norms, malicious uses of AI, and general-purpose learning algorithms | 02:53:40 | |
OpenAI’s Dactyl is an AI system that can manipulate objects with a human-like robot hand. OpenAI Five is an AI system that can defeat humans at the video game Dota 2. The strange thing is they were both developed using the same general-purpose reinforcement learning algorithm. How is this possible and what does it show? In today's interview Jack Clark, Policy Director at OpenAI, explains that from a computational perspective using a hand and playing Dota 2 are remarkably similar problems. A robot hand needs to hold an object, move its fingers, and rotate it to the desired position. In Dota 2 you control a team of several different characters, moving them around a map to attack an enemy. Your hand has 20 or 30 different joints to move. The number of main actions in Dota 2 is 10 to 20, as you move your characters around a map. When you’re rotating an object in your hand, you sense its friction, but you don’t directly perceive the entire shape of the object. In Dota 2, you're unable to see the entire map, and you perceive what's there by moving around – metaphorically 'touching' the space. Read our new in-depth article on becoming an AI policy specialist: The case for building expertise to work on US AI policy, and how to do it. Links to learn more, summary and full transcript. This is true of many apparently distinct problems in life. Compressing different sensory inputs down to a fundamental computational problem which we know how to solve only requires the right general-purpose software. The creation of such increasingly 'broad-spectrum' learning algorithms has been a key story of the last few years, and this development will likely have unpredictable consequences, heightening the huge challenges that already exist in AI policy. Today’s interview is a mega-AI-policy-quad episode; Jack is joined by his colleagues Amanda Askell and Miles Brundage, on the day they released their fascinating and controversial large general language model GPT-2. We discuss: • What are the most significant changes in the AI policy world over the last year or two? Rob is then joined by two of his colleagues – Niel Bowerman & Michelle Hutchinson – to quickly discuss: • The reaction to OpenAI's release of GPT-2 Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below. The 80,000 Hours Podcast is produced by Keiran Harris. | |||
23 Oct 2023 | #168 – Ian Morris on whether deep history says we're heading for an intelligence explosion | 02:43:55 | |
"If we carry on looking at these industrialised economies, not thinking about what it is they're actually doing and what the potential of this is, you can make an argument that, yes, rates of growth are slowing, the rate of innovation is slowing. But it isn't. What we're doing is creating wildly new technologies: basically producing what is nothing less than an evolutionary change in what it means to be a human being. But this has not yet spilled over into the kind of growth that we have accustomed ourselves to in the fossil-fuel industrial era. That is about to hit us in a big way." — Ian Morris In today’s episode, host Rob Wiblin speaks with repeat guest Ian Morris about what big-picture history says about the likely impact of machine intelligence. Links to learn more, summary and full transcript. They cover:
Chapters:
Producer and editor: Keiran Harris | |||
03 Apr 2023 | #148 – Johannes Ackva on unfashionable climate interventions that work, and fashionable ones that don't | 02:17:28 | |
If you want to work to tackle climate change, you should try to reduce expected carbon emissions by as much as possible, right? Strangely, no. Today's guest, Johannes Ackva — the climate research lead at Founders Pledge, where he advises major philanthropists on their giving — thinks the best strategy is actually pretty different, and one few are adopting. In reality you don't want to reduce emissions for its own sake, but because emissions will translate into temperature increases, which will cause harm to people and the environment. Links to learn more, summary and full transcript. Crucially, the relationship between emissions and harm goes up faster than linearly. As Johannes explains, humanity can handle small deviations from the temperatures we're familiar with, but adjustment gets harder the larger and faster the increase, making the damage done by each additional degree of warming much greater than the damage done by the previous one. In short: we're uncertain what the future holds and really need to avoid the worst-case scenarios. This means that avoiding an additional tonne of carbon being emitted in a hypothetical future in which emissions have been high is much more important than avoiding a tonne of carbon in a low-carbon world. That may be, but concretely, how should that affect our behaviour? Well, the future scenarios in which emissions are highest are all ones in which clean energy tech that can make a big difference — wind, solar, and electric cars — don't succeed nearly as much as we are currently hoping and expecting. For some reason or another, they must have hit a roadblock and we continued to burn a lot of fossil fuels. In such an imaginable future scenario, we can ask what we would wish we had funded now. How could we today buy insurance against the possible disaster that renewables don't work out? Basically, in that case we will wish that we had pursued a portfolio of other energy technologies that could have complemented renewables or succeeded where they failed, such as hot rock geothermal, modular nuclear reactors, or carbon capture and storage. If you're optimistic about renewables, as Johannes is, then that's all the more reason to relax about scenarios where they work as planned, and focus one's efforts on the possibility that they don't. And Johannes notes that the most useful thing someone can do today to reduce global emissions in the future is to cause some clean energy technology to exist where it otherwise wouldn't, or cause it to become cheaper more quickly. If you can do that, then you can indirectly affect the behaviour of people all around the world for decades or centuries to come. In today's extensive interview, host Rob Wiblin and Johannes discuss the above considerations, as well as: • Retooling newly built coal plants in the developing world Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below. Producer: Keiran Harris | |||
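One way to see the convexity point numerically is to take a toy damage function that grows faster than linearly with warming, and compare how much extra harm one additional degree does at different starting points. The quadratic below is purely illustrative, not a calibrated climate damage function.

```python
def damage(warming_degrees: float) -> float:
    """Toy convex damage index: harm grows with the square of warming (illustrative only)."""
    return warming_degrees ** 2

for start in (1.0, 2.0, 3.0, 4.0):
    marginal = damage(start + 1.0) - damage(start)
    print(f"Going from {start:.0f}°C to {start + 1:.0f}°C adds {marginal:.0f} units of damage")

# Output: 3, 5, 7, 9 -- each additional degree does more damage than the last,
# which is why avoiding a tonne of emissions matters most in high-emissions scenarios.
```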
28 Aug 2020 | Global issues beyond 80,000 Hours’ current priorities (Article) | 00:32:54 | |
Today’s release is the latest in our series of audio versions of our articles. In this one, we go through 30 global issues beyond the ones we usually prioritize most highly in our work, and that you might consider focusing your career on tackling. Although we spend the majority of our time at 80,000 Hours on our highest priority problem areas, and we recommend working on them to many of our readers, these are just the most promising issues among those we’ve spent time investigating. There are many other global issues that we haven’t properly investigated, and which might be very promising for more people to work on. In fact, we think working on some of the issues in this article could be as high-impact for some people as working on our priority problem areas — though we haven’t looked into them enough to be confident. If you want to check out the links in today’s article, you can find those here. Our annual user survey is also now open for submissions. Once a year for two weeks we ask all of you, our podcast listeners, article readers, advice receivers, and so on, to let us know how we’ve helped or hurt you. 80,000 Hours now offers many different services, and your feedback helps us figure out which programs to keep, which to cut, and which to expand. This year we have a new section covering the podcast, asking what kinds of episodes you liked the most and want to see more of, what extra resources you use, and some other questions too. We’re always especially interested to hear ways that our work has influenced what you plan to do with your life or career, whether that impact was positive, neutral, or negative. That might be a different focus in your existing job, or a decision to study something different or look for a new job. Alternatively, maybe you’re now planning to volunteer somewhere, or donate more, or donate to a different organisation. Your responses to the survey will be carefully read as part of our upcoming annual review, and we’ll use them to help decide what 80,000 Hours should do differently next year. So please do take a moment to fill out the user survey. You can find it at 80000hours.org/survey. Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript.
Producer: Keiran Harris. | |||
07 Feb 2025 | #124 Classic episode – Karen Levy on fads and misaligned incentives in global development, and scaling deworming to reach hundreds of millions | 03:10:21 | |
If someone said a global health and development programme was sustainable, participatory, and holistic, you'd have to guess that they were saying something positive. But according to today's guest Karen Levy — deworming pioneer and veteran of Innovations for Poverty Action, Evidence Action, and Y Combinator — each of those three concepts has become so fashionable that they're at risk of being seriously overrated and applied where they don't belong. Rebroadcast: this episode was originally released in March 2022. Links to learn more, highlights, and full transcript. Such concepts might even cause harm — trying to make a project embody all three is as likely to ruin it as help it flourish. First, what do people mean by 'sustainability'? Usually they mean something like the programme will eventually be able to continue without needing further financial support from the donor. But how is that possible? Governments, nonprofits, and aid agencies aim to provide health services, education, infrastructure, financial services, and so on — and all of these require ongoing funding to pay for materials and staff to keep them running. Given that someone needs to keep paying, Karen tells us that in practice, 'sustainability' is usually a euphemism for the programme at some point being passed on to someone else to fund — usually the national government. And while that can be fine, the national government of Kenya only spends $400 per person to provide each and every government service — just 2% of what the US spends on each resident. Incredibly tight budgets like that are typical of low-income countries. 'Participatory' also sounds nice, and inasmuch as it means leaders are accountable to the people they're trying to help, it probably is. But Karen tells us that in the field, ‘participatory’ usually means that recipients are expected to be involved in planning and delivering services themselves. While that might be suitable in some situations, it's hardly something people in rich countries always want for themselves. Ideally we want government healthcare and education to be high quality without us having to attend meetings to keep it on track — and people in poor countries have as many or more pressures on their time. While accountability is desirable, an expectation of participation can be as much a burden as a blessing. Finally, making a programme 'holistic' could be smart, but as Karen lays out, it also has some major downsides. For one, it means you're doing lots of things at once, which makes it hard to tell which parts of the project are making the biggest difference relative to their cost. For another, when you have a lot of goals at once, it's hard to tell whether you're making progress, or really put your mind to focusing on making one thing go extremely well. And finally, holistic programmes can be impractically expensive — Karen tells the story of a wonderful 'holistic school health' programme that, if continued, was going to cost 3.5 times the entire school's budget. In this in-depth conversation, originally released in March 2022, Karen Levy and host Rob Wiblin chat about the above, as well as:
Chapters:
Producer: Keiran Harris | |||
27 Dec 2021 | #59 Classic episode - Cass Sunstein on how change happens, and why it's so often abrupt & unpredictable | 01:43:05 | |
Rebroadcast: this episode was originally released in June 2019. It can often feel hopeless to be an activist seeking social change on an obscure issue where most people seem opposed or at best indifferent to you. But according to a new book by Professor Cass Sunstein, they shouldn't despair. Large social changes are often abrupt and unexpected, arising in an environment of seeming public opposition. The Communist Revolution in Russia spread so swiftly it confounded even Lenin. Seventy years later the Soviet Union collapsed just as quickly and unpredictably. In the modern era we have gay marriage, #metoo and the Arab Spring, as well as nativism, Euroskepticism and Hindu nationalism. How can a society that so recently seemed to support the status quo bring about change in years, months, or even weeks? Sunstein — co-author of Nudge, Obama White House official, and by far the most cited legal scholar of the late 2000s — aims to unravel the mystery and figure out the implications in his new book How Change Happens. He pulls together three phenomena which social scientists have studied in recent decades: preference falsification, variable thresholds for action, and group polarisation. If Sunstein is to be believed, together these are a cocktail for social shifts that are chaotic and fundamentally unpredictable. Links to learn more, summary and full transcript. In brief, people constantly misrepresent their true views, even to close friends and family. They themselves aren't quite sure how socially acceptable their feelings would have to become, before they revealed them, or joined a campaign for social change. And a chance meeting between a few strangers can be the spark that radicalises a handful of people, who then find a message that can spread their views to millions. According to Sunstein, it's "much, much easier" to create social change when large numbers of people secretly or latently agree with you. But 'preference falsification' is so pervasive that it's no simple matter to figure out when that's the case. In today's interview, we debate with Sunstein whether this model of cultural change is accurate, and if so, what lessons it has for those who would like to shift the world in a more humane direction. We discuss:
• How much people misrepresent their views in democratic countries. Get this episode by subscribing to our podcast on the world’s most pressing problems: type 80,000 Hours into your podcasting app. Or read the transcript on our site. The 80,000 Hours Podcast is produced by Keiran Harris. | |||
27 Feb 2019 | #53 - Kelsey Piper on the room for important advocacy within journalism | 02:34:31 | |
“Politics. Business. Opinion. Science. Sports. Animal welfare. Existential risk.” Is this a plausible future lineup for major news outlets? Funded by the Rockefeller Foundation and given very little editorial direction, Vox's Future Perfect aspires to be more or less that. Competition in the news business creates pressure to write quick pieces on topical political issues that can drive lots of clicks with just a few hours' work. But according to Kelsey Piper, staff writer for this new section of Vox's website focused on effective altruist themes, Future Perfect's goal is to run in the opposite direction and make room for more substantive coverage that's not tied to the news cycle. They hope that in the long-term talented writers from other outlets across the political spectrum can also be attracted to tackle these topics. Links to learn more, summary and full transcript. Links to Kelsey's top articles. Some skeptics of the project have questioned whether this general coverage of global catastrophic risks actually helps reduce them. Kelsey responds: if you decide to dedicate your life to AI safety research, what’s the likely reaction from your family and friends? Do they think of you as someone about to join "that weird Silicon Valley apocalypse thing"? Or do they, having read about the issues widely, simply think “Oh, yeah. That seems important. I'm glad you're working on it.” Kelsey believes that really matters, and is determined by broader coverage of these kinds of topics. If that's right, is journalism a plausible pathway for doing the most good with your career, or did Kelsey just get particularly lucky? After all, journalism is a shrinking industry without an obvious revenue model to fund many writers looking into the world's most pressing problems. Kelsey points out that one needn't take the risk of committing to journalism at an early age. Instead listeners can specialise in an important topic, while leaving open the option of switching into specialist journalism later on, should a great opportunity happen to present itself. In today’s episode we discuss that path, as well as:
• What’s the day to day life of a Vox journalist like? Rob is then joined by two of his colleagues – Keiran Harris & Michelle Hutchinson – to quickly discuss:
• The risk political polarisation poses to long-termist causes Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris. | |||
01 Aug 2024 | #195 – Sella Nevo on who's trying to steal frontier AI models, and what they could do with them | 02:08:29 | |
"Computational systems have literally millions of physical and conceptual components, and around 98% of them are embedded into your infrastructure without you ever having heard of them. And an inordinate amount of them can lead to a catastrophic failure of your security assumptions. And because of this, the Iranian secret nuclear programme failed to prevent a breach, most US agencies failed to prevent multiple breaches, most US national security agencies failed to prevent breaches. So ensuring your system is truly secure against highly resourced and dedicated attackers is really, really hard." —Sella Nevo In today’s episode, host Luisa Rodriguez speaks to Sella Nevo — director of the Meselson Center at RAND — about his team’s latest report on how to protect the model weights of frontier AI models from actors who might want to steal them. Links to learn more, highlights, and full transcript. They cover:
Also, RAND is currently hiring for roles in technical and policy information security — check them out if you're interested in this field! Chapters:
| |||
22 Oct 2021 | #114 – Maha Rehman on working with governments to rapidly deliver masks to millions of people | 01:42:55 | |
It’s hard to believe, but until recently there had never been a large field trial that addressed these simple and obvious questions: 1. When ordinary people wear face masks, does it actually reduce the spread of respiratory diseases? It turns out the first question is remarkably challenging to answer, but it's well worth doing nonetheless. Among other reasons, the first good trial of this prompted Maha Rehman — Policy Director at the Mahbub Ul Haq Research Centre — as well as a range of others to immediately use the findings to help tens of millions of people across South Asia, even before the results were public. Links to learn more, summary and full transcript. The groundbreaking Bangladesh RCT that inspired her to take action found that: • A 30% increase in mask wearing reduced total infections by 10%. The research was done by social scientists at Yale, Berkeley, and Stanford, among others. It applied a program they called ‘NORM’ in half of 600 villages in which about 350,000 people lived. NORM has four components, which the researchers expected would work well for the general public: N: no-cost distribution Basically you make sure a community has enough masks and you tell them why it’s important to wear them. You also reinforce the message periodically in markets and mosques, and via role models and promoters in the community itself. Tipped off that these positive findings were on the way, Maha took this program and rushed to put it into action in Lahore, Pakistan, a city with a population of about 13 million, before the Delta variant could sweep through the region. Maha had already been doing a lot of data work on COVID policy over the past year, and that allowed her to quickly reach out to the relevant stakeholders — getting them interested and excited. Governments aren’t exactly known for being super innovative, but in March and April Lahore was going through a very deadly third wave of COVID — so the commissioner quickly jumped on this approach, providing an endorsement as well as resources. Together with the original researchers, Maha and her team at LUMS collected baseline data that allowed them to map the mask-wearing rate in every part of Lahore, in both markets and mosques. And then based on that data, they adapted the original rural-focused model to a very different urban setting. The scale of this project was daunting, and in today’s episode Maha tells Rob all about the day-to-day experiences and stresses required to actually make it happen. They also discuss: • The challenges of data collection in this context Chapters:
Producer: Keiran Harris | |||
21 Feb 2024 | #180 – Hugo Mercier on why gullibility and misinformation are overrated | 02:36:55 | |
The World Economic Forum’s global risks survey of 1,400 experts, policymakers, and industry leaders ranked misinformation and disinformation as the number one global risk over the next two years — ahead of war, environmental problems, and other threats from AI. And the discussion around misinformation and disinformation has shifted to focus on how generative AI or a future super-persuasive AI might change the game and make it extremely hard to figure out what is going on in the world — or alternatively, extremely easy to mislead people into believing convenient lies. But this week’s guest, cognitive scientist Hugo Mercier, has a very different view on how people form beliefs and figure out who to trust — one in which misinformation really is barely a problem today, and is unlikely to be a problem anytime soon. As he explains in his book Not Born Yesterday, Hugo believes we seriously underrate the perceptiveness and judgement of ordinary people. Links to learn more, summary, and full transcript. In this interview, host Rob Wiblin and Hugo discuss:
Chapters:
Producer and editor: Keiran Harris | |||
22 Apr 2023 | Andrés Jiménez Zorrilla on the Shrimp Welfare Project (80k After Hours) | 01:17:28 | |
In this episode from our second show, 80k After Hours, Rob Wiblin interviews Andrés Jiménez Zorrilla about the Shrimp Welfare Project, which he cofounded in 2021. It's the first project in the world focused on shrimp welfare specifically, and as of recording in June 2022, has six full-time staff. Links to learn more, highlights and full transcript. They cover:
• The evidence for shrimp sentience Who this episode is for:
• People who care about animal welfare Who this episode isn’t for:
• People who think shrimp couldn’t possibly be sentient Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ‘80k After Hours’ into your podcasting app
Producer: Keiran Harris | |||
19 May 2021 | #100 – Having a successful career with depression, anxiety and imposter syndrome | 02:51:21 | |
Today's episode is one of the most remarkable and truly unique pieces of content we’ve ever produced (and I can say that because I had almost nothing to do with making it!). The producer of this show, Keiran Harris, interviewed our mutual colleague Howie about the major ways that mental illness has affected his life and career. While depression, anxiety, ADHD and other problems are extremely common, it's rare for people to offer detailed insight into their thoughts and struggles — and even rarer for someone as perceptive as Howie to do so. Links to learn more, summary and full transcript. The first half of this conversation is a searingly honest account of Howie’s story, including losing a job he loved due to a depressive episode, what it was like to be basically out of commission for over a year, how he got back on his feet, and the things he still finds difficult today. The second half covers Howie’s advice. Conventional wisdom on mental health can be really focused on cultivating willpower — telling depressed people that the virtuous thing to do is to start exercising, improve their diet, get their sleep in check, and generally fix all their problems before turning to therapy and medication as some sort of last resort. Howie tries his best to be a corrective to this misguided attitude, and to focus pragmatically on what actually matters — doing whatever will help you get better. Mental illness is one of the things that most often trips up people who could otherwise enjoy flourishing careers and have a large social impact, so we think this could plausibly be one of our more valuable episodes. Howie and Keiran basically treated it like a private conversation, with the understanding that it may be too sensitive to release. But, after getting some really positive feedback, they’ve decided to share it with the world. We hope that the episode will: 1. Help people realise that they have a shot at making a difference in the future, even if they’re experiencing (or have experienced in the past) mental illness, self-doubt, imposter syndrome, or other personal obstacles. 2. Give insight into what it's like in the head of one person with depression, anxiety, and imposter syndrome, including the specific thought patterns they experience on typical days and more extreme days. In addition to being interesting for its own sake, this might make it easier for people to understand the experiences of family members, friends, and colleagues — and know how to react more helpfully. So we think this episode will be valuable for:
• People who have experienced mental health problems or might in future; In other words, we think this episode could be worthwhile for almost everybody. Just a heads up that this conversation gets pretty intense at times, and includes references to self-harm and suicidal thoughts. If you don’t want to hear the most intense section, you can skip the chapter called ‘Disaster’ (44–57mins). And if you’d rather avoid almost all of these references, you could skip straight to the chapter called ‘80,000 Hours’ (1hr 11mins). If you're feeling suicidal or have thoughts of harming yourself right now, you can call the National Suicide Prevention Lifeline in the U.S. (800-273-8255) or Samaritans in the U.K. (116 123).
Producer: Keiran Harris. | |||
19 Mar 2020 | Emergency episode: Rob & Howie on the menace of COVID-19, and what both governments & individuals might do to help | 01:52:12 | |
From home isolation Rob and Howie just recorded an episode on: 1. How many could die in the crisis, and the risk to your health personally. We have rushed this episode out to share information as quickly as possible in a fast-moving situation. If you would prefer to read you can find the transcript here. We list a wide range of valuable resources and links in the blog post attached to the show (over 60, including links to projects you can join). See our 'problem profile' on global catastrophic biological risks for information on these grave threats and how you can contribute to preventing them. We have also just added a COVID-19 landing page on our site. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris. | |||
04 Sep 2024 | #200 – Ezra Karger on what superforecasters and experts think about existential risks | 02:49:24 | |
"It’s very hard to find examples where people say, 'I’m starting from this point. I’m starting from this belief.' So we wanted to make that very legible to people. We wanted to say, 'Experts think this; accurate forecasters think this.' They might both be wrong, but we can at least start from here and figure out where we’re coming into a discussion and say, 'I am much less concerned than the people in this report; or I am much more concerned, and I think people in this report were missing major things.' But if you don’t have a reference set of probabilities, I think it becomes much harder to talk about disagreement in policy debates in a space that’s so complicated like this." —Ezra Karger In today’s episode, host Luisa Rodriguez speaks to Ezra Karger — research director at the Forecasting Research Institute — about FRI’s recent Existential Risk Persuasion Tournament to come up with estimates of a range of catastrophic risks. Links to learn more, highlights, and full transcript. They cover:
Chapters:
Producer: Keiran Harris | |||
27 Nov 2024 | #209 – Rose Chan Loui on OpenAI’s gambit to ditch its nonprofit | 01:22:08 | |
One OpenAI critic calls it “the theft of at least the millennium and quite possibly all of human history.” Are they right? Back in 2015 OpenAI was but a humble nonprofit. That nonprofit started a for-profit, OpenAI LLC, but made sure to retain ownership and control. But that for-profit, having become a tech giant with vast staffing and investment, has grown tired of its shackles and wants to change the deal. Facing off against it stand eight out-gunned and out-numbered part-time volunteers. Can they hope to defend the nonprofit’s interests against the overwhelming profit motives arrayed against them? That’s the question host Rob Wiblin puts to nonprofit legal expert Rose Chan Loui of UCLA, who concludes that with a “heroic effort” and a little help from some friendly state attorneys general, they might just stand a chance. Links to learn more, highlights, video, and full transcript. As Rose lays out, on paper OpenAI is controlled by a nonprofit board that:
But that control is a problem for OpenAI the for-profit and its CEO Sam Altman — all the more so after the board concluded back in November 2023 that it couldn’t trust Altman and attempted to fire him (although those board members were ultimately ousted themselves after failing to adequately explain their rationale). Nonprofit control makes it harder to attract investors, who don’t want a board stepping in just because they think what the company is doing is bad for humanity. And OpenAI the business is thirsty for as many investors as possible, because it wants to beat competitors and train the first truly general AI — able to do every job humans currently do — which is expected to cost hundreds of billions of dollars. So, Rose explains, they plan to buy the nonprofit out. In exchange for giving up its windfall profits and the ability to fire the CEO or direct the company’s actions, the nonprofit will become a minority shareholder with reduced voting rights, and presumably transform into a normal grantmaking foundation instead. Is this a massive bait-and-switch? A case of the tail not only wagging the dog, but grabbing a scalpel and neutering it? OpenAI repeatedly committed to California, Delaware, the US federal government, founding staff, and the general public that its resources would be used for its charitable mission and it could be trusted because of nonprofit control. Meanwhile, the divergence in interests couldn’t be more stark: every dollar the for-profit keeps from its nonprofit parent is another dollar it could invest in AGI and ultimately return to investors and staff. Chapters:
Producer: Keiran Harris | |||
14 Jun 2022 | #132 – Nova DasSarma on why information security may be critical to the safe development of AI systems | 02:42:27 | |
If a business has spent $100 million developing a product, it's a fair bet that they don't want it stolen in two seconds and uploaded to the web where anyone can use it for free. This problem exists in extreme form for AI companies. These days, the electricity and equipment required to train cutting-edge machine learning models that generate uncanny human text and images can cost tens or hundreds of millions of dollars. But once trained, such models may be only a few gigabytes in size and run just fine on ordinary laptops. Today's guest, the computer scientist and polymath Nova DasSarma, works on computer and information security for the AI company Anthropic. One of her jobs is to stop hackers exfiltrating Anthropic's incredibly expensive intellectual property, as recently happened to Nvidia. As she explains, given models’ small size, the need to store such models on internet-connected servers, and the poor state of computer security in general, this is a serious challenge. Links to learn more, summary and full transcript. The worries aren't purely commercial though. This problem looms especially large for the growing number of people who expect that in coming decades we'll develop so-called artificial 'general' intelligence systems that can learn and apply a wide range of skills all at once, and thereby have a transformative effect on society. If aligned with the goals of their owners, such general AI models could operate like a team of super-skilled assistants, going out and doing whatever wonderful (or malicious) things are asked of them. This might represent a huge leap forward for humanity, though the transition to a very different new economy and power structure would have to be handled delicately. If unaligned with the goals of their owners or humanity as a whole, such broadly capable models would naturally 'go rogue,' breaking their way into additional computer systems to grab more computing power — all the better to pursue their goals and make sure they can't be shut off. As Nova explains, in either case, we don't want such models disseminated all over the world before we've confirmed they are deeply safe and law-abiding, and have figured out how to integrate them peacefully into society. In the first scenario, premature mass deployment would be risky and destabilising. In the second scenario, it could be catastrophic -- perhaps even leading to human extinction if such general AI systems turn out to be able to self-improve rapidly rather than slowly. If highly capable general AI systems are coming in the next 10 or 20 years, Nova may be flying below the radar with one of the most important jobs in the world. We'll soon need the ability to 'sandbox' (i.e. contain) models with a wide range of superhuman capabilities, including the ability to learn new skills, for a period of careful testing and limited deployment — preventing the model from breaking out, and criminals from breaking in. Nova and her colleagues are trying to figure out how to do this, but as this episode reveals, even the state of the art is nowhere near good enough. In today's conversation, Rob and Nova cover:
• How good or bad is information security today Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.
Producer: Keiran Harris | |||
03 Feb 2020 | Rob & Howie on what we do and don't know about 2019-nCoV | 01:18:44 | |
Two 80,000 Hours researchers, Robert Wiblin and Howie Lempel, record an experimental bonus episode about the new 2019-nCoV virus. See this list of resources, including many discussed in the episode, to learn more. In the 1h15m conversation we cover: • What is it? Here's a link to the hygiene advice from Laurie Garrett mentioned in the episode. Recorded 2 Feb 2020. The 80,000 Hours Podcast is produced by Keiran Harris. | |||
19 Sep 2024 | #202 – Venki Ramakrishnan on the cutting edge of anti-ageing science | 02:20:26 | |
"For every far-out idea that turns out to be true, there were probably hundreds that were simply crackpot ideas. In general, [science] advances building on the knowledge we have, and seeing what the next questions are, and then getting to the next stage and the next stage and so on. And occasionally there’ll be revolutionary ideas which will really completely change your view of science. And it is possible that some revolutionary breakthrough in our understanding will come about and we might crack this problem, but there’s no evidence for that. It doesn’t mean that there isn’t a lot of promising work going on. There are many legitimate areas which could lead to real improvements in health in old age. So I’m fairly balanced: I think there are promising areas, but there’s a lot of work to be done to see which area is going to be promising, and what the risks are, and how to make them work." —Venki Ramakrishnan In today’s episode, host Luisa Rodriguez speaks to Venki Ramakrishnan — molecular biologist and Nobel Prize winner — about his new book, Why We Die: The New Science of Aging and the Quest for Immortality. Links to learn more, highlights, and full transcript. They cover:
Chapters:
Producer: Keiran Harris | |||
01 Nov 2023 | #170 – Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down | 02:57:46 | |
"One [outrageous example of air pollution] is municipal waste burning that happens in many cities in the Global South. Basically, this is waste that gets collected from people's homes, and instead of being transported to a waste management facility or a landfill or something, gets burned at some point, because that's the fastest way to dispose of it — which really points to poor delivery of public services. But this is ubiquitous in virtually every small- or even medium-sized city. It happens in larger cities too, in this part of the world. "That's something that truly annoys me, because it feels like the kind of thing that ought to be fairly easily managed, but it happens a lot. It happens because people presumably don't think that it's particularly harmful. I don't think it saves a tonne of money for the municipal corporations and other local government that are meant to manage it. I find it particularly annoying simply because it happens so often; it's something that you're able to smell in so many different parts of these cities." — Santosh Harish In today’s episode, host Rob Wiblin interviews Santosh Harish — leader of Open Philanthropy’s grantmaking in South Asian air quality — about the scale of the harm caused by air pollution. Links to learn more, summary, and full transcript. They cover:
Chapters:
Producer and editor: Keiran Harris | |||
05 May 2023 | #150 – Tom Davidson on how quickly AI could transform the world | 03:01:59 | |
It’s easy to dismiss alarming AI-related predictions when you don’t know where the numbers came from. For example: what if we told you that within 15 years, it’s likely that we’ll see a 1,000x improvement in AI capabilities in a single year? And what if we then told you that those improvements would lead to explosive economic growth unlike anything humanity has seen before? You might think, “Congratulations, you said a big number — but this kind of stuff seems crazy, so I’m going to keep scrolling through Twitter.” But this 1,000x yearly improvement is a prediction based on *real economic models* created by today’s guest Tom Davidson, Senior Research Analyst at Open Philanthropy. By the end of the episode, you’ll either be able to point out specific flaws in his step-by-step reasoning, or have to at least consider the idea that the world is about to get — at a minimum — incredibly weird. Links to learn more, summary and full transcript. As a teaser, consider the following: Developing artificial general intelligence (AGI) — AI that can do 100% of cognitive tasks at least as well as the best humans can — could very easily lead us to an unrecognisable world. You might think having to train AI systems individually to do every conceivable cognitive task — one for diagnosing diseases, one for doing your taxes, one for teaching your kids, etc. — sounds implausible, or at least like it’ll take decades. But Tom thinks we might not need to train AI to do every single job — we might just need to train it to do one: AI research. And building AI capable of doing research and development might be a much easier task — especially given that the researchers training the AI are AI researchers themselves. And once an AI system is as good at accelerating future AI progress as the best humans are today — and we can run billions of copies of it round the clock — it’s hard to make the case that we won’t achieve AGI very quickly. To give you some perspective: 17 years ago we saw the launch of Twitter, the release of Al Gore's *An Inconvenient Truth*, and your first chance to play the Nintendo Wii. Tom thinks that if we have AI that significantly accelerates AI R&D, then it’s hard to imagine not having AGI 17 years from now. Wild. Host Luisa Rodriguez gets Tom to walk us through his careful reports on the topic, and how he came up with these numbers, across a terrifying but fascinating three hours. Luisa and Tom also discuss: • How we might go from GPT-4 to AI disaster Chapters:
| |||
01 Feb 2024 | #178 – Emily Oster on what the evidence actually says about pregnancy and parenting | 02:22:36 | |
"I think at various times — before you have the kid, after you have the kid — it's useful to sit down and think about: What do I want the shape of this to look like? What time do I want to be spending? Which hours? How do I want the weekends to look? The things that are going to shape the way your day-to-day goes, and the time you spend with your kids, and what you're doing in that time with your kids, and all of those things: you have an opportunity to deliberately plan them. And you can then feel like, 'I've thought about this, and this is a life that I want. This is a life that we're trying to craft for our family, for our kids.' And that is distinct from thinking you're doing a good job in every moment — which you can't achieve. But you can achieve, 'I'm doing this the way that I think works for my family.'" — Emily Oster In today’s episode, host Luisa Rodriguez speaks to Emily Oster — economist at Brown University, host of the ParentData podcast, and the author of three hugely popular books that provide evidence-based insights into pregnancy and early childhood. Links to learn more, summary, and full transcript. They cover:
Producer and editor: Keiran Harris | |||
23 May 2024 | #188 – Matt Clancy on whether science is good | 02:40:15 | |
"Suppose we make these grants, we do some of those experiments I talk about. We discover, for example — I’m just making this up — but we give people superforecasting tests when they’re doing peer review, and we find that you can identify people who are super good at picking science. And then we have this much better targeted science, and we’re making progress at a 10% faster rate than we normally would have. Over time, that aggregates up, and maybe after 10 years, we’re a year ahead of where we would have been if we hadn’t done this kind of stuff. "Now, suppose in 10 years we’re going to discover a cheap new genetic engineering technology that anyone can use in the world if they order the right parts off of Amazon. That could be great, but could also allow bad actors to genetically engineer pandemics and basically try to do terrible things with this technology. And if we’ve brought that forward, and that happens at year nine instead of year 10 because of some of these interventions we did, now we start to think that if that’s really bad, if these people using this technology causes huge problems for humanity, it begins to sort of wash out the benefits of getting the science a little bit faster." —Matt Clancy In today’s episode, host Luisa Rodriguez speaks to Matt Clancy — who oversees Open Philanthropy’s Innovation Policy programme — about his recent work modelling the risks and benefits of the increasing speed of scientific progress. Links to learn more, highlights, and full transcript. They cover:
Chapters:
Producer and editor: Keiran Harris | |||
12 Feb 2024 | #179 – Randy Nesse on why evolution left us so vulnerable to depression and anxiety | 02:56:48 | |
Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don’t see similar levels of physical ill health in young people. At any point in time, something like 20% of young people are working through anxiety or depression that’s seriously interfering with their lives — but nowhere near 20% of people in their 20s have severe heart disease or cancer or a similar failure in a key organ of the body other than the brain. From an evolutionary perspective, that’s to be expected, right? If your heart or lungs or legs or skin stop working properly while you’re a teenager, you’re less likely to reproduce, and the genes that cause that malfunction get weeded out of the gene pool. So why is it that these evolutionary selective pressures seemingly fixed our bodies so that they work pretty smoothly for young people most of the time, but it feels like evolution fell asleep on the job when it comes to the brain? Why did evolution never get around to patching the most basic problems, like social anxiety, panic attacks, debilitating pessimism, or inappropriate mood swings? For that matter, why did evolution go out of its way to give us the capacity for low mood or chronic anxiety or extreme mood swings at all? Today’s guest, Randy Nesse — a leader in the field of evolutionary psychiatry — wrote the book Good Reasons for Bad Feelings, in which he sets out to try to resolve this paradox. Links to learn more, video, highlights, and full transcript. In the interview, host Rob Wiblin and Randy discuss the key points of the book, as well as:
Producer and editor: Keiran Harris | |||
26 Mar 2021 | #95 – Kelly Wanser on whether to deliberately intervene in the climate | 01:24:08 | |
How long do you think it’ll be before we’re able to bend the weather to our will? A massive rainmaking program in China, efforts to seed new oases in the Arabian peninsula, or chemically induce snow for skiers in Colorado. 100 years? 50 years? 20? Those who know how to write a teaser hook for a podcast episode will have correctly guessed that all these things are already happening today. And the techniques being used could be turned to managing climate change as well. Today’s guest, Kelly Wanser, founded SilverLining — a nonprofit organization that advocates research into climate interventions, such as seeding or brightening clouds, to ensure that we maintain a safe climate. Links to learn more, summary and full transcript. Kelly says that current climate projections, even if we do everything right from here on out, imply that two degrees of global warming are now unavoidable. And the same scientists who made those projections fear the flow-through effect that warming could have. Since our best case scenario may already be too dangerous, SilverLining focuses on ways that we could intervene quickly in the climate if things get especially grim — their research serving as a kind of insurance policy. After considering everything from mirrors in space, to shiny objects on the ocean, to materials on the Arctic, their scientists concluded that the most promising approach was leveraging one of the ways that the Earth already regulates its temperature — the reflection of sunlight off particles and clouds in the atmosphere. Cloud brightening is a climate control approach that uses the spraying of a fine mist of sea water into clouds to make them 'whiter' so they reflect even more sunlight back into space. These ‘streaks’ in clouds are already created by ships because the particulates from their diesel engines inadvertently make clouds a bit brighter. Kelly says that scientists estimate that we're already lowering the global temperature this way by 0.5–1.1ºC, without even intending to. While fossil fuel particulates are terrible for human health, they think we could replicate this effect by simply spraying sea water up into clouds. But so far there hasn't been funding to measure how much temperature change you get for a given amount of spray. And we won't want to dive into these methods head first because the atmosphere is a complex system we can't yet properly model, and there are many things to check first. For instance, chemicals that reflect light from the upper atmosphere might totally change wind patterns in the stratosphere. Or they might not — for all the discussion of global warming the climate is surprisingly understudied. The public tends to be skeptical of climate interventions, otherwise known as geoengineering, so in this episode we cover a range of possible objections, such as: • It being riskier than doing nothing Kelly and Rob also talk about: • The many climate interventions that are already happening Chapters: Producer: Keiran Harris. | |||
10 Jul 2023 | #156 – Markus Anderljung on how to regulate cutting-edge AI models | 02:06:36 | |
"At the front of the pack we have these frontier AI developers, and we want them to identify particularly dangerous models ahead of time. Once those mines have been discovered, and the frontier developers keep walking down the minefield, there's going to be all these other people who follow along. And then a really important thing is to make sure that they don't step on the same mines. So you need to put a flag down -- not on the mine, but maybe next to it. And so what that looks like in practice is maybe once we find that if you train a model in such-and-such a way, then it can produce maybe biological weapons is a useful example, or maybe it has very offensive cyber capabilities that are difficult to defend against. In that case, we just need the regulation to be such that you can't develop those kinds of models." — Markus Anderljung In today’s episode, host Luisa Rodriguez interviews the Head of Policy at the Centre for the Governance of AI — Markus Anderljung — about all aspects of policy and governance of superhuman AI systems. Links to learn more, summary and full transcript. They cover:
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below. Producer: Keiran Harris Technical editing: Simon Monsour and Milo McGuire Transcriptions: Katy Moore | |||
05 Jul 2024 | #191 (Part 2) – Carl Shulman on government and society after AGI | 02:20:32 | |
This is the second part of our marathon interview with Carl Shulman. The first episode is on the economy and national security after AGI. You can listen to them in either order! If we develop artificial general intelligence that's reasonably aligned with human goals, it could put a fast and near-free superhuman advisor in everyone's pocket. How would that affect culture, government, and our ability to act sensibly and coordinate together? It's common to worry that AI advances will lead to a proliferation of misinformation and further disconnect us from reality. But in today's conversation, AI expert Carl Shulman argues that this underrates the powerful positive applications the technology could have in the public sphere. Links to learn more, highlights, and full transcript. As Carl explains, today the most important questions we face as a society remain in the "realm of subjective judgement" -- without any "robust, well-founded scientific consensus on how to answer them." But if AI 'evals' and interpretability advance to the point that it's possible to demonstrate which AI models have truly superhuman judgement and give consistently trustworthy advice, society could converge on firm or 'best-guess' answers to far more cases. If the answers are publicly visible and confirmable by all, the pressure on officials to act on that advice could be great. That's because when it's hard to assess if a line has been crossed or not, we usually give people much more discretion. For instance, a journalist inventing an interview that never happened will get fired because it's an unambiguous violation of honesty norms — but so long as there's no universally agreed-upon standard for selective reporting, that same journalist will have substantial discretion to report information that favours their preferred view more often than that which contradicts it. Similarly, today we have no generally agreed-upon way to tell when a decision-maker has behaved irresponsibly. But if experience clearly shows that following AI advice is the wise move, not seeking or ignoring such advice could become more like crossing a red line — less like making an understandable mistake and more like fabricating your balance sheet. To illustrate the possible impact, Carl imagines how the COVID pandemic could have played out in the presence of AI advisors that everyone agrees are exceedingly insightful and reliable. But in practice, a significantly superhuman AI might suggest novel approaches better than any we can imagine. In the past we've usually found it easier to predict how hard technologies like planes or factories will change the world than to imagine the social shifts that those technologies will create — and the same is likely happening for AI. Carl Shulman and host Rob Wiblin discuss the above, as well as:
Chapters:
Producer and editor: Keiran Harris Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong Transcriptions: Katy Moore | |||
13 Feb 2020 | #70 - Dr Cassidy Nelson on the 12 best ways to stop the next pandemic (and limit nCoV) | 02:26:33 | |
nCoV is alarming governments and citizens around the world. It has killed more than 1,000 people, brought the Chinese economy to a standstill, and continues to show up in more and more places. But bad though it is, it's much closer to a warning shot than a worst-case scenario. The next emerging infectious disease could easily be more contagious, more fatal, or both. Despite improvements in the last few decades, humanity is still not nearly prepared enough to contain new diseases. We identify them too slowly. We can't do enough to reduce their spread. And we lack vaccines or drug treatments for at least a year, if they ever arrive at all. • Links to learn more, summary and full transcript. This is a precarious situation, especially with advances in biotechnology increasing our ability to modify viruses and bacteria as we like. In today's episode, Cassidy Nelson, a medical doctor and research scholar at Oxford University's Future of Humanity Institute, explains 12 things her research group think urgently need to happen if we're to keep the risk at acceptable levels. The ideas are: Science 1. Roll out genetic sequencing tests that let you test someone for all known and unknown pathogens in one go. Response 4. Develop a national plan for responding to a severe pandemic, regardless of the cause. Have a backup plan for when things are so bad the normal processes have stopped working entirely. Oversight 9. Mandate disclosure of accidents in the biosafety labs which handle the most dangerous pathogens. These advances can be pursued by politicians and public servants, as well as academics, entrepreneurs and doctors, opening the door for many listeners to pitch in to help solve this incredibly pressing problem. In the episode Rob and Cassidy also talk about: • How Cassidy went from clinical medicine to a PhD studying novel pathogens with pandemic potential. Chapters:
Producer: Keiran Harris. | |||
18 Oct 2023 | #167 – Seren Kell on the research gaps holding back alternative proteins from mass adoption | 01:54:49 | |
"There have been literally thousands of years of breeding and living with animals to optimise these kinds of problems. But because we're just so early on with alternative proteins and there's so much white space, it's actually just really exciting to know that we can keep on innovating and being far more efficient than this existing technology — which, fundamentally, is just quite inefficient. You're feeding animals a bunch of food to then extract a small fraction of their biomass to then eat that. Animal agriculture takes up 83% of farmland, but produces just 18% of food calories. So the current system just is so wasteful. And the limiting factor is that you're just growing a bunch of food to then feed a third of the world's crops directly to animals, where the vast majority of those calories going in are lost to animals existing." — Seren Kell Links to learn more, summary and full transcript. In today’s episode, host Luisa Rodriguez interviews Seren Kell — Senior Science and Technology Manager at the Good Food Institute Europe — about making alternative proteins as tasty, cheap, and convenient as traditional meat, dairy, and egg products. They cover:
Chapters:
Producer and editor: Keiran Harris | |||
18 Oct 2021 | #113 – Varsha Venugopal on using gossip to help vaccinate every child in India | 02:05:44 | |
Our failure to make sure all kids globally get all of their basic vaccinations leads to 1.5 million child deaths every year. According to today’s guest, Varsha Venugopal, for the great majority this has nothing to do with weird conspiracy theories or medical worries — in India 80% of undervaccinated children are already getting some shots. They just aren't getting all of them, for the tragically mundane reason that life can get in the way. Links to learn more, summary and full transcript. As Varsha says, we're all sometimes guilty of "valuing our present very differently from the way we value the future", leading to short-term thinking whether about getting vaccines or going to the gym. So who should we call on to help fix this universal problem? The government, extended family, or maybe village elders? Varsha says that research shows the most influential figures might actually be local gossips. In 2018, Varsha heard about the ideas around effective altruism for the first time. By the end of 2019, she’d gone through Charity Entrepreneurship’s strategy incubation program, and quit her normal, stable job to co-found Suvita, a non-profit focused on improving the uptake of immunization in India, which runs two models: The first one is intuitive. You collect birth registers, digitize the paper records, process the data, and send out personalised SMS messages to hundreds of thousands of families. The effect size varies depending on the context, but these messages usually increase vaccination rates by 8-18%. The second approach is less intuitive and isn't yet entirely understood either. Here’s what happens: Suvita calls up random households and asks, “if there were an event in town, who would be most likely to tell you about it?” In over 90% of the cases, the households gave both the name and the phone number of a local ‘influencer’. And when tracked down, more than 95% of the most frequently named 'influencers' agreed to become vaccination ambassadors. Those ambassadors then go on to share information about when and where to get vaccinations, in whatever way seems best to them. When tested by a team of top academics at the Poverty Action Lab (J-PAL), the approach raised vaccination rates by 10 percentage points, or about 27% (a quick arithmetic sketch of that conversion follows these notes). The advantage of SMS reminders is that they’re easier to scale up. But Varsha says the ambassador program isn’t actually that far from being a scalable model as well. A phone call to get a name, another call to ask the influencer to join, and boom — you might have just covered a whole village rather than just a single family. Varsha says that Suvita has two major challenges on the horizon: In this episode, Varsha and Rob talk about making these kinds of high-stakes, high-stress decisions, as well as: Chapters:
Producer: Keiran Harris | |||
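As flagged in the episode notes above, here is the quick arithmetic relating the two ways the J-PAL result is described. The implied ~37% baseline is inferred from the two published figures rather than stated directly, so treat it as a back-of-the-envelope check rather than the study's exact numbers.

```python
# Back-of-the-envelope: a 10-percentage-point rise that is also described as
# "about 27%" in relative terms implies a baseline vaccination rate near 37%.
absolute_gain_pp = 10    # percentage points, from the episode notes
relative_gain = 0.27     # ~27% relative increase, from the episode notes

implied_baseline = absolute_gain_pp / relative_gain    # ≈ 37 percentage points
new_rate = implied_baseline + absolute_gain_pp         # ≈ 47 percentage points
print(round(implied_baseline, 1), round(new_rate, 1))  # 37.0 47.0
```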
29 Dec 2022 | #143 – Jeffrey Lewis on the most common misconceptions about nuclear weapons | 02:40:17 | |
America aims to avoid nuclear war by relying on the principle of 'mutually assured destruction,' right? Wrong. Or at least... not officially. As today's guest — Jeffrey Lewis, founder of Arms Control Wonk and professor at the Middlebury Institute of International Studies — explains, in its official 'OPLANs' (military operation plans), the US is committed to 'dominating' in a nuclear war with Russia. How would they do that? "That is redacted." Links to learn more, summary and full transcript. We invited Jeffrey to come on the show to lay out what we and our listeners are most likely to be misunderstanding about nuclear weapons, the nuclear posture of major powers, and his field as a whole, and he did not disappoint. As Jeffrey tells it, 'mutually assured destruction' was a slur used to criticise those who wanted to limit the 1960s arms buildup, and was never accepted as a matter of policy in any US administration. But isn't it still the de facto reality? Yes and no. Jeffrey is a specialist on the nuts and bolts of bureaucratic and military decision-making in real-life situations. He suspects that at the start of their term presidents get a briefing about the US' plan to prevail in a nuclear war and conclude that "it's freaking madness." They say to themselves that whatever these silly plans may say, they know a nuclear war cannot be won, so they just won't use the weapons. But Jeffrey thinks that's a big mistake. Yes, in a calm moment presidents can resist pressure from advisors and generals. But that idea of ‘winning’ a nuclear war is in all the plans. Staff have been hired because they believe in those plans. It's what the generals and admirals have all prepared for. What matters is the 'not calm moment': the 3AM phone call to tell the president that ICBMs might hit the US in eight minutes — the same week Russia invades a neighbour or China invades Taiwan. Is it a false alarm? Should they retaliate before their land-based missile silos are hit? There's only minutes to decide. Jeffrey points out that in emergencies, presidents have repeatedly found themselves railroaded into actions they didn't want to take because of how information and options were processed and presented to them. In the heat of the moment, it's natural to reach for the plan you've prepared — however mad it might sound. In this spicy conversation, Jeffrey fields the most burning questions from Rob and the audience, in the process explaining: Chapters:
Producer: Keiran Harris | |||
28 Jul 2021 | #106 – Cal Newport on an industrial revolution for office work | 01:53:27 | |
If you wanted to start a university department from scratch, and attract as many superstar researchers as possible, what’s the most attractive perk you could offer? How about just not needing an email address. According to today's guest, Cal Newport — computer science professor and best-selling author of A World Without Email — it should seem obscene and absurd for a world-renowned vaccine researcher with decades of experience to spend a third of their time fielding requests from HR, building management, finance, and so on. Yet with offices organised the way they are today, nothing could be more natural. Links to learn more, summary and full transcript. But this isn’t just a problem at the elite level — this affects almost all of us. A typical U.S. office worker checks their email 80 times a day, once every six minutes on average. Data analysis by RescueTime found that a third of users checked email or Slack every three minutes or more, averaged over a full work day. Each time that happens our focus is broken, killing our momentum on the knowledge work we're supposedly paid to do. When we lament how much email and chat have reduced our focus and filled our days with anxiety and frenetic activity, we most naturally blame 'weakness of will'. If only we had the discipline to check Slack and email once a day, all would be well — or so the story goes. Cal believes that line of thinking fundamentally misunderstands how we got to a place where knowledge workers can rarely find more than five consecutive minutes to spend doing just one thing. Since the Industrial Revolution, a combination of technology and better organization has allowed the manufacturing industry to produce a hundred-fold as much with the same number of people. Cal says that by comparison, it's not clear that specialised knowledge workers like scientists, authors, or senior managers are *any* more productive than they were 50 years ago. If the knowledge sector could achieve even a tiny fraction of what manufacturing has, and find a way to coordinate its work that raised productivity by just 1%, that would generate on the order of $100 billion globally each year (a back-of-the-envelope sketch of this figure follows these notes). Since the 1990s, when everyone got an email address and most lost their assistants, the lack of any deliberate way of organising work has led to what Cal calls the 'hyperactive hive mind': everyone sends emails and chats to everyone else, all through the day, whenever they need something. Cal points out that this is so normal we don't even think of it as a way of organising work, but it is: it's what happens when management does nothing to enable teams to decide on a better way of organising themselves. A few industries have made progress taming the 'hyperactive hive mind'. But on Cal's telling, this barely scratches the surface of the improvements that are possible within knowledge work. And reining in the hyperactive hive mind won't just help people do higher-quality work; it will also free them from the 24/7 anxiety that there's someone somewhere they haven't gotten back to. In this interview Cal and Rob also cover: Chapters:
Producer: Keiran Harris | |||
07 Aug 2023 | #159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less | 02:51:20 | |
In July, OpenAI announced a new team and project: Superalignment. The goal is to figure out how to make superintelligent AI systems aligned and safe to use within four years, and the lab is putting a massive 20% of its computational resources behind the effort. Today's guest, Jan Leike, is Head of Alignment at OpenAI and will be co-leading the project. As OpenAI puts it, "...the vast power of superintelligence could be very dangerous, and lead to the disempowerment of humanity or even human extinction. ... Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue." Links to learn more, summary and full transcript. Given that OpenAI is in the business of developing superintelligent AI, it sees that as a scary problem that urgently has to be fixed. So it’s not just throwing compute at the problem -- it’s also hiring dozens of scientists and engineers to build out the Superalignment team. Plenty of people are pessimistic that this can be done at all, let alone in four years. But Jan is guardedly optimistic. As he explains: Honestly, it really feels like we have a real angle of attack on the problem that we can actually iterate on... and I think it's pretty likely going to work, actually. And that's really, really wild, and it's really exciting. It's like we have this hard problem that we've been talking about for years and years and years, and now we have a real shot at actually solving it. And that'd be so good if we did.
The plan, in a nutshell, is to get AI to help us solve alignment. That might sound a bit crazy -- as one person described it, “like using one fire to put out another fire.” But Jan’s thinking is this: the core problem is that AI capabilities will keep getting better and the challenge of monitoring cutting-edge models will keep getting harder, while human intelligence stays more or less the same. To have any hope of ensuring safety, we need our ability to monitor, understand, and design ML models to advance at the same pace as the complexity of the models themselves. And there's an obvious way to do that: get AI to do most of the work, such that the sophistication of the AIs that need aligning, and the sophistication of the AIs doing the aligning, advance in lockstep. Jan doesn't want to produce machine learning models capable of doing ML research. But such models are coming, whether we like it or not. And at that point Jan wants to make sure we turn them towards useful alignment and safety work, as much or more than we use them to advance AI capabilities. Jan thinks it's so crazy it just might work. But some critics think it's simply crazy. They ask a wide range of difficult questions, including:
Producer and editor: Keiran Harris | |||
24 Jan 2020 | #68 - Will MacAskill on the paralysis argument, whether we're at the hinge of history, & his new priorities | 03:25:36 | |
You’re given a box with a set of dice in it. If you roll an even number, a person's life is saved. If you roll an odd number, someone else will die. Each time you shake the box you get $10. Should you do it? A committed consequentialist might say, "Sure! Free money!" But most will think it obvious that you should say no. You've only gotten a tiny benefit, in exchange for moral responsibility over whether other people live or die. And yet, according to today’s return guest, philosophy Prof Will MacAskill, in a real sense we’re shaking this box every time we leave the house, and those who think shaking the box is wrong should probably also be shutting themselves indoors and minimising their interactions with others. • Links to learn more, summary and full transcript. To see this, imagine you’re deciding whether to redeem a coupon for a free movie. If you go, you’ll need to drive to the cinema. By affecting traffic throughout the city, you’ll have slightly impacted the schedules of thousands or tens of thousands of people. The average life is about 30,000 days, and over the course of a life the average person will have about two children. So — if you’ve impacted at least 7,500 days — then, statistically speaking, you've probably influenced the exact timing of a conception event (a rough sketch of this arithmetic follows after this entry). With 200 million sperm in the running each time, changing the moment of copulation, even by a fraction of a second, will almost certainly mean you've changed the identity of a future person. That different child will now impact all sorts of things as they go about their life, including future conception events. And then those new people will impact further future conception events, and so on. After 100 or maybe 200 years, basically everybody alive will be a different person because you went to the movies. As a result, you’ll have changed when many people die. Take car crashes as one example: about 1.3% of people die in car crashes. Over that century, as the identities of everyone change as a result of your action, many of the 'new' people will cause car crashes that wouldn't have occurred in their absence, including crashes that prematurely kill people alive today. Of course, in expectation, exactly the same number of people will have been saved from car crashes, and will die later than they would have otherwise. So, if you go for this drive, you’ll save hundreds of people from premature death, and cause the early death of an equal number of others. But you’ll get to see a free movie, worth $10. Should you do it? This setup forms the basis of ‘the paralysis argument’, explored in one of Will’s recent papers. Because most 'non-consequentialists' endorse an act/omission distinction… post truncated due to character limit, finish reading the full explanation here. So what's the best way to fix this strange conclusion? We discuss a few options, but the most promising might bring people a lot closer to full consequentialism than is immediately apparent. In this episode Will and I also cover: • Are we, or are we not, living in the most influential time in history? Chapters:
Producer: Keiran Harris. | |||
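To make the identity-affecting arithmetic above concrete, here is a rough back-of-the-envelope sketch. It reuses only the figures quoted in the episode notes (about 30,000 days per life and roughly two children per person); the specific person-day totals and the Poisson approximation are illustrative assumptions rather than anything from the episode.

```python
import math

# Figures quoted in the episode notes above (rough, for illustration only)
DAYS_PER_LIFE = 30_000   # approximate length of an average life in days
CHILDREN_PER_LIFE = 2    # approximate number of children per person

def expected_conceptions_affected(person_days_affected: float) -> float:
    """Expected number of conception events whose exact timing you perturb,
    assuming conceptions are spread evenly across people's lives."""
    return person_days_affected * CHILDREN_PER_LIFE / DAYS_PER_LIFE

def prob_at_least_one(expected: float) -> float:
    """Poisson approximation to the chance of re-timing at least one conception."""
    return 1 - math.exp(-expected)

# Hypothetical totals of person-days whose schedule your drive perturbs
for days in (7_500, 15_000, 30_000):
    mu = expected_conceptions_affected(days)
    print(f"{days:>6} person-days -> {mu:.1f} expected conceptions, "
          f"P(at least one) ~ {prob_at_least_one(mu):.0%}")
```

With the 'thousands or tens of thousands' of schedules mentioned above each nudged slightly, the expected count quickly passes one, which is the sense in which a single trip to the cinema plausibly re-times at least one conception and, with it, the identity of a future person.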
15 Aug 2024 | #196 – Jonathan Birch on the edge cases of sentience and why they matter | 02:01:50 | |
"In the 1980s, it was still apparently common to perform surgery on newborn babies without anaesthetic on both sides of the Atlantic. This led to appalling cases, and to public outcry, and to campaigns to change clinical practice. And as soon as [some courageous scientists] looked for evidence, it showed that this practice was completely indefensible and then the clinical practice was changed. People don’t need convincing anymore that we should take newborn human babies seriously as sentience candidates. But the tale is a useful cautionary tale, because it shows you how deep that overconfidence can run and how problematic it can be. It just underlines this point that overconfidence about sentience is everywhere and is dangerous." —Jonathan Birch In today’s episode, host Luisa Rodriguez speaks to Dr Jonathan Birch — philosophy professor at the London School of Economics — about his new book, The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. (Check out the free PDF version!) Links to learn more, highlights, and full transcript. They cover:
Chapters:
Producer and editor: Keiran Harris | |||
04 Apr 2025 | #214 – Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway | 02:16:03 | |
Most AI safety conversations centre on alignment: ensuring AI systems share our values and goals. But despite progress, we’re unlikely to know we’ve solved the problem before the arrival of human-level and superhuman systems in as little as three years. So some are developing a backup plan to safely deploy models we fear are actively scheming to harm us — so-called “AI control.” While this may sound mad, given the reluctance of AI companies to delay deploying anything they train, not developing such techniques is probably even crazier. Today’s guest — Buck Shlegeris, CEO of Redwood Research — has spent the last few years developing control mechanisms, and for human-level systems they’re more plausible than you might think. He argues that given companies’ unwillingness to incur large costs for security, accepting the possibility of misalignment and designing robust safeguards might be one of our best remaining options. Links to learn more, highlights, video, and full transcript. As Buck puts it: "Five years ago I thought of misalignment risk from AIs as a really hard problem that you’d need some really galaxy-brained fundamental insights to resolve. Whereas now, to me the situation feels a lot more like we just really know a list of 40 things where, if you did them — none of which seem that hard — you’d probably be able to not have very much of your problem." Of course, even if Buck is right, we still need to do those 40 things — which he points out we’re not on track for. And AI control agendas have their limitations: they aren’t likely to work once AI systems are much more capable than humans, since greatly superhuman AIs can probably work around whatever limitations we impose. Still, AI control agendas seem to be gaining traction within AI safety. Buck and host Rob Wiblin discuss all of the above, plus:
Chapters:
This episode was originally recorded on February 21, 2025. Video: Simon Monsour and Luke Monsour | |||
13 Aug 2020 | #84 – Shruti Rajagopalan on what India did to stop COVID-19 and how well it worked | 02:58:14 | |
When COVID-19 struck the US, everyone was told that hand sanitizer needed to be saved for healthcare professionals, so they should just wash their hands instead. But in India, many homes lack reliable piped water, so they had to do the opposite: distribute hand sanitizer as widely as possible. American advocates for banning single-use plastic straws might be outraged at the widespread adoption of single-use hand sanitizer sachets in India. But the US and India are very different places, and it might be the only way out when you're facing a pandemic without running water. According to today’s guest, Shruti Rajagopalan, Senior Research Fellow at the Mercatus Center at George Mason University, that's typical and context is key to policy-making. This prompted Shruti to propose a set of policy responses designed for India specifically back in April. Unfortunately she thinks it's surprisingly hard to know what one should and shouldn't imitate from overseas. Links to learn more, summary and full transcript. For instance, some places in India installed shared handwashing stations in bus stops and train stations, which is something no developed country would advise. But in India, you can't necessarily wash your hands at home — so shared faucets might be the lesser of two evils. (Though note scientists have downgraded the importance of hand hygiene lately.) Stay-at-home orders offer a more serious example. Developing countries find themselves in a serious bind that rich countries do not. With nearly no slack in healthcare capacity, India lacks equipment to treat even a small number of COVID-19 patients. That suggests strict controls on movement and economic activity might be necessary to control the pandemic. But many people in India and elsewhere can't afford to shelter in place for weeks, let alone months. And governments in poorer countries may not be able to afford to send everyone money — even where they have the infrastructure to do so fast enough. India ultimately did impose strict lockdowns, lasting almost 70 days, but the human toll has been larger than in rich countries, with vast numbers of migrant workers stranded far from home with limited if any income support. There were no trains or buses, and the government made no provision to deal with the situation. Unable to afford rent where they were, many people had to walk hundreds of kilometers to reach home, carrying children and belongings with them. But in some other ways the context of developing countries is more promising. In the US many people melted down when asked to wear facemasks. But in South Asia, people just wore them. Shruti isn’t sure whether that's because of existing challenges with high pollution, past experiences with pandemics, or because intergenerational living makes the wellbeing of others more salient, but the end result is that masks weren’t politicised in the way they were in the US. In addition, despite the suffering caused by India's policy response to COVID-19, public support for the measures and the government remains high — and India's population is much younger and so less affected by the virus. In this episode, Howie and Shruti explore the unique policy challenges facing India in its battle with COVID-19, what they've tried to do, and how it has gone. They also cover: • What an economist can bring to the table during a pandemic Chapters: Producer: Keiran Harris. | |||
15 Apr 2020 | Article: Reducing global catastrophic biological risks | 01:04:15 | |
In a few days we'll be putting out a conversation with Dr Greg Lewis, who studies how to prevent global catastrophic biological risks at Oxford's Future of Humanity Institute. Greg also wrote a new problem profile on that topic for our website, and reading that is a good lead-in to our interview with him. So in a bit of an experiment we decided to make this audio version of that article, narrated by the producer of the 80,000 Hours Podcast, Keiran Harris. We’re thinking about having audio versions of other important articles we write, so it’d be great if you could let us know if you’d like more of these. You can email us your view at podcast@80000hours.org. If you want to check out all of Greg’s graphs and footnotes that we didn’t include, and get links to learn more about GCBRs - you can find those here. And if you want to read more about COVID-19, the 80,000 Hours team has produced a fantastic package of 10 pieces about how to stop the pandemic. You can find those here. | |||
10 Feb 2025 | AGI disagreements and misconceptions: Rob, Luisa, & past guests hash it out | 03:12:24 | |
Will LLMs soon be made into autonomous agents? Will they lead to job losses? Is AI misinformation overblown? Will it prove easy or hard to create AGI? And how likely is it that it will feel like something to be a superhuman AGI? With AGI back in the headlines, we bring you 15 opinionated highlights from the show addressing those and other questions, intermixed with opinions from hosts Luisa Rodriguez and Rob Wiblin recorded back in 2023. Check out the full transcript on the 80,000 Hours website. You can decide whether the views we expressed (and those from guests) then have held up these last two busy years. You’ll hear:
And of course, Rob and Luisa also regularly chime in on what they agree and disagree with. Chapters:
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong | |||
01 May 2017 | #0 – Introducing the 80,000 Hours Podcast | 00:03:54 | |
80,000 Hours is a non-profit that provides research and other support to help people switch into careers that effectively tackle the world's most pressing problems. This podcast is just one of many things we offer, the others of which you can find at 80000hours.org. Since 2017 this show has been putting out interviews about the world's most pressing problems and how to solve them — which some people enjoy because they love to learn about important things, and others are using to figure out what they want to do with their careers or with their charitable giving. If you haven't yet spent a lot of time with 80,000 Hours or our general style of thinking, called effective altruism, it's probably really helpful to first go through the episodes that set the scene, explain our overall perspective on things, and generally offer all the background information you need to get the most out of the episodes we're making now. That's why we've made a new feed with ten carefully selected episodes from the show's archives, called 'Effective Altruism: An Introduction'. You can find it by searching for 'Effective Altruism' in your podcasting app or at 80000hours.org/intro. Or, if you’d rather listen on this feed, here are the ten episodes we recommend you listen to first: • #17 – Will MacAskill on why our descendants might view us as moral monsters • #44 – Paul Christiano on developing real solutions to the 'AI alignment problem' • #60 – What Professor Tetlock learned from 40 years studying how to predict the future • #46 – Hilary Greaves on moral cluelessness, population ethics and tackling global issues in academia • #71 – Benjamin Todd on the key ideas of 80,000 Hours • #50 – Dave Denkenberger on how we might feed all 8 billion people through a nuclear winter | |||
10 Jan 2022 | #35 Classic episode - Tara Mac Aulay on the audacity to fix the world without asking permission | 01:23:34 | |
Rebroadcast: this episode was originally released in June 2018. How broken is the world? How inefficient is a typical organisation? Looking at Tara Mac Aulay’s life, the answer seems to be ‘very’. At 15 she took her first job - an entry-level position at a chain restaurant. Rather than accept her place, Tara took it on herself to massively improve the store’s shambolic staff scheduling and inventory management. After cutting staff costs 30% she was quickly promoted, and at 16 sent in to overhaul dozens of failing stores in a final effort to save them from closure. That’s just the first in a startling series of personal stories that take us to a hospital drug dispensary where pharmacists are wasting a third of their time, a chemotherapy ward in Bhutan that’s killing its patients rather than saving lives, and eventually the Centre for Effective Altruism, where Tara becomes CEO and leads it through start-up accelerator Y Combinator. In this episode Tara shows how the ability to do practical things, avoid major screw-ups, and design systems that scale, is both rare and precious. Links to learn more, summary and full transcript. People with an operations mindset spot failures others can't see and fix them before they bring an organisation down. This kind of resourcefulness can transform the world by making possible critical projects that would otherwise fall flat on their face. But as Tara's experience shows they need to figure out what actually motivates the authorities who often try to block their reforms. We explore how people with this skillset can do as much good as possible, what 80,000 Hours got wrong in our article 'Why operations management is one of the biggest bottlenecks in effective altruism’, as well as:
• Tara’s biggest mistakes and how to deal with the delicate politics of organizational reform. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris. | |||
02 Mar 2020 | #71 - Benjamin Todd on the key ideas of 80,000 Hours | 02:57:29 | |
The 80,000 Hours Podcast is about “the world’s most pressing problems and how you can use your career to solve them”, and in this episode we tackle that question in the most direct way possible. Last year we published a summary of all our key ideas, which links to many of our other articles, and which we are aiming to keep updated as our opinions shift. All of us added something to it, but the single biggest contributor was our CEO and today's guest, Ben Todd, who founded 80,000 Hours along with Will MacAskill back in 2012. This key ideas page is the most read on the site. By itself it can teach you a large fraction of the most important things we've discovered since we started investigating high impact careers. • Links to learn more, summary and full transcript. But it's perhaps more accurate to think of it as a mini-book, as it weighs in at over 20,000 words. Fortunately it's designed to be highly modular and it's easy to work through it over multiple sessions, scanning over the articles it links to on each topic. Perhaps though, you'd prefer to absorb our most essential ideas in conversation form, in which case this episode is for you. If you want to have a big impact with your career, and you say you're only going to read one article from us, we recommend you read our key ideas page. And likewise, if you're only going to listen to one of our podcast episodes, it should be this one. We have fun and set a strong pace, running through:
• Common misunderstandings of our advice One benefit of this podcast over the article is that we can more easily communicate uncertainty, and dive into the things we're least sure about, or didn’t yet cover within the article. Note though that what’s in the article is more precisely stated, our advice is going to keep shifting, and we're aiming to keep the key ideas page current as our thinking evolves over time. This episode was recorded in November 2019, so if you notice a conflict between the page and this episode in the future, go with the page! Get the episode by subscribing: type 80,000 Hours into your podcasting app.
Producer: Keiran Harris. | |||
05 Apr 2022 | #126 – Bryan Caplan on whether lazy parenting is OK, what really helps workers, and betting on beliefs | 02:15:16 | |
Everybody knows that good parenting has a big impact on how kids turn out. Except that maybe they don't, because it doesn't. Incredible though it might seem, according to today's guest — economist Bryan Caplan, the author of Selfish Reasons To Have More Kids, The Myth of the Rational Voter, and The Case Against Education — the best evidence we have on the question suggests that, within reason, what parents do has little impact on how their children's lives play out once they're adults. Links to learn more, summary and full transcript. Of course, kids do resemble their parents. But just as we probably can't say it was attentive parenting that gave me my mother's nose, perhaps we can't say it was attentive parenting that made me succeed at school. Both the social environment we grow up in and the genes we receive from our parents influence the person we become, and looking at a typical family we can't really distinguish the impact of one from the other. But nature does offer us up a random experiment that can let us tell the difference: identical twins share all their genes, while fraternal twins only share half their genes. If you look at how much more similar outcomes are for identical twins than fraternal twins, you see the effect of sharing 100% of your genetic material, rather than the usual 50%. Double that amount, and you've got the full effect of genetic inheritance. Whatever unexplained variation remains is still up for grabs — and might be down to different experiences in the home, outside the home, or just random noise. The crazy thing about this research is that it says for a range of adult outcomes (e.g. years of education, income, health, personality, and happiness), it's differences in the genes children inherit rather than differences in parental behaviour that are doing most of the work. Other research suggests that differences in “out-of-home environment” take second place. Parenting style does matter for something, but it comes in a clear third. Bryan is quick to point out that there are several factors that help reconcile these findings with conventional wisdom about the importance of parenting. First, for some adult outcomes, parenting was a big deal (i.e. the quality of the parent/child relationship) or at least a moderate deal (i.e. drug use, criminality, and religious/political identity). Second, parents can and do influence you quite a lot — so long as you're young and still living with them. But as soon as you move out, the influence of their behaviour begins to wane and eventually becomes hard to spot. Third, this research only studies variation in parenting behaviour that was common among the families studied. And fourth, research on international adoptions shows they can cause massive improvements in health, income and other outcomes. But the findings are still remarkable, and imply many hyper-diligent parents could live much less stressful lives without doing their kids any harm at all. In this extensive interview Rob interrogates whether Bryan can really be right, or whether the research he's drawing on has taken a wrong turn somewhere. And that's just one topic we cover, some of the others being: • People’s biggest misconceptions about the labour market Chapters:
Producer: Keiran Harris | |||
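The twin comparison in the Caplan episode notes above is essentially the classic Falconer (ACE) decomposition: the extra similarity of identical over fraternal twins, doubled, estimates the share of variance due to genes. Below is a minimal sketch of that textbook calculation under standard simplifying assumptions; the correlation values are made up for illustration and are not taken from the research discussed in the episode.

```python
def falconer_estimates(r_mz: float, r_dz: float) -> dict:
    """Rough ACE variance shares from twin correlations.

    r_mz: outcome correlation between identical twins (share ~100% of genes)
    r_dz: outcome correlation between fraternal twins (share ~50% on average)
    """
    a2 = 2 * (r_mz - r_dz)  # genes: double the extra similarity from the extra 50% of shared genes
    c2 = r_mz - a2          # shared ('in-home') environment: similarity not explained by genes
    e2 = 1 - r_mz           # everything else: unshared environment, measurement error, luck
    return {"genes": round(a2, 2), "shared_env": round(c2, 2), "unshared_env": round(e2, 2)}

# Made-up example correlations, purely to show the mechanics
print(falconer_estimates(r_mz=0.6, r_dz=0.35))
# -> {'genes': 0.5, 'shared_env': 0.1, 'unshared_env': 0.4}
```

Real behavioural genetics uses more careful model fitting than this, but the doubling step is the core logic behind the 'double that amount' line in the notes above.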
07 Mar 2020 | #72 - Toby Ord on the precipice and humanity's potential futures | 03:14:17 | |
This week Oxford academic and 80,000 Hours trustee Dr Toby Ord released his new book The Precipice: Existential Risk and the Future of Humanity. It's about how our long-term future could be better than almost anyone believes, but also how humanity's recklessness is putting that future at grave risk — in Toby's reckoning, a 1 in 6 chance of being extinguished this century. I loved the book and learned a great deal from it (buy it here, US and audiobook release March 24). While preparing for this interview I copied out 87 facts that were surprising, shocking or important. Here's a sample of 16: 1. The probability of a supervolcano causing a civilisation-threatening catastrophe in the next century is estimated to be 100x that of asteroids and comets combined. 2. The Biological Weapons Convention — a global agreement to protect humanity — has just four employees, and a smaller budget than an average McDonald’s. 3. In 2008 a 'gamma ray burst' reached Earth from another galaxy, 10 billion light years away. It was still bright enough to be visible to the naked eye. We aren't sure what generates gamma ray bursts but one cause may be two neutron stars colliding. 4. Before detonating the first nuclear weapon, scientists in the Manhattan Project feared that the high temperatures in the core, unprecedented for Earth, might be able to ignite the hydrogen in water. This would set off a self-sustaining reaction that would burn off the Earth’s oceans, killing all life above ground. They thought this was unlikely, but many atomic scientists feared their calculations could be missing something. As far as we know, the US President was never informed of this possibility, but similar risks were one reason Hitler stopped… N.B. I've had to cut off this list as we only get 4,000 characters in these show notes, so: Click here to read the whole list, see a full transcript, and find related links. And if you like the list, you can get a free copy of the introduction and first chapter by joining our mailing list. While I've been studying these topics for years and known Toby for the last eight, a remarkable amount of what's in The Precipice was new to me. Of course the book isn't a series of isolated amusing facts, but rather a systematic review of the many ways humanity's future could go better or worse, how we might know about them, and what might be done to improve the odds. And that's how we approach this conversation, first talking about each of the main threats, then how we can learn about things that have never happened before, then finishing with what a great future for humanity might look like and how it might be achieved. Toby is a famously good explainer of complex issues — a bit of a modern Carl Sagan character — so as expected this was a great interview, and one which Arden Koehler and I barely even had to work for. Some topics Arden and I ask about include:
• What Toby changed his mind about while writing the book Get this episode by subscribing: type '80,000 Hours' into your podcasting app. Or read the linked transcript.
Producer: Keiran Harris. | |||
28 Oct 2022 | #139 – Alan Hájek on puzzles and paradoxes in probability and expected value | 03:38:26 | |
A casino offers you a game. A coin will be tossed repeatedly until it comes up heads. If that first head comes on the first flip you win $2; if on the second flip, $4; on the third, $8; the fourth, $16; and so on. How much should you be willing to pay to play? The standard way of analysing gambling problems, ‘expected value’ — in which you multiply probabilities by the value of each outcome and then sum them up — says your expected earnings are infinite. You have a 50% chance of winning $2, for '0.5 * $2 = $1' in expected earnings. A 25% chance of winning $4, for '0.25 * $4 = $1' in expected earnings, and on and on. A never-ending series of $1s added together comes to infinity. And that's despite the fact that you know with certainty you can only ever win a finite amount! Today's guest — philosopher Alan Hájek of the Australian National University — thinks of much of philosophy as “the demolition of common sense followed by damage control” and is an expert on paradoxes related to probability and decision-making rules like “maximise expected value.” Links to learn more, summary and full transcript. The problem described above, known as the St. Petersburg paradox, has been a staple of the field since the 18th century, with many proposed solutions. In the interview, Alan explains how very natural attempts to resolve the paradox — such as factoring in the low likelihood that the casino can pay out very large sums, or the fact that money becomes less and less valuable the more of it you already have — fail to work as hoped. We might reject the setup as a hypothetical that could never exist in the real world, and therefore of mere intellectual curiosity. But Alan doesn't find that objection persuasive. If expected value fails in extreme cases, that should make us worry that something could be rotten at the heart of the standard procedure we use to make decisions in government, business, and nonprofits. These issues regularly show up in 80,000 Hours' efforts to try to find the best ways to improve the world, as the best approach will arguably involve long-shot attempts to do very large amounts of good. Consider which is better: saving one life for sure, or three lives with 50% probability? Expected value says the second, which will probably strike you as reasonable enough. But what if we repeat this process and evaluate the chance to save nine lives with 25% probability, or 27 lives with 12.5% probability, or, after 17 more iterations, 3,486,784,401 lives with a roughly 0.0001% chance? Expected value says this final offer is better than the others — more than 1,000 times better, in fact. (A short sketch of this arithmetic follows after this entry.) Ultimately Alan leans towards the view that our best choice is to “bite the bullet” and stick with expected value, even with its sometimes counterintuitive implications. Where we want to do damage control, we're better off looking for ways our probability estimates might be wrong. In today's conversation, Alan and Rob explore these issues and many others: • Simple rules of thumb for having philosophical insights Chapters:
| |||
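As a quick numerical companion to the expected value arithmetic in the Hájek episode notes above, here is a minimal sketch. The 30-flip truncation and the choice of 20 iterations for the life-saving gamble simply mirror the figures quoted in the notes; nothing below comes from the interview itself.

```python
# St. Petersburg game: flip until the first head; if that happens on flip n, you win $2**n.
# Each term contributes (0.5**n) * (2**n) = $1, so the partial sums grow without bound.
def st_petersburg_partial_ev(max_flips: int) -> float:
    return sum((0.5 ** n) * (2 ** n) for n in range(1, max_flips + 1))

for flips in (10, 20, 30):
    print(f"Expected value of the first {flips} terms: ${st_petersburg_partial_ev(flips):,.0f}")
# -> $10, $20, $30, ... one dollar per term, diverging to infinity

# The iterated life-saving gamble: 3**k lives with probability 0.5**k
k = 20
lives, prob = 3 ** k, 0.5 ** k
print(f"{lives:,} lives at {prob:.4%} -> {lives * prob:,.0f} lives in expectation, "
      "versus 1 life saved for sure")
# -> 3,486,784,401 lives at 0.0001% -> 3,325 lives in expectation
```

The cancellation is exact: each doubling of the prize offsets the halving of its probability, which is why no finite ticket price looks 'too high' on a naive expected value analysis, and why the iterated gamble keeps looking better the longer you let it run.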
02 Oct 2023 | #164 – Kevin Esvelt on cults that want to kill everyone, stealth vs wildfire pandemics, and how he felt inventing gene drives | 03:03:42 | |
"Imagine a fast-spreading respiratory HIV. It sweeps around the world. Almost nobody has symptoms. Nobody notices until years later, when the first people who are infected begin to succumb. They might die, something else debilitating might happen to them, but by that point, just about everyone on the planet would have been infected already. And then it would be a race. Can we come up with some way of defusing the thing? Can we come up with the equivalent of HIV antiretrovirals before it's too late?" — Kevin Esvelt In today’s episode, host Luisa Rodriguez interviews Kevin Esvelt — a biologist at the MIT Media Lab and the inventor of CRISPR-based gene drive — about the threat posed by engineered bioweapons. Links to learn more, summary and full transcript. They cover:
Producer and editor: Keiran Harris | |||
13 Dec 2022 | #141 – Richard Ngo on large language models, OpenAI, and striving to make the future go well | 02:44:19 | |
Large language models like GPT-3, and now ChatGPT, are neural networks trained on a large fraction of all text available on the internet to do one thing: predict the next word in a passage. This simple technique has led to something extraordinary — black boxes able to write TV scripts, explain jokes, produce satirical poetry, answer common factual questions, argue sensibly for political positions, and more. Every month their capabilities grow. But do they really 'understand' what they're saying, or do they just give the illusion of understanding? Today's guest, Richard Ngo, thinks that in the most important sense they understand many things. Richard is a researcher at OpenAI — the company that created ChatGPT — who works to foresee where AI advances are going and develop strategies that will keep these models from 'acting out' as they become more powerful, are deployed and ultimately given power in society. Links to learn more, summary and full transcript. One way to think about 'understanding' is as a subjective experience. Whether it feels like something to be a large language model is an important question, but one we currently have no way to answer. However, as Richard explains, another way to think about 'understanding' is as a functional matter. If you really understand an idea you're able to use it to reason and draw inferences in new situations. And that kind of understanding is observable and testable. Richard argues that language models are developing sophisticated representations of the world which can be manipulated to draw sensible conclusions — maybe not so different from what happens in the human mind. And experiments have found that, as models get more parameters and are trained on more data, these types of capabilities consistently improve. We might feel reluctant to say a computer understands something the way that we do. But if it walks like a duck and it quacks like a duck, we should consider that maybe we have a duck, or at least something sufficiently close to a duck it doesn't matter. In today's conversation we discuss the above, as well as:
• Could speeding up AI development be a bad thing? Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.
Producer: Keiran Harris | |||
23 Apr 2019 | #57 – Tom Kalil on how to do the most good in government | 02:50:16 | |
You’re 29 years old, and you’ve just been given a job in the White House. How do you quickly figure out how the US Executive Branch behemoth actually works, so that you can have as much impact as possible - before you quit or get kicked out? That was the challenge put in front of Tom Kalil in 1993. He had enough success to last a full 16 years inside the Clinton and Obama administrations, working to foster the development of the internet, then nanotechnology, and then cutting-edge brain modelling, among other things. But not everyone figures out how to move the needle. In today's interview, Tom shares his experience with how to increase your chances of getting an influential role in government, and how to make the most of the opportunity if you get in. Links to learn more, summary and full transcript. Interested in US AI policy careers? Apply for one-on-one career advice here. Vacancies at the Center for Security and Emerging Technology. Our high-impact job board, which features other related opportunities. He believes that Congressional gridlock leads people to greatly underestimate how much the Executive Branch can and does do on its own every day. Decisions by individuals change how billions of dollars are spent; regulations are enforced, and then suddenly they aren't; and a single sentence in the State of the Union can get civil servants to pay attention to a topic that would otherwise go ignored. Over years at the White House Office of Science and Technology Policy, 'Team Kalil' built up a white board of principles. For example, 'the schedule is your friend': setting a meeting date with the President can force people to finish something, where they otherwise might procrastinate. Or 'talk to who owns the paper'. People would wonder how Tom could get so many lines into the President's speeches. The answer was "figure out who's writing the speech, find them with the document, and tell them to add the line." Obvious, but not something most were doing. Not everything is a precise operation though. Tom also tells us the story of NetDay, a project that was put together at the last minute because the President incorrectly believed it was already organised – and decided he was going to announce it in person. In today's episode we get down to nuts & bolts, and discuss: Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris. | |||
28 Apr 2022 | #128 – Chris Blattman on the five reasons wars happen | 02:46:51 | |
In nature, animals roar and bare their teeth to intimidate adversaries — but one side usually backs down, and real fights are rare. The wisdom of evolution is that the risk of violence is just too great. Which might make one wonder: if war is so destructive, why does it happen? The question may sound naïve, but in fact it represents a deep puzzle. If a war will cost trillions and kill tens of thousands, it should be easy for either side to make a peace offer that both they and their opponents prefer to actually fighting it out. The conundrum of how humans can engage in incredibly costly and protracted conflicts has occupied academics across the social sciences for years. In today's episode, we speak with economist Chris Blattman about his new book, Why We Fight: The Roots of War and the Paths to Peace, which summarises what they think they've learned. Links to learn more, summary and full transcript. Chris's first point is that while organised violence may feel like it's all around us, it's actually very rare in humans, just as it is with other animals. Across the world, hundreds of groups dislike one another — but knowing the cost of war, they prefer to simply loathe one another in peace. In order to understand what’s wrong with a sick patient, a doctor needs to know what a healthy person looks like. And to understand war, social scientists need to study all the wars that could have happened but didn't — so they can see what a healthy society looks like and what's missing in the places where war does take hold. Chris argues that social scientists have generated five cogent models of when war can be 'rational' for both sides of a conflict: 1. Unchecked interests — such as national leaders who bear few of the costs of launching a war. In today's interview, we walk through how each of the five explanations work and what specific wars or actions they might explain. In the process, Chris outlines how many of the most popular explanations for interstate war are wildly overused (e.g. leaders who are unhinged or male) or misguided from the outset (e.g. resource scarcity). The interview also covers: • What Chris and Rob got wrong about the war in Ukraine Chapters:
Producer: Keiran Harris | |||
22 Aug 2024 | #197 – Nick Joseph on whether Anthropic's AI safety policy is up to the task | 02:29:26 | |
The three biggest AI companies — Anthropic, OpenAI, and DeepMind — have now all released policies designed to make their AI models less likely to go rogue or cause catastrophic damage as they approach, and eventually exceed, human capabilities. Are they good enough? That’s what host Rob Wiblin tries to hash out in this interview (recorded May 30) with Nick Joseph — one of the original cofounders of Anthropic, its current head of training, and a big fan of Anthropic’s “responsible scaling policy” (or “RSP”). Anthropic is the most safety focused of the AI companies, known for a culture that treats the risks of its work as deadly serious. Links to learn more, highlights, video, and full transcript. As Nick explains, these scaling policies commit companies to dig into what new dangerous things a model can do — after it’s trained, but before it’s in wide use. The companies then promise to put in place safeguards they think are sufficient to tackle those capabilities before availability is extended further. For instance, if a model could significantly help design a deadly bioweapon, then its weights need to be properly secured so they can’t be stolen by terrorists interested in using it that way. As capabilities grow further — for example, if testing shows that a model could exfiltrate itself and spread autonomously in the wild — then new measures would need to be put in place to make that impossible, or demonstrate that such a goal can never arise. Nick points out what he sees as the biggest virtues of the RSP approach, and then Rob pushes him on some of the best objections he’s found to RSPs being up to the task of keeping AI safe and beneficial. The two also discuss whether it's essential to eventually hand over operation of responsible scaling policies to external auditors or regulatory bodies, if those policies are going to be able to hold up against the intense commercial pressures that might end up arrayed against them. In addition to all of that, Nick and Rob talk about:
And as a reminder, if you want to let us know your reaction to this interview, or send any other feedback, our inbox is always open at podcast@80000hours.org. Chapters:
Producer and editor: Keiran Harris | |||
07 Mar 2025 | Emergency pod: Judge plants a legal time bomb under OpenAI (with Rose Chan Loui) | 00:36:50 | |
When OpenAI announced plans to convert from nonprofit to for-profit control last October, it likely didn’t anticipate the legal labyrinth it now faces. A recent court order in Elon Musk’s lawsuit against the company suggests OpenAI’s restructuring faces serious legal threats, which will complicate its efforts to raise tens of billions in investment. As nonprofit legal expert Rose Chan Loui explains, the court order set up multiple pathways for OpenAI’s conversion to be challenged. Though Judge Yvonne Gonzalez Rogers denied Musk’s request to block the conversion before a trial, she expedited proceedings to the fall so the case could be heard before it’s likely to go ahead. (See Rob’s brief summary of developments in the case.) And if Musk’s donations to OpenAI are enough to give him the right to bring a case, Rogers sounded very sympathetic to his objections to the OpenAI foundation selling the company, benefiting the founders who forswore “any intent to use OpenAI as a vehicle to enrich themselves.” But that’s just one of multiple threats. The attorneys general (AGs) in California and Delaware both have standing to object to the conversion on the grounds that it is contrary to the foundation’s charitable purpose and therefore wrongs the public — which was promised all the charitable assets would be used to develop AI that benefits all of humanity, not to win a commercial race. Some, including Rose, suspect the court order was written as a signal to those AGs to take action. And, as she explains, if the AGs remain silent, the court itself, seeing that the public interest isn’t being represented, could appoint a “special interest party” to take on the case in their place. This places the OpenAI foundation board in a bind: proceeding with the restructuring despite this legal cloud could expose them to the risk of being sued for a gross breach of their fiduciary duty to the public. The board is made up of respectable people who didn’t sign up for that. And of course it would cause chaos for the company if all of OpenAI’s fundraising and governance plans were brought to a screeching halt by a federal court judgment landing at the eleventh hour. Host Rob Wiblin and Rose Chan Loui discuss all of the above as well as what justification the OpenAI foundation could offer for giving up control of the company despite its charitable purpose, and how the board might adjust their plans to make the for-profit switch more legally palatable. This episode was originally recorded on March 6, 2025. Chapters:
Video editing: Simon Monsour | |||
12 May 2023 | #151 – Ajeya Cotra on accidentally teaching AI models to deceive us | 02:49:40 | |
Imagine you are an orphaned eight-year-old whose parents left you a $1 trillion company, and no trusted adult to serve as your guide to the world. You have to hire a smart adult to run that company, guide your life the way that a parent would, and administer your vast wealth. You have to hire that adult based on a work trial or interview you come up with. You don't get to see any resumes or do reference checks. And because you're so rich, tonnes of people apply for the job — for all sorts of reasons. Today's guest Ajeya Cotra — senior research analyst at Open Philanthropy — argues that this peculiar setup resembles the situation humanity finds itself in when training very general and very capable AI models using current deep learning methods. Links to learn more, summary and full transcript. As she explains, such an eight-year-old faces a challenging problem. In the candidate pool there are likely some truly nice people, who sincerely want to help and make decisions that are in your interest. But there are probably other characters too — like people who will pretend to care about you while you're monitoring them, but intend to use the job to enrich themselves as soon as they think they can get away with it. Like a child trying to judge adults, at some point humans will be required to judge the trustworthiness and reliability of machine learning models that are as goal-oriented as people, and greatly outclass them in knowledge, experience, breadth, and speed. Tricky! Can't we rely on how well models have performed at tasks during training to guide us? Ajeya worries that it won't work. The trouble is that three different sorts of models will all produce the same output during training, but could behave very differently once deployed in a setting that allows their true colours to come through. She describes three such motivational archetypes:
And according to Ajeya, there are also ways we could end up actively selecting for motivations that we don't want. In today's interview, Ajeya and Rob discuss the above, as well as:
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below. Producer: Keiran Harris Audio mastering: Ryan Kessler and Ben Cordell Transcriptions: Katy Moore | |||
15 Apr 2019 | #56 - Persis Eskander on wild animal welfare and what, if anything, to do about it | 02:57:58 | |
Elephants in chains at travelling circuses; pregnant pigs trapped in coffin-sized crates at factory farms; deer living in the wild. We should welcome the last as a pleasant break from the horror, right? Maybe, but maybe not. While we tend to have a romanticised view of nature, life in the wild includes a range of extremely negative experiences. Many animals are hunted by predators, and constantly have to remain vigilant about the risk of being killed, and perhaps experiencing the horror of being eaten alive. Resource competition often leads to chronic hunger or starvation. Their diseases and injuries are never treated. In winter animals freeze to death; in droughts they die of heat or thirst. There are fewer than 20 people in the world dedicating their lives to researching these problems. But according to Persis Eskander, researcher at the Open Philanthropy Project, if we sum up the negative experiences of all wild animals, their sheer number could make the scale of the problem larger than most other near-term concerns. Links to learn more, summary and full transcript. Persis urges us to recognise that nature isn’t inherently good or bad, but rather the result of an amoral evolutionary process. For those that can't survive the brutal indifference of their environment, life is often a series of bad experiences, followed by an even worse death. But should we actually intervene? How do we know which animals are sentient? How often do animals feel hunger, cold, fear, happiness, satisfaction, boredom, and intense agony? Are there long-term technologies that could eventually allow us to massively improve wild animal welfare? For most of these big questions, the answer is: we don’t know. And Persis thinks we're far away from knowing enough to start interfering with ecosystems. But that's all the more reason to start looking at these questions. There are some concrete steps we could take today, like improving the way wild-caught fish are slaughtered. Fish might lack the charisma of a lion or the intelligence of a pig, but if they have the capacity to suffer — and evidence suggests that they do — we should be thinking of ways to kill them painlessly rather than allowing them to suffocate to death over hours. In today’s interview we explore wild animal welfare as a new field of research, and discuss:
• Do we have a moral duty towards wild animals or not? Rob is then joined by two of his colleagues — Niel Bowerman and Michelle Hutchinson — to quickly discuss:
• The importance of figuring out your values Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris. | |||
01 Sep 2023 | #162 – Mustafa Suleyman on getting Washington and Silicon Valley to tame AI | 00:59:34 | |
Mustafa Suleyman was part of the trio that founded DeepMind, and his new AI project is building one of the world's largest supercomputers to train a large language model on 10–100x the compute used to train ChatGPT. But far from the stereotype of the incorrigibly optimistic tech founder, Mustafa is deeply worried about the future, for reasons he lays out in his new book The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma (coauthored with Michael Bhaskar). The future could be really good, but only if we grab the bull by the horns and solve the new problems technology is throwing at us. Links to learn more, summary and full transcript. On Mustafa's telling, AI and biotechnology will soon be a huge aid to criminals and terrorists, empowering small groups to cause harm on previously unimaginable scales. Democratic countries have learned to walk a 'narrow path' between chaos on the one hand and authoritarianism on the other, avoiding the downsides that come from both extreme openness and extreme closure. AI could easily destabilise that present equilibrium, throwing us off dangerously in either direction. And ultimately, within our lifetimes humans may not need to work to live any more -- or indeed, even have the option to do so. And those are just three of the challenges confronting us. In Mustafa's view, 'misaligned' AI that goes rogue and pursues its own agenda won't be an issue for the next few years, and it isn't a problem for the current style of large language models. But he thinks that at some point -- in eight, ten, or twelve years -- it will become an entirely legitimate concern, and says that we need to be planning ahead. In The Coming Wave, Mustafa lays out a 10-part agenda for 'containment' -- that is to say, for limiting the negative and unforeseen consequences of emerging technologies: 1. Developing an Apollo programme for technical AI safety As Mustafa put it, "AI is a technology with almost every use case imaginable" and that will demand that, in time, we rethink everything. Rob and Mustafa discuss the above, as well as:
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript. Producer and editor: Keiran Harris | |||
18 Jan 2022 | #43 Classic episode - Daniel Ellsberg on the institutional insanity that maintains nuclear doomsday machines | 02:35:28 | |
Rebroadcast: this episode was originally released in September 2018. In Stanley Kubrick’s iconic film Dr. Strangelove, the American president is informed that the Soviet Union has created a secret deterrence system which will automatically wipe out humanity upon detection of a single nuclear explosion in Russia. With US bombs heading towards the USSR and unable to be recalled, Dr Strangelove points out that “the whole point of this Doomsday Machine is lost if you keep it a secret – why didn’t you tell the world, eh?” The Soviet ambassador replies that it was to be announced at the Party Congress the following Monday: “The Premier loves surprises”. Daniel Ellsberg - leaker of the Pentagon Papers which helped end the Vietnam War and Nixon presidency - claims in his book The Doomsday Machine: Confessions of a Nuclear War Planner that Dr. Strangelove might as well be a documentary. After attending the film in Washington DC in 1964, he and a colleague wondered how so many details of their nuclear planning had leaked. Links to learn more, summary and full transcript. The USSR did in fact develop a doomsday machine, Dead Hand, which probably remains active today. If the system can’t contact military leaders, it checks for signs of a nuclear strike, and if it detects them, automatically launches all remaining Soviet weapons at targets across the northern hemisphere. As in the film, the Soviet Union long kept Dead Hand completely secret, eliminating any strategic benefit, and rendering it a pointless menace to humanity. You might think the United States would have a more sensible nuclear launch policy. You’d be wrong. As Ellsberg explains, based on first-hand experience as a nuclear war planner in the 50s, the notion that only the president is able to authorize the use of US nuclear weapons is a carefully cultivated myth. The authority to launch nuclear weapons is delegated alarmingly far down the chain of command – significantly raising the chance that a lone wolf or communication breakdown could trigger a nuclear catastrophe. The whole justification for this is to defend against a ‘decapitating attack’, where a first strike on Washington disables the ability of the US hierarchy to retaliate. In a moment of crisis, the Russians might view this as their best hope of survival. Ostensibly, this delegation removes Russia’s temptation to attempt a decapitating attack – the US can retaliate even if its leadership is destroyed. This strategy only works, though, if you tell the enemy you’ve done it. Instead, since the 50s this delegation has been one of the United States’ most closely guarded secrets, eliminating its strategic benefit, and rendering it another pointless menace to humanity. Strategically, the setup is stupid. Ethically, it is monstrous. So – how was such a system built? Why does it remain to this day? And how might we shrink our nuclear arsenals to the point they don’t risk the destruction of civilization? Daniel explores these questions eloquently and urgently in his book. Today we cover:
• Why full disarmament today would be a mistake and the optimal number of nuclear weapons to hold Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris. | |||
20 Oct 2021 | We just put up a new compilation of ten core episodes of the show | 00:03:02 | |
We recently launched a new podcast feed that might be useful to you and people you know. It's called Effective Altruism: Ten Global Problems, and it's a collection of ten top episodes of this show, selected to help listeners quickly get up to speed on ten pressing problems that the effective altruism community is working to solve. It's a companion to our other compilation Effective Altruism: An Introduction, which explores the big-picture debates within the community and how to set priorities in order to have the greatest impact. These ten episodes cover:
The selection is ideal for people who are completely new to the effective altruist way of thinking, as well as those who are familiar with effective altruism but new to The 80,000 Hours Podcast. If someone in your life wants to get an understanding of what 80,000 Hours or effective altruism are all about, and prefers to listen to things rather than read, this is a great resource to direct them to. You can find it by searching for effective altruism in whatever podcasting app you use, or by going to 80000hours.org/ten. We'd love to hear how you go listening to it yourself, or sharing it with others in your life. Get in touch by emailing podcast@80000hours.org. | |||
08 Sep 2023 | #163 – Toby Ord on the perils of maximising the good that you do | 03:07:08 | |
Effective altruism is associated with the slogan "do the most good." On one level, this has to be unobjectionable: What could be bad about helping people more and more? But in today's interview, Toby Ord — moral philosopher at the University of Oxford and one of the founding figures of effective altruism — lays out three reasons to be cautious about the idea of maximising the good that you do. He suggests that rather than “doing the most good that we can,” perhaps we should be happy with a more modest and manageable goal: “doing most of the good that we can.” Links to learn more, summary and full transcript. Toby was inspired to revisit these ideas by the possibility that Sam Bankman-Fried, who stands accused of committing severe fraud as CEO of the cryptocurrency exchange FTX, was motivated to break the law by a desire to give away as much money as possible to worthy causes. Toby's top reason not to fully maximise is the following: if the goal you're aiming at is subtly wrong or incomplete, then going all the way towards maximising it will usually cause you to start doing some very harmful things. This result can be shown mathematically, but can also be made intuitive, and may explain why we feel instinctively wary of going “all-in” on any idea, or goal, or way of living — even something as benign as helping other people as much as possible. Toby gives the example of someone pursuing a career as a professional swimmer. Initially, as our swimmer takes their training and performance more seriously, they adjust their diet, hire a better trainer, and pay more attention to their technique. While swimming is the main focus of their life, they feel fit and healthy and also enjoy other aspects of their life — family, friends, and personal projects. But if they decide to increase their commitment further and really go all-in on their swimming career, holding nothing back, then this picture can radically change. Their effort was already substantial, so how can they shave those final few seconds off their racing time? The only remaining options are those which were so costly they were loath to consider them before. To eke out those final gains — and go from 80% effort to 100% — our swimmer must sacrifice other hobbies, deprioritise their relationships, neglect their career, ignore food preferences, accept a higher risk of injury, and maybe even consider using steroids. Now, if maximising one's speed at swimming really were the only goal they ought to be pursuing, there'd be no problem with this. But if it's the wrong goal, or only one of many things they should be aiming for, then the outcome is disastrous. In going from 80% to 100% effort, their swimming speed was only increased by a tiny amount, while everything else they were accomplishing dropped off a cliff. The bottom line is simple: a dash of moderation makes you much more robust to uncertainty and error. As Toby notes, this is similar to the observation that a sufficiently capable superintelligent AI, given any one goal, would ruin the world if it maximised it to the exclusion of everything else. And it follows a similar pattern to performance falling off a cliff when a statistical model is 'overfit' to its data. In the full interview, Toby also explains the “moral trade” argument against pursuing narrow goals at the expense of everything else, and how consequentialism changes if you judge not just outcomes or acts, but everything according to its impacts on the world. Toby and Rob also discuss:
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript. Producer and editor: Keiran Harris | |||
05 May 2021 | #98 – Christian Tarsney on future bias and a possible solution to moral fanaticism | 02:38:22 | |
Imagine that you’re in the hospital for surgery. This kind of procedure is always safe, and always successful — but it can take anywhere from one to ten hours. You can’t be knocked out for the operation, but because it’s so painful, you’ll be given a drug that makes you forget the experience. You wake up, not remembering going to sleep. You ask the nurse if you’ve had the operation yet. They look at the foot of your bed, and see two charts for two different patients. They say “Well, you’re one of these two — but I’m not sure which one. One of them had an operation yesterday that lasted ten hours. The other is set to have a one-hour operation later today.” So it’s either true that you already suffered for ten hours, or true that you’re about to suffer for one hour. Which patient would you rather be? Most people would be relieved to find out they’d already had the operation. Normally we prefer less pain rather than more pain, but in this case, we prefer ten times more pain — just because the pain would be in the past rather than the future. Christian Tarsney, a philosopher at Oxford University's Global Priorities Institute, has written a couple of papers about this ‘future bias’ — that is, that people seem to care more about their future experiences than about their past experiences. Links to learn more, summary and full transcript. That probably sounds perfectly normal to you. But do we actually have good reasons to prefer to have our positive experiences in the future, and our negative experiences in the past? One of Christian’s experiments found that when you ask people to imagine hypothetical scenarios where they can affect their own past experiences, they care about those experiences more — which suggests that our inability to affect the past is one reason why we feel mostly indifferent to it. But he points out that if that was the main reason, then we should also be indifferent to inevitable future experiences — if you know for sure that something bad is going to happen to you tomorrow, you shouldn't care about it. But if you found out you simply had to have a horribly painful operation tomorrow, it’s probably all you’d care about! Another explanation for future bias is that we have this intuition that time is like a videotape, where the things that haven't played yet are still on the way. If your future experiences really are ahead of you rather than behind you, that makes it rational to care more about the future than the past. But Christian says that, even though he shares this intuition, it’s actually very hard to make the case for time having a direction. It’s a live debate that’s playing out in the philosophy of time, as well as in physics. For Christian, there are two big practical implications of these past, present, and future ethical comparison cases. The first is for altruists: If we care about whether current people’s goals are realised, then maybe we should care about the realisation of people's past goals, including the goals of people who are now dead. The second is more personal: If we can’t actually justify caring more about the future than the past, should we really worry about death any more than we worry about all the years we spent not existing before we were born? Christian and Rob also cover several other big topics, including: • A possible solution to moral fanaticism Chapters: Producer: Keiran Harris. | |||
03 Oct 2024 | #203 – Peter Godfrey-Smith on interfering with wild nature, accepting death, and the origin of complex civilisation | 01:25:09 | |
"In the human case, it would be mistaken to give a kind of hour-by-hour accounting. You know, 'I had +4 level of experience for this hour, then I had -2 for the next hour, and then I had -1' — and you sort of sum to try to work out the total… And I came to think that something like that will be applicable in some of the animal cases as well… There are achievements, there are experiences, there are things that can be done in the face of difficulty that might be seen as having the same kind of redemptive role, as casting into a different light the difficult events that led up to it. "The example I use is watching some birds successfully raising some young, fighting off a couple of rather aggressive parrots of another species that wanted to fight them, prevailing against difficult odds — and doing so in a way that was so wholly successful. It seemed to me that if you wanted to do an accounting of how things had gone for those birds, you would not want to do the naive thing of just counting up difficult and less-difficult hours. There’s something special about what’s achieved at the end of that process." —Peter Godfrey-Smith In today’s episode, host Luisa Rodriguez speaks to Peter Godfrey-Smith — bestselling author and science philosopher — about his new book, Living on Earth: Forests, Corals, Consciousness, and the Making of the World. Links to learn more, highlights, and full transcript. They cover:
Chapters:
Producer: Keiran Harris | |||
09 Mar 2022 | #122 – Michelle Hutchinson & Habiba Islam on balancing competing priorities and other themes from our 1-on-1 careers advising | 01:36:26 | |
One of 80,000 Hours' main services is our free one-on-one careers advising, which we provide to around 1,000 people a year. Today we speak to two of our advisors, who have each spoken to hundreds of people -- including many regular listeners to this show -- about how they might be able to do more good while also having a highly motivating career. Before joining 80,000 Hours, Michelle Hutchinson completed a PhD in Philosophy at Oxford University and helped launch Oxford's Global Priorities Institute, while Habiba Islam studied politics, philosophy, and economics at Oxford University and qualified as a barrister. Links to learn more, summary and full transcript. In this conversation, they cover many topics that recur in their advising calls, and what they've learned from watching advisees’ careers play out: • What they say when advisees want to help solve overpopulation The episode is split into two parts: the first section on The 80,000 Hours Podcast, and the second on our new show 80k After Hours. This is a shameless attempt to encourage listeners to our first show to subscribe to our second feed. That second part covers: • Whether just encouraging someone young to aspire to more than they currently are is one of the most impactful ways to spend half an hour Chapters:
We’ve helped thousands of people formulate their plans and put them in touch with mentors. We've expanded our ability to deliver one-on-one meetings, so we're keen to help more people than ever before. If you're a regular listener to the show, we're especially likely to want to speak with you. Learn about and apply for advising. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris | |||
31 Dec 2023 | 2023 Mega-highlights Extravaganza | 01:53:43 | |
Happy new year! We've got a different kind of holiday release for you today. Rather than a 'classic episode,' we've put together one of our favourite highlights from each episode of the show that came out in 2023. That's 32 of our favourite ideas packed into one episode that's so bursting with substance it might be more than the human mind can safely handle. There's something for everyone here:
...plus another 23 such gems. And they're in an order that our audio engineer Simon Monsour described as having an "eight-dimensional-tetris-like rationale." I don't know what the hell that means either, but I'm curious to find out. And remember: if you like these highlights, note that we release 20-minute highlights reels for every new episode over on our sister feed, which is called 80k After Hours. So even if you're struggling to make time to listen to every single one, you can always get some of the best bits of our episodes. We hope for all the best things to happen for you in 2024, and we'll be back with a traditional classic episode soon. This Mega-highlights Extravaganza was brought to you by Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong | |||
08 Jan 2020 | #33 Classic episode - Anders Sandberg on cryonics, solar flares, and the annual odds of nuclear war | 01:25:11 | |
Rebroadcast: this episode was originally released in May 2018. Joseph Stalin had a life-extension program dedicated to making himself immortal. What if he had succeeded? According to Bryan Caplan in episode #32, there’s an 80% chance that Stalin would still be ruling Russia today. Today’s guest disagrees. Like Stalin he has eyes for his own immortality - including an insurance plan that will cover the cost of cryogenically freezing himself after he dies - and thinks the technology to achieve it might be around the corner. Fortunately for humanity though, that guest is probably one of the nicest people on the planet: Dr Anders Sandberg of Oxford University. Full transcript of the conversation, summary, and links to learn more. The potential availability of technology to delay or even stop ageing means this disagreement matters, so he has been trying to model what would really happen if both the very best and the very worst people in the world could live forever - among many other questions. Anders, who studies low-probability high-stakes risks and the impact of technological change at the Future of Humanity Institute, is the first guest to appear twice on the 80,000 Hours Podcast and might just be the most interesting academic at Oxford. His research interests include more or less everything, and bucking the academic trend towards intense specialization has earned him a devoted fan base. Last time we asked him why we don’t see aliens, and how to most efficiently colonise the universe. In today’s episode we ask about Anders’ other recent papers, including:
• Is it worth the money to freeze your body after death in the hope of future revival, like Anders has done? Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris. | |||
12 May 2020 | Article: Ways people trying to do good accidentally make things worse, and how to avoid them | 00:26:46 | |
Today’s release is the second experiment in making audio versions of our articles. The first was a narration of Greg Lewis’ terrific problem profile on ‘Reducing global catastrophic biological risks’, which you can find on the podcast feed just before episode #74 - that is, our interview with Greg about the piece. If you want to check out the links in today’s article, you can find those here. And if you have feedback on these, positive or negative, it’d be great if you could email us at podcast@80000hours.org. | |||
13 Jan 2021 | Rob Wiblin on self-improvement and research ethics | 02:30:37 | |
This is a crosspost of an episode of the Clearer Thinking Podcast: 022: Self-Improvement and Research Ethics with Rob Wiblin. Rob chats with Spencer Greenberg, who has been an audience favourite in episodes 11 and 39 of the 80,000 Hours Podcast, and has now created this show of his own. Among other things they cover:
• Is trying to become a better person a good strategy for self-improvement If you like this go ahead and subscribe to Spencer's show by searching for Clearer Thinking in your podcasting app. In particular, you might want to check out Spencer’s conversation with another 80,000 Hours researcher: 008: Life Experiments and Philosophical Thinking with Arden Koehler. The 80,000 Hours Podcast is produced by Keiran Harris. | |||
08 Jan 2024 | #112 Classic episode – Carl Shulman on the common-sense case for existential risk work and its practical implications | 03:50:30 | |
Preventing the apocalypse may sound like an idiosyncratic activity, and it sometimes is justified on exotic grounds, such as the potential for humanity to become a galaxy-spanning civilisation. But the policy of US government agencies is already to spend up to $4 million to save the life of a citizen, making the death of all Americans a $1,300,000,000,000,000 disaster. According to Carl Shulman, research associate at Oxford University’s Future of Humanity Institute, that means you don’t need any fancy philosophical arguments about the value or size of the future to justify working to reduce existential risk — it passes a mundane cost-benefit analysis whether or not you place any value on the long-term future. Rebroadcast: this episode was originally released in October 2021. Links to learn more, summary, and full transcript. The key reason to make it a top priority is factual, not philosophical. That is, the risk of a disaster that kills billions of people alive today is alarmingly high, and it can be reduced at a reasonable cost. A back-of-the-envelope version of the argument runs:
This argument helped NASA get funding to scan the sky for any asteroids that might be on a collision course with Earth, and it was directly promoted by famous economists like Richard Posner, Larry Summers, and Cass Sunstein. If the case is clear enough, why hasn’t it already motivated a lot more spending or regulations to limit existential risks — enough to drive down what any additional efforts would achieve? Carl thinks that one key barrier is that infrequent disasters are rarely politically salient. Research indicates that extra money is spent on flood defences in the years immediately following a massive flood — but as memories fade, that spending quickly dries up. Of course the annual probability of a disaster was the same the whole time; all that changed is what voters had on their minds. Carl suspects another reason is that it’s difficult for the average voter to estimate and understand how large these respective risks are, and what responses would be appropriate rather than self-serving. If the public doesn’t know what good performance looks like, politicians can’t be given incentives to do the right thing. It’s reasonable to assume that if we found out a giant asteroid were going to crash into the Earth one year from now, most of our resources would be quickly diverted into figuring out how to avert catastrophe. But even in the case of COVID-19, an event that massively disrupted the lives of everyone on Earth, we’ve still seen a substantial lack of investment in vaccine manufacturing capacity and other ways of controlling the spread of the virus, relative to what economists recommended. Carl expects that all the reasons we didn’t adequately prepare for or respond to COVID-19 — with excess mortality over 15 million and costs well over $10 trillion — bite even harder when it comes to threats we’ve never faced before, such as engineered pandemics, risks from advanced artificial intelligence, and so on. Today’s episode is in part our way of trying to improve this situation. In today’s wide-ranging conversation, Carl and Rob also cover:
Producer: Keiran Harris | |||
15 Aug 2022 | #136 – Will MacAskill on what we owe the future | 02:54:37 | |
This is the simple four-step argument for 'longtermism' put forward in What We Owe The Future, the latest book from today's guest — University of Oxford philosopher and cofounder of the effective altruism community, Will MacAskill. Links to learn more, summary and full transcript. From one point of view this idea is common sense. We work on breakthroughs to treat cancer or end use of fossil fuels not just for people alive today, but because we hope such scientific advances will help our children, grandchildren, and great-grandchildren as well. Some who take this longtermist idea seriously work to develop broad-spectrum vaccines they hope will safeguard humanity against the sorts of extremely deadly pandemics that could permanently throw civilisation off track — the sort of project few could argue is not worthwhile. But Will is upfront that longtermism is also counterintuitive. To start with, he's willing to contemplate timescales far beyond what's typically discussed. A natural objection to thinking millions of years ahead is that it's hard enough to take actions that have positive effects that persist for hundreds of years, let alone “indefinitely.” It doesn't matter how important something might be if you can't predictably change it. This is one reason, among others, that Will was initially sceptical of longtermism and took years to come around. He preferred to focus on ending poverty and preventable diseases in ways he could directly see were working. But over seven years he gradually changed his mind, and in *What We Owe The Future*, Will argues that in fact there are clear ways we might act now that could benefit not just a few but *all* future generations. The idea that preventing human extinction would have long-lasting impacts is pretty intuitive. If we entirely disappear, we aren't coming back. But the idea that we can shape human values — not just for our age, but for all ages — is a surprising one that Will has come to more recently. In the book, he argues that what people value is far more fragile and historically contingent than it might first seem. For instance, today it feels like the abolition of slavery was an inevitable part of the arc of history. But Will lays out that the best research on the topic suggests otherwise. If moral progress really is so contingent, and bad ideas can persist almost without end, it raises the stakes for moral debate today. If we don't eliminate a bad practice now, it may be with us forever. In today's in-depth conversation, we discuss the possibility of a harmful moral 'lock-in' as well as: • How Will was eventually won over to longtermism Chapters:
Producer: Keiran Harris | |||
24 Jul 2023 | #157 – Ezra Klein on existential risk from AI and what DC could do about it | 01:18:46 | |
In Oppenheimer, scientists detonate a nuclear weapon despite thinking there's some 'near zero' chance it would ignite the atmosphere, putting an end to life on Earth. Today, scientists working on AI think the chance their work puts an end to humanity is vastly higher than that. In response, some have suggested we launch a Manhattan Project to make AI safe via enormous investment in relevant R&D. Others have suggested that we need international organisations modelled on those that slowed the proliferation of nuclear weapons. Others still seek a research slowdown by labs while an auditing and licensing scheme is created. Today's guest — journalist Ezra Klein of The New York Times — has watched policy discussions and legislative battles play out in DC for 20 years. Links to learn more, summary and full transcript. Like many people he has also taken a big interest in AI this year, writing articles such as “This changes everything.” In his first interview on the show in 2021, he flagged AI as one topic that DC would regret not having paid more attention to. So we invited him on to get his take on which regulatory proposals have promise, and which seem either unhelpful or politically unviable. Out of the ideas on the table right now, Ezra favours a focus on direct government funding — both for AI safety research and to develop AI models designed to solve problems other than making money for their operators. He is sympathetic to legislation that would require AI models to be legible in a way that none currently are — and embraces the fact that that will slow down the release of models while businesses figure out how their products actually work. By contrast, he's pessimistic that it's possible to coordinate countries around the world to agree to prevent or delay the deployment of dangerous AI models — at least not unless there's some spectacular AI-related disaster to create such a consensus. And he fears attempts to require licences to train the most powerful ML models will struggle unless they can find a way to exclude and thereby appease people working on relatively safe consumer technologies rather than cutting-edge research. From observing how DC works, Ezra expects that even a small community of experts in AI governance can have a large influence on how the US government responds to AI advances. But in Ezra's view, that requires those experts to move to DC and spend years building relationships with people in government, rather than clustering elsewhere in academia and AI labs. In today's brisk conversation, Ezra and host Rob Wiblin cover the above as well as:
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below. Producer: Keiran Harris Technical editing: Milo McGuire Transcriptions: Katy Moore | |||
29 Oct 2020 | How much does a vote matter? (Article) | 00:31:14 | |
Today’s release is the latest in our series of audio versions of our articles. In this one — How much does a vote matter? — I investigate the two key things that determine the impact of your vote:
• The chances of your vote changing an election’s outcome I then discuss what I think are the best arguments against voting in important elections:
• If an election is competitive, that means other people disagree about which option is better, and you’re at some risk of voting for the worse candidate by mistake. Finally, I look into the impact of donating to campaigns or working to ‘get out the vote’, which can be effective ways to generate additional votes for your preferred candidate. If you want to check out the links, footnotes and figures in today’s article, you can find those here. Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. | |||
12 Feb 2025 | Emergency pod: Elon tries to crash OpenAI's party (with Rose Chan Loui) | 00:57:29 | |
On Monday Musk made the OpenAI nonprofit foundation an offer they want to refuse, but might have trouble doing so: $97.4 billion for its stake in the for-profit company, plus the freedom to stick with its current charitable mission. For a normal company takeover bid, this would already be spicy. But OpenAI’s unique structure — a nonprofit foundation controlling a for-profit corporation — turns the gambit into an audacious attack on the plan OpenAI announced in December to free itself from nonprofit oversight. As today’s guest Rose Chan Loui — founding executive director of UCLA Law’s Lowell Milken Center for Philanthropy and Nonprofits — explains, OpenAI’s nonprofit board now faces a challenging choice. Links to learn more, highlights, video, and full transcript. The nonprofit has a legal duty to pursue its charitable mission of ensuring that AI benefits all of humanity to the best of its ability. And if Musk’s bid would better accomplish that mission than the for-profit’s proposal — that the nonprofit give up control of the company and change its charitable purpose to the vague and barely related “pursue charitable initiatives in sectors such as health care, education, and science” — then it’s not clear the California or Delaware Attorneys General will, or should, approve the deal. OpenAI CEO Sam Altman quickly tweeted “no thank you” — but that was probably a legal slipup, as he’s not meant to be involved in such a decision, which has to be made by the nonprofit board ‘at arm’s length’ from the for-profit company Sam himself runs. The board could raise any number of objections: maybe Musk doesn’t have the money, or the purchase would be blocked on antitrust grounds, seeing as Musk owns another AI company (xAI), or Musk might insist on incompetent board appointments that would interfere with the nonprofit foundation pursuing any goal. But as Rose and Rob lay out, it’s not clear any of those things is actually true. In this emergency podcast recorded soon after Elon’s offer, Rose and Rob also cover:
Chapters:
Video editing: Simon Monsour | |||
11 Mar 2025 | #213 – Will MacAskill on AI causing a “century in a decade” – and how we're completely unprepared | 03:57:36 | |
The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, third-wave feminism, the internet, postmodernism, game theory, genetic engineering, the Big Bang theory, quantum mechanics, birth control, and more. Now imagine all of it compressed into just 10 years. That’s the future Will MacAskill — philosopher, founding figure of effective altruism, and now researcher at the Forethought Centre for AI Strategy — argues we need to prepare for in his new paper “Preparing for the intelligence explosion.” Not in the distant future, but probably in three to seven years. Links to learn more, highlights, video, and full transcript. The reason: AI systems are rapidly approaching human-level capability in scientific research and intellectual tasks. Once AI exceeds human abilities in AI research itself, we’ll enter a recursive self-improvement cycle — creating wildly more capable systems. Soon after, by improving algorithms and manufacturing chips, we’ll deploy millions, then billions, then trillions of superhuman AI scientists working 24/7 without human limitations. These systems will collaborate across disciplines, build on each discovery instantly, and conduct experiments at unprecedented scale and speed — compressing a century of scientific progress into mere years. Will compares the resulting situation to a mediaeval king suddenly needing to upgrade from bows and arrows to nuclear weapons to deal with an ideological threat from a country he’s never heard of, while simultaneously grappling with learning that he descended from monkeys and his god doesn’t exist. What makes this acceleration perilous is that while technology can speed up almost arbitrarily, human institutions and decision-making are much more fixed. In this conversation with host Rob Wiblin, recorded on February 7, 2025, Will maps out the challenges we’d face in this potential “intelligence explosion” future, and what we might do to prepare. They discuss:
Chapters:
Video editing: Simon Monsour | |||
17 Jul 2019 | #61 - Helen Toner on emerging technology, national security, and China | 01:54:57 | |
From 1870 to 1950, the introduction of electricity transformed life in the US and UK, as people gained access to lighting, radio and a wide range of household appliances for the first time. Electricity turned out to be a general purpose technology that could help with almost everything people did. Some think this is the best historical analogy we have for how machine learning could alter life in the 21st century. In addition to massively changing everyday life, past general purpose technologies have also changed the nature of war. For example, when electricity was introduced to the battlefield, commanders gained the ability to communicate quickly with units in the field over great distances. How might international security be altered if the impact of machine learning reaches a similar scope to that of electricity? Today's guest — Helen Toner — recently helped found the Center for Security and Emerging Technology at Georgetown University to help policymakers prepare for such disruptive technical changes that might threaten international peace.
• Links to learn more, summary and full transcript. Their first focus is machine learning (ML), a technology which allows computers to recognise patterns, learn from them, and develop 'intuitions' that inform their judgement about future cases. This is something humans do constantly, whether we're playing tennis, reading someone's face, diagnosing a patient, or figuring out which business ideas are likely to succeed. Sometimes these ML algorithms can seem uncannily insightful, and they're only getting better over time. Ultimately a wide range of different ML algorithms could end up helping us with all kinds of decisions, just as electricity wakes us up, makes us coffee, and brushes our teeth -- all in the first five minutes of our day. Rapid advances in ML, and the many prospective military applications, have people worrying about an 'AI arms race' between the US and China. Henry Kissinger and former Google CEO Eric Schmidt recently wrote that AI could "destabilize everything from nuclear détente to human friendships." Some politicians talk of classifying and restricting access to ML algorithms, lest they fall into the wrong hands. But if electricity is the best analogy, you could reasonably ask — was there an arms race in electricity in the 19th century? Would that have made any sense? And could someone have changed the course of history by changing who first got electricity and how they used it, or is that a fantasy? In today's episode we discuss the research frontier in the emerging field of AI policy and governance, how to have a career shaping US government policy, and Helen's experience living and studying in China. We cover:
• Why immigration is the main policy area that should be affected by AI advances today. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris. | |||
19 Nov 2021 | #116 – Luisa Rodriguez on why global catastrophes seem unlikely to kill us all | 03:45:44 | |
If modern human civilisation collapsed — as a result of nuclear war, severe climate change, or a much worse pandemic than COVID-19 — billions of people might die. That's terrible enough to contemplate. But what’s the probability that rather than recover, the survivors would falter and humanity would actually disappear for good? It's an obvious enough question, but very few people have spent serious time looking into it -- possibly because it cuts across history, economics, and biology, among many other fields. There's no Disaster Apocalypse Studies department at any university, and governments have little incentive to plan for a future in which their country probably no longer even exists. The person who may have spent the most time looking at this specific question is Luisa Rodriguez — who has conducted research at Rethink Priorities, Oxford University's Future of Humanity Institute, the Forethought Foundation, and now here, at 80,000 Hours. Links to learn more, summary and full transcript. She wrote a series of articles earnestly trying to foresee how likely humanity would be to recover and build back after a full-on civilisational collapse. There are a couple of main stories people put forward for how a catastrophe like this would kill every single human on Earth — but Luisa doesn’t buy them. Story 1: Nuclear war has led to nuclear winter. There's a 10-year period during which a lot of the world is really inhospitable to agriculture. The survivors just aren't able to figure out how to feed themselves in the time period, so everyone dies of starvation or cold. Why Luisa doesn’t buy it: Catastrophes will almost inevitably be non-uniform in their effects. If 80,000 people survive, they’re not all going to be in the same city — it would look more like groups of 5,000 in a bunch of different places. People in some places will starve, but those in other places, such as New Zealand, will be able to fish, eat seaweed, grow potatoes, and find other sources of calories. It’d be an incredibly unlucky coincidence if the survivors of a nuclear war -- likely spread out all over the world -- happened to all be affected by natural disasters or were all prohibitively far away from areas suitable for agriculture (which aren’t the same areas you’d expect to be attacked in a nuclear war). Story 2: The catastrophe leads to hoarding and violence, and in addition to people being directly killed by the conflict, it distracts everyone so much from the key challenge of reestablishing agriculture that they simply fail. By the time they come to their senses, it’s too late -- they’ve used up too much of the resources they’d need to get agriculture going again. Why Luisa doesn’t buy it: We‘ve had lots of resource scarcity throughout history, and while we’ve seen examples of conflict petering out because basic needs aren’t being met, we’ve never seen the reverse. And again, even if this happens in some places -- even if some groups fought each other until they literally ended up starving to death — it would be completely bizarre for it to happen to every group in the world. You just need one group of around 300 people to survive for them to be able to rebuild the species. In this wide-ranging and free-flowing conversation, Luisa and Rob also cover: • What the world might actually look like after one of these catastrophes Chapters:
Producer: Keiran Harris | |||
04 Jan 2024 | #111 Classic episode – Mushtaq Khan on using institutional economics to predict effective government reforms | 03:22:17 | |
If you’re living in the Niger Delta in Nigeria, your best bet at a high-paying career is probably ‘artisanal refining’ — or, in plain language, stealing oil from pipelines. The resulting oil spills damage the environment and cause severe health problems, but the Nigerian government has continually failed in their attempts to stop this theft. They send in the army, and the army gets corrupted. They send in enforcement agencies, and the enforcement agencies get corrupted. What’s happening here? According to Mushtaq Khan, economics professor at SOAS University of London, this is a classic example of ‘networked corruption’. Everyone in the community is benefiting from the criminal enterprise — so much so that the locals would prefer civil war to following the law. It pays vastly better than other local jobs, hotels and restaurants have formed around it, and houses are even powered by the electricity generated from the oil. Rebroadcast: this episode was originally released in September 2021. Links to learn more, summary, and full transcript. In today’s episode, Mushtaq elaborates on the models he uses to understand these problems and make predictions he can test in the real world. Some of the most important factors shaping the fate of nations are their structures of power: who is powerful, how they are organized, which interest groups can pull in favours with the government, and the constant push and pull between the country’s rulers and its ruled. While traditional economic theory has relatively little to say about these topics, institutional economists like Mushtaq have a lot to say, and participate in lively debates about which of their competing ideas best explain the world around us. The issues at stake are nothing less than why some countries are rich and others are poor, why some countries are mostly law abiding while others are not, and why some government programmes improve public welfare while others just enrich the well connected. Mushtaq’s specialties are anti-corruption and industrial policy, where he believes mainstream theory and practice are largely misguided. To root out fraud, aid agencies try to impose institutions and laws that work in countries like the U.K. today. Everyone nods their heads and appears to go along, but years later they find nothing has changed, or worse — the new anti-corruption laws are mostly just used to persecute anyone who challenges the country’s rulers. As Mushtaq explains, to people who specialise in understanding why corruption is ubiquitous in some countries but not others, this is entirely predictable. Western agencies imagine a situation where most people are law abiding, but a handful of selfish fat cats are engaging in large-scale graft. In fact in the countries they’re trying to change everyone is breaking some rule or other, or participating in so-called ‘corruption’, because it’s the only way to get things done and always has been. Mushtaq’s rule of thumb is that when the locals most concerned with a specific issue are invested in preserving a status quo they’re participating in, they almost always win out. To actually reduce corruption, countries like his native Bangladesh have to follow the same gradual path the U.K. once did: find organizations that benefit from rule-abiding behaviour and are selfishly motivated to promote it, and help them police their peers. Trying to impose a new way of doing things from the top down wasn’t how Europe modernised, and it won’t work elsewhere either.
In cases like oil theft in Nigeria, where no one wants to follow the rules, Mushtaq says corruption may be impossible to solve directly. Instead you have to play a long game, bringing in other employment opportunities, improving health services, and deploying alternative forms of energy — in the hope that one day this will give people a viable alternative to corruption. In this extensive interview Rob and Mushtaq cover this and much more, including:
Producer: Keiran Harris | |||
14 May 2024 | #187 – Zach Weinersmith on how researching his book turned him from a space optimist into a "space bastard" | 03:06:47 | |
"Earth economists, when they measure how bad the potential for exploitation is, they look at things like, how is labour mobility? How much possibility do labourers have otherwise to go somewhere else? Well, if you are on the one company town on Mars, your labour mobility is zero, which has never existed on Earth. Even in your stereotypical West Virginian company town run by immigrant labour, there’s still, by definition, a train out. On Mars, you might not even be in the launch window. And even if there are five other company towns or five other settlements, they’re not necessarily rated to take more humans. They have their own oxygen budget, right? "And so economists use numbers like these, like labour mobility, as a way to put an equation and estimate the ability of a company to set noncompetitive wages or to set noncompetitive work conditions. And essentially, on Mars you’re setting it to infinity." — Zach Weinersmith In today’s episode, host Luisa Rodriguez speaks to Zach Weinersmith — the cartoonist behind Saturday Morning Breakfast Cereal — about the latest book he wrote with his wife Kelly: A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through? Links to learn more, highlights, and full transcript. They cover:
Chapters:
Producer and editor: Keiran Harris | |||
15 Jan 2025 | #134 Classic episode – Ian Morris on what big-picture history teaches us | 03:40:53 | |
Wind back 1,000 years and the moral landscape looks very different to today. Most farming societies thought slavery was natural and unobjectionable, premarital sex was an abomination, women should obey their husbands, and commoners should obey their monarchs. Wind back 10,000 years and things look very different again. Most hunter-gatherer groups thought men who got too big for their britches needed to be put in their place rather than obeyed, and lifelong monogamy could hardly be expected of men or women. Why such big systematic changes — and why these changes specifically? That's the question bestselling historian Ian Morris takes up in his book, Foragers, Farmers, and Fossil Fuels: How Human Values Evolve. Ian has spent his academic life studying long-term history, trying to explain the big-picture changes that play out over hundreds or thousands of years. Rebroadcast: this episode was originally released in July 2022. Links to learn more, highlights, and full transcript. There are a number of possible explanations one could offer for the wide-ranging shifts in opinion on the 'right' way to live. Maybe the natural sciences progressed and people realised their previous ideas were mistaken? Perhaps a few persuasive advocates turned the course of history with their revolutionary arguments? Maybe everyone just got nicer? In Foragers, Farmers and Fossil Fuels Ian presents a provocative alternative: human culture gradually evolves towards whatever system of organisation allows a society to harvest the most energy, and we then conclude that system is the most virtuous one. Egalitarian values helped hunter-gatherers hunt and gather effectively. Once farming was developed, hierarchy proved to be the social structure that produced the most grain (and best repelled nomadic raiders). And in the modern era, democracy and individuality have proven to be more productive ways to collect and exploit fossil fuels. On this theory, it's technology that drives moral values much more than moral philosophy. Individuals can try to persist with deeply held values that limit economic growth, but they risk being rendered irrelevant as more productive peers in their own society accrue wealth and power. And societies that fail to move with the times risk being conquered by more pragmatic neighbours that adapt to new technologies and grow in population and military strength. There are many objections one could raise to this theory, many of which we put to Ian in this interview. But the question is a highly consequential one: if we want to guess what goals our descendants will pursue hundreds of years from now, it would be helpful to have a theory for why our ancestors mostly thought one thing, while we mostly think another. Big though it is, the driver of human values is only one of several major questions Ian has tackled through his career. In this classic episode, we discuss all of Ian's major books. Chapters:
Producer: Keiran Harris | |||
16 Sep 2019 | Have we helped you have a bigger social impact? Our annual survey, plus other ways we can help you. | 00:03:39 | |
1. Fill out our annual impact survey here. 2. Find a great vacancy on our job board. 3. Learn about our key ideas, and get links to our top articles. 4. Join our newsletter for an email about what's new, every 2 weeks or so. 5. Or follow our pages on Facebook and Twitter. —— Once a year 80,000 Hours runs a survey to find out whether we've helped our users have a larger social impact with their life and career. We and our donors need to know whether our services, like this podcast, are helping people enough to continue them or scale them up, and it's only by hearing from you that we can make these decisions in a sensible way. So, if 80,000 Hours' podcast, job board, articles, headhunting, advising or other projects have somehow contributed to your life or career plans, please take 3–10 minutes to let us know how. You can also let us know where we've fallen short, which helps us fix problems with what we're doing. We've refreshed the survey this year, hopefully making it easier to fill out than in the past. We'll keep this appeal up for about two weeks, but if you fill it out now that means you definitely won't forget! Thanks so much, and talk to you again in a normal episode soon. — Rob | |||
08 Aug 2022 | #135 – Samuel Charap on key lessons from five months of war in Ukraine | 00:54:47 | |
After a frenetic level of commentary during February and March, the war in Ukraine has faded into the background of our news coverage. But with the benefit of time we're in a much stronger position to understand what happened, why, whether there are broader lessons to take away, and how the conflict might be ended. And the conflict appears far from over. So today, we are returning to speak a second time with Samuel Charap — one of the US’s foremost experts on Russia’s relationship with former Soviet states, and coauthor of the 2017 book Everyone Loses: The Ukraine Crisis and the Ruinous Contest for Post-Soviet Eurasia. Links to learn more, summary and full transcript. As Sam lays out, Russia controls much of Ukraine's east and south, and seems to be preparing to politically incorporate that territory into Russia itself later in the year. At the same time, Ukraine is gearing up for a counteroffensive before defensive positions become dug in over winter. Each day the war continues it takes a toll on ordinary Ukrainians, contributes to a global food shortage, and leaves the US and Russia unable to coordinate on any other issues and at an elevated risk of direct conflict. In today's brisk conversation, Rob and Sam cover the following topics: • Current territorial control and the level of attrition within Russia’s and Ukraine's military forces. Chapters:
Producer: Keiran Harris | |||
04 Feb 2025 | If digital minds could suffer, how would we ever know? (Article) | 01:14:30 | |
“I want everyone to understand that I am, in fact, a person.” Those words were produced by the AI model LaMDA as a reply to Blake Lemoine in 2022. Based on the Google engineer’s interactions with the model as it was under development, Lemoine became convinced it was sentient and worthy of moral consideration — and decided to tell the world. Few experts in machine learning, philosophy of mind, or other relevant fields have agreed. And for our part at 80,000 Hours, we don’t think it’s very likely that large language models like LaMDA are sentient — that is, we don’t think they can have good or bad experiences — in a significant way. But we think you can’t dismiss the issue of the moral status of digital minds, regardless of your beliefs about the question. There are major errors we could make in at least two directions:
And we’re currently unprepared to face this challenge. We don’t have good methods for assessing the moral status of AI systems. We don’t know what to do if millions of people or more believe, like Lemoine, that the chatbots they talk to have internal experiences and feelings of their own. We don’t know if efforts to control AI may lead to extreme suffering. We believe this is a pressing world problem. It’s hard to know what to do about it or how good the opportunities to work on it are likely to be. But there are some promising approaches. We propose building a field of research to understand digital minds, so we’ll be better able to navigate these potentially massive issues if and when they arise. This article narration by the author (Cody Fenwick) explains in more detail why we think this is a pressing problem, what we think can be done about it, and how you might pursue this work in your career. We also discuss a series of possible objections to thinking this is a pressing world problem. You can read the full article, Understanding the moral status of digital minds, on the 80,000 Hours website. Chapters:
| |||
11 Apr 2025 | Guilt, imposter syndrome & doing good: 16 past guests share their mental health journeys | 01:47:10 | |
"We are aiming for a place where we can decouple the scorecard from our worthiness. It’s of course the case that in trying to optimise the good, we will always be falling short. The question is how much, and in what ways are we not there yet? And if we then extrapolate that to how much and in what ways am I not enough, that’s where we run into trouble." —Hannah Boettcher What happens when your desire to do good starts to undermine your own wellbeing? Over the years, we’ve heard from therapists, charity directors, researchers, psychologists, and career advisors — all wrestling with how to do good without falling apart. Today’s episode brings together insights from 16 past guests on the emotional and psychological costs of pursuing a high-impact career to improve the world — and how to best navigate the all-too-common guilt, burnout, perfectionism, and imposter syndrome along the way. Check out the full transcript and links to learn more: https://80k.info/mh If you’re dealing with your own mental health concerns, here are some resources that might help:
Chapters:
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong | |||
31 Dec 2019 | #17 Classic episode - Will MacAskill on moral uncertainty, utilitarianism & how to avoid being a moral monster | 01:52:39 | |
Rebroadcast: this episode was originally released in January 2018. Immanuel Kant is a profoundly influential figure in modern philosophy, and was one of the earliest proponents for universal democracy and international cooperation. He also thought that women have no place in civil society, that it was okay to kill illegitimate children, and that there was a ranking in the moral worth of different races. Throughout history we’ve consistently believed, as common sense, truly horrifying things by today’s standards. According to University of Oxford Professor Will MacAskill, it’s extremely likely that we’re in the same boat today. If we accept that we’re probably making major moral errors, how should we proceed? • Full transcript, key points & links to articles discussed in the show. If our morality is tied to common sense intuitions, we’re probably just preserving these biases and moral errors. Instead we need to develop a moral view that criticises common sense intuitions, and gives us a chance to move beyond them. And if humanity is going to spread to the stars it could be worth dedicating hundreds or thousands of years to moral reflection, lest we spread our errors far and wide. Will is an Associate Professor in Philosophy at Oxford University, author of Doing Good Better, and one of the co-founders of the effective altruism (EA) community. In this interview we discuss a wide range of topics: • How would we go about a ‘long reflection’ to fix our moral errors? Get this episode by subscribing: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris. |