
Explore every episode of the 80,000 Hours Podcast

Browse the complete list of 80,000 Hours Podcast episodes. Each episode is catalogued with a detailed description, making it easy to search for and explore specific topics. Follow every episode of your favourite podcast and never miss relevant content.

Episodes 1–50 of 283

Date | Title | Duration
16 Dec 2019 | #67 – David Chalmers on the nature and ethics of consciousness | 04:41:50

What is it like to be you right now? You're seeing this text on the screen, smelling the coffee next to you, and feeling the warmth of the cup. There’s a lot going on in your head — your conscious experience.

Now imagine beings that are identical to humans, but for one thing: they lack this conscious experience. If you spill your coffee on them, they’ll jump like anyone else, but inside they'll feel no pain and have no thoughts: the lights are off.

The concept of these so-called 'philosophical zombies' was popularised by today’s guest — celebrated philosophy professor David Chalmers — in order to explore the nature of consciousness. In a forthcoming book he poses a classic 'trolley problem':

"Suppose you have a conscious human on one train track, and five non-conscious humanoid zombies on another. If you do nothing, a trolley will hit and kill the conscious human. If you flip a switch to redirect the trolley, you can save the conscious human, but in so doing kill the five non-conscious humanoid zombies. What should you do?"

Many people think you should divert the trolley, precisely because the lack of conscious experience means the moral status of the zombies is much reduced or absent entirely.

So, which features of consciousness qualify someone for moral consideration? One view is that the only conscious states that matter are those that have a positive or negative quality, like pleasure and suffering. But Dave’s intuitions are quite different.

Links to learn more, summary and full transcript.
Advice on how to read our advice.
Anonymous answers on: bad habits, risk and failure.

Instead of zombies he asks us to consider 'Vulcans', who can see and hear and reflect on the world around them, but are incapable of experiencing pleasure or pain.

Now imagine a further trolley problem: suppose you have a normal human on one track, and five Vulcans on the other. Should you divert the trolley to kill the five Vulcans in order to save the human?

Dave firmly believes the answer is no, and if he's right, pleasure and suffering can’t be the only things required for moral status. The fact that Vulcans are conscious in other ways must matter in itself.

Dave is one of the world's top experts on the philosophy of consciousness. He helped return the question 'what is consciousness?' to the centre stage of philosophy with his 1996 book 'The Conscious Mind', which argued against then-dominant materialist theories of consciousness. 

This comprehensive interview, at over four hours long, outlines each contemporary theory of consciousness, what they have going for them, and their likely ethical implications. Those theories span the full range from illusionism, the idea that consciousness is in some sense an 'illusion', to panpsychism, according to which it's a fundamental physical property present in all matter. 

These questions are absolutely central for anyone who wants to build a positive future. If insects were conscious our treatment of them could already be an atrocity. If computer simulations of people will one day be conscious, how will we know, and how should we treat them? And what is it about consciousness that matters, if anything? 

Dave Chalmers is probably the best person on the planet to ask these questions, and Rob & Arden cover this and much more over the course of what is both our longest ever episode, and our personal favourite so far. 
 
Chapters:

  • Rob's intro (00:00:00)
  • The interview begins (00:02:11)
  • Philosopher’s survey (00:06:37)
  • Free will (00:13:37)
  • Survey correlations (00:20:06)
  • Progress in philosophy (00:35:01)
  • Simulations (00:51:30)
  • The problem of consciousness (01:13:01)
  • Dualism and panpsychism (01:26:52)
  • Is consciousness an illusion? (01:34:52)
  • Idealism (01:43:13)
  • Integrated information theory (01:51:08)
  • Moral status and consciousness (02:06:10)
  • Higher order views of consciousness (02:11:46)
  • The views of philosophers on eating meat (02:20:23)
  • Artificial consciousness (02:34:25)
  • The zombie and vulcan trolley problems (02:38:43)
  • Illusionism and moral status (02:56:12)
  • Panpsychism and moral status (03:06:19)
  • Mind uploading (03:15:58)
  • Personal identity (03:22:51)
  • Virtual reality and the experience machine (03:28:56)
  • Singularity (03:42:44)
  • AI alignment (04:07:39)
  • Careers in academia (04:23:37)
  • Having fun disagreements (04:32:54)
  • Rob’s outro (04:42:14)


 Producer: Keiran Harris.

17 Jun 2019 | #59 – Cass Sunstein on how change happens, and why it's so often abrupt & unpredictable | 01:43:24

It can often feel hopeless to be an activist seeking social change on an obscure issue where most people seem opposed or at best indifferent to you. But according to a new book by Professor Cass Sunstein, they shouldn't despair. Large social changes are often abrupt and unexpected, arising in an environment of seeming public opposition.

The Communist Revolution in Russia spread so swiftly it confounded even Lenin. Seventy years later the Soviet Union collapsed just as quickly and unpredictably.

In the modern era we have gay marriage, #metoo and the Arab Spring, as well as nativism, Euroskepticism and Hindu nationalism.

How can a society that so recently seemed to support the status quo bring about change in years, months, or even weeks?

Sunstein — coauthor of Nudge, Obama White House official, and by far the most cited legal scholar of the late 2000s — aims to unravel the mystery and figure out the implications in his new book How Change Happens.

He pulls together three phenomena which social scientists have studied in recent decades: preference falsification, variable thresholds for action, and group polarisation. If Sunstein is to be believed, together these are a cocktail for social shifts that are chaotic and fundamentally unpredictable.

Links to learn more, summary and full transcript.
80,000 Hours Annual Review 2018.
How to donate to 80,000 Hours.

In brief, people constantly misrepresent their true views, even to close friends and family. They themselves aren't quite sure how socially acceptable their feelings would have to become, before they revealed them, or joined a campaign for social change. And a chance meeting between a few strangers can be the spark that radicalises a handful of people, who then find a message that can spread their views to millions.

According to Sunstein, it's "much, much easier" to create social change when large numbers of people secretly or latently agree with you. But 'preference falsification' is so pervasive that it's no simple matter to figure out when that's the case.

In today's interview, we debate with Sunstein whether this model of cultural change is accurate, and if so, what lessons it has for those who would like to shift the world in a more humane direction. We discuss:

• How much people misrepresent their views in democratic countries.
• Whether the finding that groups with an existing view tend towards a more extreme position would stand up in the replication crisis.
• When is it justified to encourage your own group to polarise?
• Sunstein's difficult experiences as a pioneer of animal rights law.
• Whether activists can do better by spending half their resources on public opinion surveys.
• Should people be more or less outspoken about their true views?
• What might be the next social revolution to take off?
• How can we learn about social movements that failed and disappeared?
• How to find out what people really think.

Chapters:
• Rob’s intro (00:00:00)
• Cass's Harvard lecture on How Change Happens (00:02:59)
• Rob & Cass's conversation about the book (00:41:43)

The 80,000 Hours Podcast is produced by Keiran Harris.

21 Oct 2020 | #86 – Hilary Greaves on Pascal's mugging, strong longtermism, and whether existing can be good for us | 02:24:54

Had World War 1 never happened, you might never have existed.

 It’s very unlikely that the exact chain of events that led to your conception would have happened otherwise — so perhaps you wouldn't have been born.

Would that mean that it's better for you that World War 1 happened (regardless of whether it was better for the world overall)?

On the one hand, if you're living a pretty good life, you might think the answer is yes – you get to live rather than not.

On the other hand, it sounds strange to say that it's better for you to be alive, because if you'd never existed there'd be no you to be worse off. But if you wouldn't be worse off if you hadn't existed, can you be better off because you do?

In this episode, philosophy professor Hilary Greaves – Director of Oxford University’s Global Priorities Institute – helps untangle this puzzle for us and walks me and Rob through the space of possible answers. She argues that philosophers have been too quick to conclude what she calls existence non-comparativism – i.e., that it can't be better for someone to exist vs. not.

Links to learn more, summary and full transcript.

Where we come down on this issue matters. If people are not made better off by existing and having good lives, you might conclude that bringing more people into existence isn't better for them, and thus, perhaps, that it's not better at all.

This would imply that bringing about a world in which more people live happy lives might not actually be a good thing (if the people wouldn't otherwise have existed) — which would affect how we try to make the world a better place.

Those wanting to have children in order to give them the pleasure of a good life would in some sense be mistaken. And if humanity stopped bothering to have kids and just gradually died out, we would have no particular reason to be concerned.

Furthermore it might mean we should deprioritise issues that primarily affect future generations, like climate change or the risk of humanity accidentally wiping itself out.

This is our second episode with Professor Greaves. The first one was a big hit, so we thought we'd come back and dive into even more complex ethical issues.

We discuss:

• The case for different types of ‘strong longtermism’ — the idea that we ought morally to try to make the very long run future go as well as possible
• What it means for us to be 'clueless' about the consequences of our actions
• Moral uncertainty -- what we should do when we don't know which moral theory is correct
• Whether we should take a bet on a really small probability of a really great outcome
• The field of global priorities research at the Global Priorities Institute and beyond

Chapters:

  • The interview begins (00:02:53)
  • The Case for Strong Longtermism (00:05:49)
  • Compatible moral views (00:20:03)
  • Defining cluelessness (00:39:26)
  • Why cluelessness isn’t an objection to longtermism (00:51:05)
  • Theories of what to do under moral uncertainty (01:07:42)
  • Pascal’s mugging (01:16:37)
  • Comparing Existence and Non-Existence (01:30:58)
  • Philosophers who reject existence comparativism (01:48:56)
  • Lives framework (02:01:52)
  • Global priorities research (02:09:25)


 Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris.
 Audio mastering: Ben Cordell.
 Transcriptions: Zakee Ulhaq.

20 Mar 2021 | #94 – Ezra Klein on aligning journalism, politics, and what matters most | 01:45:21

How many words in U.S. newspapers have been spilled on tax policy in the past five years? And how many words on CRISPR? Or meat alternatives? Or how AI may soon automate the majority of jobs?

When people look back on this era, is the interesting thing going to have been fights over whether or not the top marginal tax rate was 39.5% or 35.4%, or is it going to be that human beings started to take control of human evolution; that we stood on the brink of eliminating immeasurable levels of suffering on factory farms; and that for the first time the average American might become financially comfortable and unemployed simultaneously?

Today’s guest is Ezra Klein, one of the most prominent journalists in the world. Ezra thinks that pressing issues are neglected largely because there's little pre-existing infrastructure to push them.

Links to learn more, summary and full transcript.

He points out that for a long time taxes have been considered hugely important in D.C. political circles — and maybe once they were. But either way, the result is that there are a lot of congressional committees, think tanks, and experts that have focused on taxes for decades and continue to produce a steady stream of papers, articles, and opinions for journalists they know to cover (often these are journalists hired to write specifically about tax policy).

To Ezra (and to us, and to many others) AI seems obviously more important than marginal changes in taxation over the next 10 or 15 years — yet there's very little infrastructure for thinking about it. There isn't a committee in Congress that primarily deals with AI, and no one has a dedicated AI position in the executive branch of the U.S. Government; nor are big AI think tanks in D.C. producing weekly articles for journalists they know to report on.

All of this generates a strong 'path dependence' that can lock the media in to covering less important topics despite having no intention to do so.

According to Ezra, the hardest thing to do in journalism — as the leader of a publication, or even to some degree just as a writer — is to maintain your own sense of what’s important, and not just be swept along in the tide of what “the industry / the narrative / the conversation has decided is important."

One reason Ezra created the Future Perfect vertical at Vox is that as he began to learn about effective altruism, he thought: "This is a framework for thinking about importance that could offer a different lens that we could use in journalism. It could help us order things differently.”

Ezra says there is an audience for the stuff that we’d consider most important here at 80,000 Hours. It’s broadly believed that nobody will read articles on animal suffering, but Ezra says that his experience at Vox shows these stories actually do really well — and that many of the things that the effective altruist community cares a lot about are “...like catnip for readers.”

Ezra’s bottom line for fellow journalists is that if something important is happening in the world and you can't make the audience interested in it, that is your failure — never the audience's failure.

But is that really true? In today’s episode we explore that claim, as well as:

• How many hours of news the average person should consume
• Where the progressive movement is failing to live up to its values
• Why Ezra thinks 'price gouging' is a bad idea
• Where the FDA has failed on rapid at-home testing for COVID-19
• Whether we should be more worried about tail-risk scenarios
• And his biggest critiques of the effective altruism community

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Sofia Davis-Fogel.

28 May 2021 | #101 – Robert Wright on using cognitive empathy to save the world | 01:36:00

In 2003, Saddam Hussein refused to let Iraqi weapons scientists leave the country to be interrogated. Given the overwhelming domestic support for an invasion at the time, most key figures in the U.S. took that as confirmation that he had something to hide — probably an active WMD program.

But what about alternative explanations? Maybe those scientists knew about past crimes. Or maybe they’d defect. Or maybe giving in to that kind of demand would have humiliated Hussein in the eyes of enemies like Iran and Saudi Arabia.

According to today’s guest Robert Wright, host of the popular podcast The Wright Show, these are the kinds of things that might have come up if people were willing to look at things from Saddam Hussein’s perspective.

Links to learn more, summary and full transcript.

He calls this ‘cognitive empathy’. It's not feeling-your-pain-type empathy — it's just trying to understand how another person thinks.

He says if you pitched this kind of thing back in 2003 you’d be shouted down as a 'Saddam apologist' — and he thinks the same is true today when it comes to regimes in China, Russia, Iran, and North Korea.

The two Roberts in today’s episode — Bob Wright and Rob Wiblin — agree that removing this taboo against perspective taking, even with people you consider truly evil, could significantly improve discourse around international relations.

They feel that if the idea spread that it’s worth understanding what dictators are thinking and calculating, based on their country’s history and interests, we’d be less likely to make terrible foreign policy errors.

But how do you actually do that?

Bob’s new ‘Apocalypse Aversion Project’ is focused on creating the necessary conditions for solving non-zero-sum global coordination problems, something most people are already on board with.

And in particular he thinks that might come from enough individuals “transcending the psychology of tribalism”. He doesn’t just mean rage and hatred and violence, he’s also talking about cognitive biases.

Bob makes the striking claim that if enough people in the U.S. had been able to combine perspective taking with mindfulness — the ability to notice and identify thoughts as they arise — then the U.S. might have even been able to avoid the invasion of Iraq.

Rob pushes back on how realistic this approach really is, asking questions like:

• Haven’t people been trying to do this since the beginning of time?
• Is there a great novel angle that will change how a lot of people think and behave?
• Wouldn’t it be better to focus on a much narrower task, like getting more mindfulness and meditation and reflectiveness among the U.S. foreign policy elite?

But despite the differences in approaches, Bob has a lot of common ground with 80,000 Hours — and the result is a fun back-and-forth about the best ways to achieve shared goals.

Bob starts by questioning Rob about effective altruism, and they go on to cover a bunch of other topics, such as:

• Specific risks like climate change and new technologies
• How to achieve social cohesion
• The pros and cons of society-wide surveillance
• How Rob got into effective altruism

If you're interested to hear more of Bob's interviews you can subscribe to The Wright Show anywhere you're getting this one. You can also watch videos of this and all his other episodes on Bloggingheads.tv.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Sofia Davis-Fogel.

08 Nov 2024 | Parenting insights from Rob and 8 past guests | 01:35:39

With kids very much on the team's mind, we thought it would be fun to review some comments about parenting featured on the show over the years, then have hosts Luisa Rodriguez and Rob Wiblin react to them.

Links to learn more and full transcript.

After hearing 8 former guests’ insights, Luisa and Rob chat about:

  • Which of these resonate the most with Rob, now that he’s been a dad for six months (plus an update at nine months).
  • What have been the biggest surprises for Rob in becoming a parent.
  • How Rob's dealt with work and parenting tradeoffs, and his advice for other would-be parents.
  • Rob's list of recommended purchases for new or upcoming parents.

This bonus episode includes excerpts from:

  • Ezra Klein on parenting yourself as well as your children (from episode #157)
  • Holden Karnofsky on freezing embryos and being surprised by how fun it is to have a kid (#110 and #158)
  • Parenting expert Emily Oster on how having kids affects relationships, careers and kids, and what actually makes a difference in young kids’ lives (#178)
  • Russ Roberts on empirical research when deciding whether to have kids (#87)
  • Spencer Greenberg on his surveys of parents (#183)
  • Elie Hassenfeld on how having children reframes his relationship to solving pressing global problems (#153)
  • Bryan Caplan on homeschooling (#172)
  • Nita Farahany on thinking about life and the world differently with kids (#174)

Chapters:

  • Cold open (00:00:00)
  • Rob & Luisa’s intro (00:00:19)
  • Ezra Klein on parenting yourself as well as your children (00:03:34)
  • Holden Karnofsky on preparing for a kid and freezing embryos (00:07:41)
  • Emily Oster on the impact of kids on relationships (00:09:22)
  • Russ Roberts on empirical research when deciding whether to have kids (00:14:44)
  • Spencer Greenberg on parent surveys (00:23:58)
  • Elie Hassenfeld on how having children reframes his relationship to solving pressing problems (00:27:40)
  • Emily Oster on careers and kids (00:31:44)
  • Holden Karnofsky on the experience of having kids (00:38:44)
  • Bryan Caplan on homeschooling (00:40:30)
  • Emily Oster on what actually makes a difference in young kids' lives (00:46:02)
  • Nita Farahany on thinking about life and the world differently (00:51:16)
  • Rob’s first impressions of parenthood (00:52:59)
  • How Rob has changed his views about parenthood (00:58:04)
  • Can the pros and cons of parenthood be studied? (01:01:49)
  • Do people have skewed impressions of what parenthood is like? (01:09:24)
  • Work and parenting tradeoffs (01:15:26)
  • Tough decisions about screen time (01:25:11)
  • Rob’s advice to future parents (01:30:04)
  • Coda: Rob’s updated experience at nine months (01:32:09)
  • Emily Oster on her amazing nanny (01:35:01)

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

21 Nov 2024 | #208 – Elizabeth Cox on the case that TV shows, movies, and novels can improve the world | 02:22:03

"I think stories are the way we shift the Overton window — so widen the range of things that are acceptable for policy and palatable to the public. Almost by definition, a lot of things that are going to be really important and shape the future are not in the Overton window, because they sound weird and off-putting and very futuristic. But I think stories are the best way to bring them in." — Elizabeth Cox

In today’s episode, Keiran Harris speaks with Elizabeth Cox — founder of the independent production company Should We Studio — about the case that storytelling can improve the world.

Links to learn more, highlights, and full transcript.

They cover:

  • How TV shows and movies compare to novels, short stories, and creative nonfiction if you’re trying to do good.
  • The existing empirical evidence for the impact of storytelling.
  • Their competing takes on the merits of thinking carefully about target audiences.
  • Whether stories can really change minds on deeply entrenched issues, or whether writers need to have more modest goals.
  • Whether humans will stay relevant as creative writers with the rise of powerful AI models.
  • Whether you can do more good with an overtly educational show vs other approaches.
  • Elizabeth’s experience with making her new five-part animated show Ada — including why she chose the topics of civilisational collapse, kidney donations, artificial wombs, AI, and gene drives.
  • The pros and cons of animation as a medium.
  • Career advice for creative writers.
  • Keiran’s idea for a longtermist Christmas movie.
  • And plenty more.

Check out Ada on YouTube!

Material you might want to check out before listening:

Chapters:

  • Cold open (00:00:00)
  • Luisa's intro (00:01:04)
  • The interview begins (00:02:52)
  • Is storytelling really a high-impact career option? (00:03:26)
  • Empirical evidence of the impact of storytelling (00:06:51)
  • How storytelling can inform us (00:16:25)
  • How long will humans stay relevant as creative writers? (00:21:54)
  • Ada (00:33:05)
  • Debating the merits of thinking about target audiences (00:38:03)
  • Ada vs other approaches to impact-focused storytelling (00:48:18)
  • Why animation (01:01:06)
  • One Billion Christmases (01:04:54)
  • How storytelling can humanise (01:09:34)
  • But can storytelling actually change strongly held opinions? (01:13:26)
  • Novels and short stories (01:18:38)
  • Creative nonfiction (01:25:06)
  • Other promising ways of storytelling (01:30:53)
  • How did Ada actually get made? (01:33:23)
  • The hardest part of the process for Elizabeth (01:48:28)
  • Elizabeth’s hopes and dreams for Ada (01:53:10)
  • Designing Ada with an eye toward impact (01:59:16)
  • Alternative topics for Ada (02:05:33)
  • Deciding on the best way to get Ada in front of people (02:07:12)
  • Career advice for creative writers (02:11:31)
  • Wikipedia book spoilers (02:17:05)
  • Luisa's outro (02:20:42)


Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

03 Jan 2022 | #67 Classic episode – David Chalmers on the nature and ethics of consciousness | 04:42:05

Rebroadcast: this episode was originally released in December 2019.

What is it like to be you right now? You're seeing this text on the screen, smelling the coffee next to you, and feeling the warmth of the cup. There’s a lot going on in your head — your conscious experience.

Now imagine beings that are identical to humans, but for one thing: they lack this conscious experience. If you spill your coffee on them, they’ll jump like anyone else, but inside they'll feel no pain and have no thoughts: the lights are off.

The concept of these so-called 'philosophical zombies' was popularised by today’s guest — celebrated philosophy professor David Chalmers — in order to explore the nature of consciousness. In a forthcoming book he poses a classic 'trolley problem':

"Suppose you have a conscious human on one train track, and five non-conscious humanoid zombies on another. If you do nothing, a trolley will hit and kill the conscious human. If you flip a switch to redirect the trolley, you can save the conscious human, but in so doing kill the five non-conscious humanoid zombies. What should you do?"

Many people think you should divert the trolley, precisely because the lack of conscious experience means the moral status of the zombies is much reduced or absent entirely.

So, which features of consciousness qualify someone for moral consideration? One view is that the only conscious states that matter are those that have a positive or negative quality, like pleasure and suffering. But Dave’s intuitions are quite different.

Links to learn more, summary and full transcript.

Instead of zombies he asks us to consider 'Vulcans', who can see and hear and reflect on the world around them, but are incapable of experiencing pleasure or pain.

Now imagine a further trolley problem: suppose you have a normal human on one track, and five Vulcans on the other. Should you divert the trolley to kill the five Vulcans in order to save the human?

Dave firmly believes the answer is no, and if he's right, pleasure and suffering can’t be the only things required for moral status. The fact that Vulcans are conscious in other ways must matter in itself.

Dave is one of the world's top experts on the philosophy of consciousness. He helped return the question 'what is consciousness?' to the centre stage of philosophy with his 1996 book 'The Conscious Mind', which argued against then-dominant materialist theories of consciousness.

This comprehensive interview, at over four hours long, outlines each contemporary theory of consciousness, what they have going for them, and their likely ethical implications. Those theories span the full range from illusionism, the idea that consciousness is in some sense an 'illusion', to panpsychism, according to which it's a fundamental physical property present in all matter.

These questions are absolutely central for anyone who wants to build a positive future. If insects were conscious our treatment of them could already be an atrocity. If computer simulations of people will one day be conscious, how will we know, and how should we treat them? And what is it about consciousness that matters, if anything?

Dave Chalmers is probably the best person on the planet to ask these questions, and Rob & Arden cover this and much more over the course of what is both our longest ever episode, and our personal favourite so far.

Get this episode by subscribing to our show on the world’s most pressing problems and how to solve them: search for 80,000 Hours in your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.

14 Aug 2023 | #160 – Hannah Ritchie on why it makes sense to be optimistic about the environment | 02:36:42

"There's no money to invest in education elsewhere, so they almost get trapped in the cycle where they don't get a lot from crop production, but everyone in the family has to work there to just stay afloat. Basically, you get locked in. There's almost no opportunities externally to go elsewhere. So one of my core arguments is that if you're going to address global poverty, you have to increase agricultural productivity in sub-Saharan Africa. There's almost no way of avoiding that." — Hannah Ritchie

In today’s episode, host Luisa Rodriguez interviews the head of research at Our World in Data — Hannah Ritchie — on the case for environmental optimism.

Links to learn more, summary and full transcript.

They cover:

  • Why agricultural productivity in sub-Saharan Africa could be so important, and how much better things could get
  • Her new book about how we could be the first generation to build a sustainable planet
  • Whether climate change is the most worrying environmental issue
  • How we reduced outdoor air pollution
  • Why Hannah is worried about the state of biodiversity
  • Solutions that address multiple environmental issues at once
  • How the world coordinated to address the hole in the ozone layer
  • Surprises from Our World in Data’s research
  • Psychological challenges that come up in Hannah’s work
  • And plenty more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

07 Jan 2021 | #73 – Phil Trammell on patient philanthropy and waiting to do good [re-release] | 02:41:06

Rebroadcast: this episode was originally released in March 2020.

To do good, most of us look to use our time and money to affect the world around us today. But perhaps that's all wrong.

If you took $1,000 you were going to donate and instead put it in the stock market — where it grew on average 5% a year — in 100 years you'd have $125,000 to give away instead. And in 200 years you'd have $17 million.
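
As a rough sketch of the compound-growth arithmetic behind that claim (a flat 5% annual return is assumed purely for illustration, and the round figures above are approximations):

    def future_value(principal: float, annual_rate: float, years: int) -> float:
        """Value of a lump sum growing at a fixed annual rate, compounded yearly."""
        return principal * (1 + annual_rate) ** years

    # A $1,000 donation invested rather than given away immediately.
    for years in (100, 200):
        print(f"${future_value(1_000, 0.05, years):,.0f} after {years} years")
    # At exactly 5% a year this prints roughly $131,501 and $17,292,581 --
    # the same ballpark as the rounded figures quoted above.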

This astonishing fact has driven today's guest, economics researcher Philip Trammell at Oxford's Global Priorities Institute, to investigate the case for and against so-called 'patient philanthropy' in depth. If the case for patient philanthropy is as strong as Phil believes, many of us should be trying to improve the world in a very different way than we are now.

He points out that on top of being able to dispense vastly more, whenever your trustees decide to use your gift to improve the world, they'll also be able to rely on the much broader knowledge available to future generations. A donor two hundred years ago couldn't have known distributing anti-malarial bed nets was a good idea. Not only did bed nets not exist — we didn't even know about germs, and almost nothing in medicine was justified by science.

Does the COVID-19 emergency mean we should actually use resources right now? See Phil's first thoughts on this question here.

Links to learn more, summary and full transcript.
Latest version of Phil’s paper on the topic.

What similar leaps will our descendants have made in 200 years, allowing your now vast foundation to benefit more people in even greater ways?

And there's a third reason to wait as well. What are the odds that we today live at the most critical point in history, when resources happen to have the greatest ability to do good? It's possible. But the future may be very long, so there has to be a good chance that some moment in the future will be both more pivotal and more malleable than our own.

Of course, there are many objections to this proposal. If you start a foundation you hope will wait around for centuries, might it not be destroyed in a war, revolution, or financial collapse?

Or might it not drift from its original goals, eventually just serving the interest of its distant future trustees, rather than the noble pursuits you originally intended?

Or perhaps it could fail for the reverse reason, by staying true to your original vision — if that vision turns out to be as deeply morally mistaken as the Rhodes Scholarships' initial charter, which limited them to 'white Christian men'.

Alternatively, maybe the world will change in the meantime, making your gift useless. At one end, humanity might destroy itself before your trust tries to do anything with the money. Or perhaps everyone in the future will be so fabulously wealthy, or the problems of the world already so overcome, that your philanthropy will no longer be able to do much good.

Are these concerns, all of them legitimate, enough to overcome the case in favour of patient philanthropy? In today's conversation with researcher Phil Trammell and my colleague Howie Lempel, we try to answer that, and also discuss:

• Historical attempts at patient philanthropy
• Should we have a mixed strategy, where some altruists are patient and others impatient?
• Which causes most need money now?
• What is the research frontier here?
• What does this all mean for what listeners should do differently?

Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the transcript linked above.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcripts: Zakee Ulhaq.

29 Jun 2021 | #104 – Pardis Sabeti on the Sentinel system for detecting and stopping pandemics | 02:20:58

When the first person with COVID-19 went to see a doctor in Wuhan, nobody could tell that it wasn’t a familiar disease like the flu — that we were dealing with something new.

 How much death and destruction could we have avoided if we'd had a hero who could? That's what the last Assistant Secretary of Defense Andy Weber asked on the show back in March.

Today’s guest Pardis Sabeti is a professor at Harvard, fought Ebola on the ground in Africa during the 2014 outbreak, runs her own lab, co-founded a company that produces next-level testing, and is even the lead singer of a rock band. If anyone is going to be that hero in the next pandemic — it just might be her.

Links to learn more, summary and full transcript.

She is a co-author of the SENTINEL proposal, a practical system for detecting new diseases quickly, using an escalating series of three novel diagnostic techniques.

The first method, called SHERLOCK, uses CRISPR gene editing to detect familiar viruses in a simple, inexpensive filter paper test, using non-invasive samples.

If SHERLOCK draws a blank, we escalate to the second step, CARMEN, an advanced version of SHERLOCK that uses microfluidics and CRISPR to simultaneously detect hundreds of viruses and viral strains. More expensive, but far more comprehensive.

If neither SHERLOCK nor CARMEN detects a known pathogen, it's time to pull out the big gun: metagenomic sequencing. More expensive still, but sequencing all the DNA in a patient sample lets you identify and track every virus — known and unknown — in a sample.
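
Purely to illustrate that three-step escalation logic, here's a toy sketch (this is not SENTINEL software; the panels, stubs, and function names below are hypothetical placeholders):

    from typing import Optional

    # Hypothetical stand-ins for the real assays -- illustration only.
    KNOWN_VIRUSES = {"influenza A", "SARS-CoV-2"}                 # imagined SHERLOCK panel
    BROAD_PANEL = KNOWN_VIRUSES | {"Lassa virus", "Ebola virus"}  # imagined CARMEN panel

    def run_sherlock(pathogen: str) -> Optional[str]:
        return pathogen if pathogen in KNOWN_VIRUSES else None

    def run_carmen(pathogen: str) -> Optional[str]:
        return pathogen if pathogen in BROAD_PANEL else None

    def identify(pathogen: str) -> str:
        """Cheapest targeted test first, escalating only when it draws a blank."""
        return (run_sherlock(pathogen)
                or run_carmen(pathogen)
                or f"unknown agent flagged for metagenomic sequencing: {pathogen}")

    print(identify("influenza A"))    # caught at the first, cheapest step
    print(identify("mystery virus"))  # escalates all the way to sequencing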

If Pardis and her team succeed, the potential patient zero of a future pandemic may:

1. Go to the hospital with flu-like symptoms, and immediately be tested using SHERLOCK — which will come back negative
2. Take the CARMEN test for a much broader range of illnesses — which will also come back negative
3. Their sample will be sent for metagenomic sequencing, which will reveal that they're carrying a new virus we'll have to contend with
4. At all levels, information will be recorded in a cloud-based data system that shares data in real time; the hospital will be alerted and told to quarantine the patient
5. The world will be able to react weeks — or even months — faster, potentially saving millions of lives

It's a wonderful vision, and one humanity is ready to test out. But there are all sorts of practical questions, such as:

• How do you scale these technologies, including to remote and rural areas?
• Will doctors everywhere be able to operate them?
• Who will pay for it?
• How do you maintain the public’s trust and protect against misuse of sequencing data?
• How do you avoid drowning in the data the system produces?

In this conversation Pardis and Rob address all those questions, as well as:

• Pardis’ history with trying to control emerging contagious diseases
• The potential of mRNA vaccines
• Other emerging technologies
• How to best educate people about pandemics
• The pros and cons of gain-of-function research
• Turning mistakes into exercises you can learn from
• Overcoming enormous life challenges
• Why it’s so important to work with people you can laugh with
• And much more

Chapters:

  • The interview begins (00:01:40)
  • Trying to control emerging contagious diseases (00:04:36)
  • SENTINEL (00:15:31)
  • SHERLOCK (00:25:09)
  • CARMEN (00:36:32)
  • Metagenomic sequencing (00:51:53)
  • How useful these technologies could be (01:02:35)
  • How this technology could apply to the US (01:06:41)
  • Failure modes for this technology (01:18:34)
  • Funding (01:27:06)
  • mRNA vaccines (01:31:14)
  • Other emerging technologies (01:34:45)
  • Operation Outbreak (01:41:07)
  • COVID (01:49:16)
  • Gain-of-function research (01:57:34)
  • Career advice (02:01:47)
  • Overcoming big challenges (02:10:23)

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Sofia Davis-Fogel.

29 May 2024 | #189 – Rachel Glennerster on how “market shaping” could help solve climate change, pandemics, and other global problems | 02:48:51

"You can’t charge what something is worth during a pandemic. So we estimated that the value of one course of COVID vaccine in January 2021 was over $5,000. They were selling for between $6 and $40. So nothing like their social value. Now, don’t get me wrong. I don’t think that they should have charged $5,000 or $6,000. That’s not ethical. It’s also not economically efficient, because they didn’t cost $5,000 at the marginal cost. So you actually want low price, getting out to lots of people.

"But it shows you that the market is not going to reward people who do the investment in preparation for a pandemic — because when a pandemic hits, they’re not going to get the reward in line with the social value. They may even have to charge less than they would in a non-pandemic time. So prepping for a pandemic is not an efficient market strategy if I’m a firm, but it’s a very efficient strategy for society, and so we’ve got to bridge that gap." —Rachel Glennerster

In today’s episode, host Luisa Rodriguez speaks to Rachel Glennerster — associate professor of economics at the University of Chicago and a pioneer in the field of development economics — about how her team’s new Market Shaping Accelerator aims to leverage market forces to drive innovations that can solve pressing world problems.

Links to learn more, highlights, and full transcript.

They cover:

  • How market failures and misaligned incentives stifle critical innovations for social goods like pandemic preparedness, climate change interventions, and vaccine development.
  • How “pull mechanisms” like advance market commitments (AMCs) can help overcome these challenges — including concrete examples like how one AMC led to speeding up the development of three vaccines which saved around 700,000 lives in low-income countries.
  • The challenges in designing effective pull mechanisms, from design to implementation.
  • Why it’s important to tie innovation incentives to real-world impact and uptake, not just the invention of a new technology.
  • The massive benefits of accelerating vaccine development, in some cases, even if it’s only by a few days or weeks.
  • The case for a $6 billion advance market commitment to spur work on a universal COVID-19 vaccine.
  • The shortlist of ideas from the Market Shaping Accelerator’s recent Innovation Challenge that use pull mechanisms to address market failures around improving indoor air quality, repurposing generic drugs for alternative uses, and developing eco-friendly air conditioners for a warming planet.
  • “Best Buys” and “Bad Buys” for improving education systems in low- and middle-income countries, based on evidence from over 400 studies.
  • Lessons from Rachel’s career at the forefront of global development, and how insights from economics can drive transformative change.
  • And much more.

Chapters:

  • The Market Shaping Accelerator (00:03:33)
  • Pull mechanisms for innovation (00:13:10)
  • Accelerating the pneumococcal and COVID vaccines (00:19:05)
  • Advance market commitments (00:41:46)
  • Is this uncertainty hard for funders to plan around? (00:49:17)
  • The story of the malaria vaccine that wasn’t (00:57:15)
  • Challenges with designing and implementing AMCs and other pull mechanisms (01:01:40)
  • Universal COVID vaccine (01:18:14)
  • Climate-resilient crops (01:34:09)
  • The Market Shaping Accelerator’s Innovation Challenge (01:45:40)
  • Indoor air quality to reduce respiratory infections (01:49:09)
  • Repurposing generic drugs (01:55:50)
  • Clean air conditioning units (02:02:41)
  • Broad-spectrum antivirals for pandemic prevention (02:09:11)
  • Improving education in low- and middle-income countries (02:15:53)
  • What’s still weird for Rachel about living in the US? (02:45:06)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

01 Mar 2022 | Introducing 80k After Hours | 00:13:31

Today we're launching a new podcast called 80k After Hours.

Like this show, it’ll mostly still explore the best ways to do good — and some episodes will be even more laser-focused on careers than most original episodes.

But we’re also going to widen our scope, including things like how to solve pressing problems while also living a happy and fulfilling life, as well as releases that are just fun, entertaining or experimental.

It’ll feature:

  • Conversations between staff on the 80,000 Hours team
  • More eclectic formats and topics — one episode could be a structured debate about 'human challenge trials', the next a staged reading of a play about the year 2750
  • Niche content for specific audiences, such as high-school students, or active participants in the effective altruism community
  • Extras and outtakes from interviews on the original feed
  • 80,000 Hours staff interviewed on other podcasts
  • Audio versions of our new articles and research
You can find it by searching for 80k After Hours in whatever podcasting app you use, or by going to 80000hours.org/after-hours-podcast.

18 Apr 2024 | #185 – Lewis Bollard on the 7 most promising ways to end factory farming, and whether AI is going to be good or bad for animals | 02:33:12

"The constraint right now on factory farming is how far can you push the biology of these animals? But AI could remove that constraint. It could say, 'Actually, we can push them further in these ways and these ways, and they still stay alive. And we’ve modelled out every possibility and we’ve found that it works.' I think another possibility, which I don’t understand as well, is that AI could lock in current moral values. And I think in particular there’s a risk that if AI is learning from what we do as humans today, the lesson it’s going to learn is that it’s OK to tolerate mass cruelty, so long as it occurs behind closed doors. I think there’s a risk that if it learns that, then it perpetuates that value, and perhaps slows human moral progress on this issue." —Lewis Bollard

In today’s episode, host Luisa Rodriguez speaks to Lewis Bollard — director of the Farm Animal Welfare programme at Open Philanthropy — about the promising progress and future interventions to end the worst factory farming practices still around today.

Links to learn more, highlights, and full transcript.

They cover:

  • The staggering scale of animal suffering in factory farms, and how it will only get worse without intervention.
  • Work to improve farmed animal welfare that Open Philanthropy is excited about funding.
  • The amazing recent progress made in farm animal welfare — including regulatory attention in the EU and a big win at the US Supreme Court — and the work that still needs to be done.
  • The occasional tension between ending factory farming and curbing climate change
  • How AI could transform factory farming for better or worse — and Lewis’s fears that the technology will just help us maximise cruelty in the name of profit.
  • How Lewis has updated his opinions or grantmaking as a result of new research on the “moral weights” of different species.
  • Lewis’s personal journey working on farm animal welfare, and how he copes with the emotional toll of confronting the scale of animal suffering.
  • How listeners can get involved in the growing movement to end factory farming — from career and volunteer opportunities to impactful donations.
  • And much more.

Chapters:

  • Common objections to ending factory farming (00:13:21)
  • Potential solutions (00:30:55)
  • Cage-free reforms (00:34:25)
  • Broiler chicken welfare (00:46:48)
  • Do companies follow through on these commitments? (01:00:21)
  • Fish welfare (01:05:02)
  • Alternatives to animal proteins (01:16:36)
  • Farm animal welfare in Asia (01:26:00)
  • Farm animal welfare in Europe (01:30:45)
  • Animal welfare science (01:42:09)
  • Approaches Lewis is less excited about (01:52:10)
  • Will we end factory farming in our lifetimes? (01:56:36)
  • Effect of AI (01:57:59)
  • Recent big wins for farm animals (02:07:38)
  • How animal advocacy has changed since Lewis first got involved (02:15:57)
  • Response to the Moral Weight Project (02:19:52)
  • How to help (02:28:14)

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

25 Feb 2025 | #139 Classic episode – Alan Hájek on puzzles and paradoxes in probability and expected value | 03:41:31

A casino offers you a game. A coin will be tossed repeatedly until it comes up heads. If heads arrives on the first flip you win $2; if it first arrives on the second flip you win $4; on the third, $8; on the fourth, $16; and so on. How much should you be willing to pay to play?

The standard way of analysing gambling problems, ‘expected value’ — in which you multiply probabilities by the value of each outcome and then sum them up — says your expected earnings are infinite. You have a 50% chance of winning $2, for '0.5 * $2 = $1' in expected earnings. A 25% chance of winning $4, for '0.25 * $4 = $1' in expected earnings, and on and on. A never-ending series of $1s added together comes to infinity. And that's despite the fact that you know with certainty you can only ever win a finite amount!
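
To spell that sum out (a minimal illustration of the arithmetic, not anything from the episode): each term multiplies a probability of 2^-k by a payout of $2^k, so every term is worth exactly $1 and the partial sums grow without limit.

    # Each term of the St. Petersburg expected value is (1/2**k) * (2**k) = $1.
    def partial_expected_value(n_terms: int) -> float:
        return sum((0.5 ** k) * (2 ** k) for k in range(1, n_terms + 1))

    for n in (1, 2, 10, 100):
        print(n, partial_expected_value(n))   # 1.0, 2.0, 10.0, 100.0 -- no upper bound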

Today's guest — philosopher Alan Hájek of the Australian National University — thinks of much of philosophy as “the demolition of common sense followed by damage control” and is an expert on paradoxes related to probability and decision-making rules like “maximise expected value.”

Rebroadcast: this episode was originally released in October 2022.

Links to learn more, highlights, and full transcript.

The problem described above, known as the St. Petersburg paradox, has been a staple of the field since the 18th century, with many proposed solutions. In the interview, Alan explains how very natural attempts to resolve the paradox — such as factoring in the low likelihood that the casino can pay out very large sums, or the fact that money becomes less and less valuable the more of it you already have — fail to work as hoped.

We might reject the setup as a hypothetical that could never exist in the real world, and therefore of mere intellectual curiosity. But Alan doesn't find that objection persuasive. If expected value fails in extreme cases, that should make us worry that something could be rotten at the heart of the standard procedure we use to make decisions in government, business, and nonprofits.

These issues regularly show up in 80,000 Hours' efforts to try to find the best ways to improve the world, as the best approach will arguably involve long-shot attempts to do very large amounts of good.

Consider which is better: saving one life for sure, or three lives with 50% probability? Expected value says the second, which will probably strike you as reasonable enough. But what if we repeat this process and evaluate the chance to save nine lives with 25% probability, or 27 lives with 12.5% probability, or after 17 more iterations, 3,486,784,401 lives with a 0.00000009% chance. Expected value says this final offer is better than the others — 1,000 times better, in fact.

Ultimately Alan leans towards the view that our best choice is to “bite the bullet” and stick with expected value, even with its sometimes counterintuitive implications. Where we want to do damage control, we're better off looking for ways our probability estimates might be wrong.

In this conversation, originally released in October 2022, Alan and Rob explore these issues and many others:

  • Simple rules of thumb for having philosophical insights
  • A key flaw that hid in Pascal's wager from the very beginning
  • Whether we have to simply ignore infinities because they mess everything up
  • What fundamentally is 'probability'?
  • Some of the many reasons 'frequentism' doesn't work as an account of probability
  • Why the standard account of counterfactuals in philosophy is deeply flawed
  • And why counterfactuals present a fatal problem for one sort of consequentialism

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:01:05)
  • The interview begins (00:05:28)
  • Philosophical methodology (00:06:35)
  • Theories of probability (00:40:58)
  • Everyday Bayesianism (00:49:42)
  • Frequentism (01:08:37)
  • Ranges of probabilities (01:20:05)
  • Implications for how to live (01:25:05)
  • Expected value (01:30:39)
  • The St. Petersburg paradox (01:35:21)
  • Pascal’s wager (01:53:25)
  • Using expected value in everyday life (02:07:34)
  • Counterfactuals (02:20:19)
  • Most counterfactuals are false (02:56:06)
  • Relevance to objective consequentialism (03:13:28)
  • Alan’s best conference story (03:37:18)
  • Rob's outro (03:40:22)

Producer: Keiran Harris
Audio mastering: Ben Cordell and Ryan Kessler
Transcriptions: Katy Moore

22 Jun 2020 | #80 – Stuart Russell on why our approach to AI is broken and how to fix it | 02:13:17

Stuart Russell, Professor at UC Berkeley and co-author of the most popular AI textbook, thinks the way we approach machine learning today is fundamentally flawed.

In his new book, Human Compatible, he outlines the 'standard model' of AI development, in which intelligence is measured as the ability to achieve some definite, completely-known objective that we've stated explicitly. This is so obvious it almost doesn't even seem like a design choice, but it is.

Unfortunately there's a big problem with this approach: it's incredibly hard to say exactly what you want. AI today lacks common sense, and simply does whatever we've asked it to. That's true even if the goal isn't what we really want, or the methods it's choosing are ones we would never accept.

We already see AIs misbehaving for this reason. Stuart points to the example of YouTube's recommender algorithm, which reportedly nudged users towards extreme political views because that made it easier to keep them on the site. This isn't something we wanted, but it helped achieve the algorithm's objective: maximise viewing time.

Like King Midas, who asked to be able to turn everything into gold but ended up unable to eat, we get too much of what we've asked for.

Links to learn more, summary and full transcript.

This 'alignment' problem will get more and more severe as machine learning is embedded in more and more places: recommending us news, operating power grids, deciding prison sentences, doing surgery, and fighting wars. If we're ever to hand over much of the economy to thinking machines, we can't count on ourselves correctly saying exactly what we want the AI to do every time.

Stuart isn't just dissatisfied with the current model though, he has a specific solution. According to him we need to redesign AI around 3 principles:

1. The AI system's objective is to achieve what humans want.
2. But the system isn't sure what we want.
3. And it figures out what we want by observing our behaviour.
Stuart thinks this design architecture, if implemented, would be a big step forward towards reliably beneficial AI. 

For instance, a machine built on these principles would be happy to be turned off if that's what its owner thought was best, while one built on the standard model should resist being turned off because being deactivated prevents it from achieving its goal. As Stuart says, "you can't fetch the coffee if you're dead."
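
As a toy numerical illustration of that point (our own sketch, not a formulation from the episode or the book): if the machine is genuinely unsure whether an action helps or harms its owner, letting the owner veto the action never has lower expected value than acting unilaterally.

    def expected_value(p_good: float, defer_to_human: bool) -> float:
        """Toy model: the action is worth +1 to the human with probability p_good
        and -1 otherwise. A deferential machine proposes the action and accepts
        being switched off (payoff 0) whenever the human would be harmed."""
        if defer_to_human:
            return p_good * 1 + (1 - p_good) * 0   # human vetoes the bad case
        return p_good * 1 + (1 - p_good) * (-1)

    for p in (0.2, 0.5, 0.9):
        print(p, expected_value(p, defer_to_human=False), expected_value(p, defer_to_human=True))
    # Deferring is never worse (p >= 2p - 1 for every p in [0, 1]), so a machine
    # uncertain about our preferences has no incentive to resist being turned off.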

These principles lend themselves towards machines that are modest and cautious, and check in when they aren't confident they're truly achieving what we want.

We've made progress toward putting these principles into practice, but the remaining engineering problems are substantial. Among other things, the resulting AIs need to be able to interpret what people really mean to say based on the context of a situation. And they need to guess when we've rejected an option because we've considered it and decided it's a bad idea, and when we simply haven't thought about it at all.

Stuart thinks all of these problems are surmountable, if we put in the work. The harder problems may end up being social and political.

When each of us can have an AI of our own — one smarter than any person — how do we resolve conflicts between people and their AI agents? And if AIs end up doing most work that people do today, how can humans avoid becoming enfeebled, like lazy children tended to by machines, but not intellectually developed enough to know what they really want?

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:19:06)
  • Human Compatible: Artificial Intelligence and the Problem of Control (00:21:27)
  • Principles for Beneficial Machines (00:29:25)
  • AI moral rights (00:33:05)
  • Humble machines (00:39:35)
  • Learning to predict human preferences (00:45:55)
  • Animals and AI (00:49:33)
  • Enfeeblement problem (00:58:21)
  • Counterarguments (01:07:09)
  • Orthogonality thesis (01:24:25)
  • Intelligence explosion (01:29:15)
  • Policy ideas (01:38:39)
  • What most needs to be done (01:50:14)

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

26 Jul 2024 | #194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government | 03:04:18

"If you’re a power that is an island and that goes by sea, then you’re more likely to do things like valuing freedom, being democratic, being pro-foreigner, being open-minded, being interested in trade. If you are on the Mongolian steppes, then your entire mindset is kill or be killed, conquer or be conquered … the breeding ground for basically everything that all of us consider to be dystopian governance. If you want more utopian governance and less dystopian governance, then find ways to basically change the landscape, to try to make the world look more like mountains and rivers and less like the Mongolian steppes." —Vitalik Buterin

Can ‘effective accelerationists’ and AI ‘doomers’ agree on a common philosophy of technology? Common sense says no. But programmer and Ethereum cofounder Vitalik Buterin showed otherwise with his essay “My techno-optimism,” which both camps agreed was basically reasonable.

Links to learn more, highlights, video, and full transcript.

Seeing his social circle divided and fighting, Vitalik hoped to write a careful synthesis of the best ideas from both the optimists and the apprehensive.

Accelerationists are right: most technologies leave us better off, the human cost of delaying further advances can be dreadful, and centralising control in government hands often ends disastrously.

But the fearful are also right: some technologies are important exceptions, AGI has an unusually high chance of being one of those, and there are options to advance AI in safer directions.

The upshot? Defensive acceleration: humanity should run boldly but also intelligently into the future — speeding up technology to get its benefits, but preferentially developing ‘defensive’ technologies that lower systemic risks, permit safe decentralisation of power, and help both individuals and countries defend themselves against aggression and domination.

Entrepreneur First is running a defensive acceleration incubation programme with $250,000 of investment. If these ideas resonate with you, learn about the programme and apply by August 2, 2024. You don’t need a business idea yet — just the hustle to start a technology company.

In addition to all of that, host Rob Wiblin and Vitalik discuss:

  • AI regulation disagreements being less about AI in particular, and more about whether you're typically more scared of anarchy or totalitarianism.
  • Vitalik’s updated p(doom).
  • Whether the social impact of blockchain and crypto has been a disappointment.
  • Whether humans can merge with AI, and if that’s even desirable.
  • The most valuable defensive technologies to accelerate.
  • How to trustlessly identify what everyone will agree is misinformation.
  • Whether AGI is offence-dominant or defence-dominant.
  • Vitalik’s updated take on effective altruism.
  • Plenty more.

Chapters:

  • Cold open (00:00:00)
  • Rob’s intro (00:00:56)
  • The interview begins (00:04:47)
  • Three different views on technology (00:05:46)
  • Vitalik’s updated probability of doom (00:09:25)
  • Technology is amazing, and AI is fundamentally different from other tech (00:15:55)
  • Fear of totalitarianism and finding middle ground (00:22:44)
  • Should AI be more centralised or more decentralised? (00:42:20)
  • Humans merging with AIs to remain relevant (01:06:59)
  • Vitalik’s “d/acc” alternative (01:18:48)
  • Biodefence (01:24:01)
  • Pushback on Vitalik’s vision (01:37:09)
  • How much do people actually disagree? (01:42:14)
  • Cybersecurity (01:47:28)
  • Information defence (02:01:44)
  • Is AI more offence-dominant or defence-dominant? (02:21:00)
  • How Vitalik communicates among different camps (02:25:44)
  • Blockchain applications with social impact (02:34:37)
  • Rob’s outro (03:01:00)

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore

02 Jun 2023#153 – Elie Hassenfeld on 2 big picture critiques of GiveWell's approach, and 6 lessons from their recent work02:56:10

GiveWell is one of the world's best-known charity evaluators, with the goal of "searching for the charities that save or improve lives the most per dollar." It mostly recommends projects that help the world's poorest people avoid easily prevented health problems, like intestinal worms or vitamin A deficiency.

But should GiveWell, as some critics argue, take a totally different approach to its search, focusing instead on directly increasing subjective wellbeing, or alternatively, raising economic growth?

Today's guest — cofounder and CEO of GiveWell, Elie Hassenfeld — is proud of how much GiveWell has grown in the last five years. Its 'money moved' has quadrupled to around $600 million a year.

Its research team has also more than doubled, enabling them to investigate a far broader range of interventions that could plausibly help people an enormous amount for each dollar spent. That work has led GiveWell to support dozens of new organisations, such as Kangaroo Mother Care, MiracleFeet, and Dispensers for Safe Water.

But some other researchers focused on figuring out the best ways to help the world's poorest people say GiveWell shouldn't just do more of the same thing, but rather ought to look at the problem differently.

Links to learn more, summary and full transcript.

Currently, GiveWell uses a range of metrics to track the impact of the organisations it considers recommending — such as 'lives saved,' 'household incomes doubled,' and for health improvements, the 'quality-adjusted life year.' 

The Happier Lives Institute (HLI) has argued that instead, GiveWell should try to cash out the impact of all interventions in terms of improvements in subjective wellbeing. This philosophy has led HLI to be more sceptical of interventions that have been demonstrated to improve health, but whose impact on wellbeing has not been measured, and to give a high priority to improving lives relative to extending them.

An alternative high-level critique is that really all that matters in the long run is getting the economies of poor countries to grow. On this view, GiveWell should focus on figuring out what causes some countries to experience explosive economic growth while others fail to, or even go backwards. Even modest improvements in the chances of such a 'growth miracle' will likely offer a bigger bang-for-buck than funding the incremental delivery of deworming tablets or vitamin A supplements, or anything else.

Elie sees where both of these critiques are coming from, and notes that they've influenced GiveWell's work in some ways. But as he explains, he thinks they underestimate the practical difficulty of successfully pulling off either approach and finding better opportunities than what GiveWell funds today. 

In today's in-depth conversation, Elie and host Rob Wiblin cover the above, as well as:

  • Why GiveWell flipped from not recommending chlorine dispensers as an intervention for safe drinking water to spending tens of millions of dollars on them
  • What transferable lessons GiveWell learned from investigating different kinds of interventions
  • Why the best treatment for premature babies in low-resource settings may involve less rather than more medicine
  • Severe malnourishment among children and what can be done about it
  • How to deal with hidden and non-obvious costs of a programme
  • Some cheap early treatments that can prevent kids from developing lifelong disabilities
  • The various roles GiveWell is currently hiring for, and what's distinctive about their organisational culture
  • And much more.

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:03:14)
  • GiveWell over the last couple of years (00:04:33)
  • Dispensers for Safe Water (00:11:52)
  • Syphilis diagnosis for pregnant women via technical assistance (00:30:39)
  • Kangaroo Mother Care (00:48:47)
  • Multiples of cash (01:01:20)
  • Hidden costs (01:05:41)
  • MiracleFeet (01:09:45)
  • Serious malnourishment among young children (01:22:46)
  • Vitamin A deficiency and supplementation (01:40:42)
  • The subjective wellbeing approach in contrast with GiveWell's approach (01:46:31)
  • The value of saving a life when that life is going to be very difficult (02:09:09)
  • Whether economic policy is what really matters overwhelmingly (02:20:00)
  • Careers at GiveWell (02:39:10)
  • Donations (02:48:58)
  • Parenthood (02:50:29)
  • Rob’s outro (02:55:05)

Producer: Keiran Harris

Audio mastering: Simon Monsour and Ben Cordell

Transcriptions: Katy Moore

21 Jan 2021#90 – Ajeya Cotra on worldview diversification and how big the future could be02:59:05

You wake up in a mysterious box, and hear the booming voice of God:

“I just flipped a coin. If it came up heads, I made ten boxes, labeled 1 through 10 — each of which has a human in it.

If it came up tails, I made ten billion boxes, labeled 1 through 10 billion — also with one human in each box.

To get into heaven, you have to answer this correctly: Which way did the coin land?”

You think briefly, and decide you should bet your eternal soul on tails. The fact that you woke up at all seems like pretty good evidence that you’re in the big world — if the coin landed tails, way more people should be having an experience just like yours.

But then you get up, walk outside, and look at the number on your box.

‘3’. Huh. Now you don’t know what to believe.

If God made 10 billion boxes, surely it's much more likely that you would have seen a number like 7,346,678,928?
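
For readers who want to see the arithmetic, here is a minimal sketch of one standard way to formalise the puzzle, assuming the 'count every observer' (self-indication) style of reasoning that the waking-up intuition gestures at. The numbers are just the ones from the thought experiment, not anything from the interview.

```python
# A minimal sketch of the Bayesian bookkeeping in the box puzzle, assuming
# 'count every observer' (self-indication) reasoning. Purely illustrative.
HEADS_BOXES = 10
TAILS_BOXES = 10_000_000_000

# Step 1: "I woke up at all." Weight each world by how many people like you
# it contains, so the big world gets a billion-fold boost over the small one.
w_heads = 0.5 * HEADS_BOXES
w_tails = 0.5 * TAILS_BOXES
print(w_tails / (w_heads + w_tails))   # ≈ 0.999999999: bet on tails

# Step 2: "My box says 3." Each world has exactly one box labelled 3, so the
# chance of seeing it is 1/10 on heads but 1/10,000,000,000 on tails.
p_heads = w_heads * (1 / HEADS_BOXES)
p_tails = w_tails * (1 / TAILS_BOXES)
print(p_tails / (p_heads + p_tails))   # = 0.5: the two updates exactly cancel
```

On this way of doing the accounting, waking up at all and then seeing a low number pull in opposite directions with exactly equal force, which is why you end up not knowing what to believe.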

In today's interview, Ajeya Cotra — a senior research analyst at Open Philanthropy — explains why this thought experiment from the niche of philosophy known as 'anthropic reasoning' could be relevant for figuring out where we should direct our charitable giving.

Links to learn more, summary and full transcript.

Some thinkers both inside and outside Open Philanthropy believe that philanthropic giving should be guided by 'longtermism' — the idea that we can do the most good if we focus primarily on the impact our actions will have on the long-term future.

Ajeya thinks that for that notion to make sense, there needs to be a good chance we can settle other planets and solar systems and build a society that's both very large relative to what's possible on Earth and, by virtue of being so spread out, able to protect itself from extinction for a very long time.

But imagine that humanity has two possible futures ahead of it: Either we’re going to have a huge future like that, in which trillions of people ultimately exist, or we’re going to wipe ourselves out quite soon, thereby ensuring that only around 100 billion people ever get to live.

If there are eventually going to be 1,000 trillion humans, what should we think of the fact that we seemingly find ourselves so early in history? Being among the first 100 billion humans, as we are, is equivalent to walking outside and seeing a three on your box. Suspicious! If the future will have many trillions of people, the odds of us appearing so strangely early are very low indeed.

If we accept the analogy, maybe we can be confident that humanity is at a high risk of extinction based on this so-called 'doomsday argument' alone.

If that’s true, maybe we should put more of our resources into avoiding apparent extinction threats like nuclear war and pandemics. But on the other hand, maybe the argument shows we're incredibly unlikely to achieve a long and stable future no matter what we do, and we should forget the long term and just focus on the here and now instead.

There are many critics of this theoretical ‘doomsday argument’, and it may be the case that it logically doesn't work. This is why Ajeya spent time investigating it, with the goal of ultimately making better philanthropic grants.

In this conversation, Ajeya and Rob discuss both the doomsday argument and the challenge Open Phil faces striking a balance between taking big ideas seriously, and not going all in on philosophical arguments that may turn out to be barking up the wrong tree entirely.

They also discuss:

• Which worldviews Open Phil finds most plausible, and how it balances them
• How hard it is to get to other solar systems
• The 'simulation argument'
• When transformative AI might actually arrive
• And much more

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Sofia Davis-Fogel.

12 Oct 2023#166 – Tantum Collins on what he’s learned as an AI policy insider at the White House, DeepMind and elsewhere03:08:49

"If you and I and 100 other people were on the first ship that was going to go settle Mars, and were going to build a human civilisation, and we have to decide what that government looks like, and we have all of the technology available today, how do we think about choosing a subset of that design space?

That space is huge and it includes absolutely awful things, and mixed-bag things, and maybe some things that almost everyone would agree are really wonderful, or at least an improvement on the way that things work today. But that raises all kinds of tricky questions.

My concern is that if we don't approach the evolution of collective decision making and government in a deliberate way, we may end up inadvertently backing ourselves into a corner, where we have ended up on some slippery slope -- and all of a sudden, let's say, autocracies on the global stage are strengthened relative to democracies." — Tantum Collins

In today’s episode, host Rob Wiblin gets the rare chance to interview someone with insider AI policy experience at the White House and DeepMind who’s willing to speak openly — Tantum Collins.

Links to learn more, highlights, and full transcript.

They cover:

  • How AI could strengthen government capacity, and how that's a double-edged sword
  • How new technologies force us to confront tradeoffs in political philosophy that we were previously able to pretend weren't there
  • To what extent policymakers take different threats from AI seriously
  • Whether the US and China are in an AI arms race or not
  • Whether it's OK to transform the world without much of the world agreeing to it
  • The tyranny of small differences in AI policy
  • Disagreements between different schools of thought in AI policy, and proposals that could unite them
  • How the US AI Bill of Rights could be improved
  • Whether AI will transform the labour market, and whether it will become a partisan political issue
  • The tensions between the cultures of San Francisco and DC, and how to bridge the divide between them
  • What listeners might be able to do to help with this whole mess
  • Panpsychism
  • Plenty more

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:01:00)
  • The interview begins (00:04:01)
  • The risk of autocratic lock-in due to AI (00:10:02)
  • The state of play in AI policymaking (00:13:40)
  • China and AI (00:32:12)
  • The most promising regulatory approaches (00:57:51)
  • Transforming the world without the world agreeing (01:04:44)
  • AI Bill of Rights (01:17:32)
  • Who’s ultimately responsible for the consequences of AI? (01:20:39)
  • Policy ideas that could appeal to many different groups (01:29:08)
  • Tension between those focused on x-risk and those focused on AI ethics (01:38:56)
  • Communicating with policymakers (01:54:22)
  • Is AI going to transform the labour market in the next few years? (01:58:51)
  • Is AI policy going to become a partisan political issue? (02:08:10)
  • The value of political philosophy (02:10:53)
  • Tantum’s work at DeepMind (02:21:20)
  • CSET (02:32:48)
  • Career advice (02:35:21)
  • Panpsychism (02:55:24)


Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

14 Feb 2025#212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway02:44:07

Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through.

That’s how today’s guest Allan Dafoe — director of frontier safety and governance at Google DeepMind — explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don’t. Those who resist too much can find themselves taken over or rendered irrelevant.

Links to learn more, highlights, video, and full transcript.

This dynamic played out dramatically in 1853 when US Commodore Perry sailed into Tokyo Bay with steam-powered warships that seemed magical to the Japanese, who had spent centuries deliberately limiting their technological development. With far greater military power, the US was able to force Japan to open itself to trade. Within 15 years, Japan had undergone the Meiji Restoration and transformed itself in a desperate scramble to catch up.

Today we see hints of similar pressure around artificial intelligence. Even companies, countries, and researchers deeply concerned about where AI could take us feel compelled to push ahead — worried that if they don’t, less careful actors will develop transformative AI capabilities at around the same time anyway.

But Allan argues this technological determinism isn’t absolute. While broad patterns may be inevitable, history shows we do have some ability to steer how technologies are developed, by who, and what they’re used for first.

As part of that approach, Allan has been promoting efforts to make AI more capable of sophisticated cooperation, and improving the tests Google uses to measure how well its models could do things like mislead people, hack and take control of their own servers, or spread autonomously in the wild.

As of mid-2024 they didn’t seem dangerous at all, but we’ve learned that our ability to measure these capabilities is good but imperfect. If we don’t find the right way to ‘elicit’ an ability, we can miss that it’s there.

Subsequent research from Anthropic and Redwood Research suggests there’s even a risk that future models may play dumb to avoid their goals being altered.

That has led DeepMind to a “defence in depth” approach: carefully staged deployment starting with internal testing, then trusted external testers, then limited release, then watching how models are used in the real world. By not releasing model weights, DeepMind is able to back up and add additional safeguards if experience shows they’re necessary.

But with much more powerful and general models on the way, individual company policies won’t be sufficient by themselves. Drawing on his academic research into how societies handle transformative technologies, Allan argues we need coordinated international governance that balances safety with our desire to get the massive potential benefits of AI in areas like healthcare and education as quickly as possible.

Host Rob and Allan also cover:

  • The most exciting beneficial applications of AI
  • Whether and how we can influence the development of technology
  • What DeepMind is doing to evaluate and mitigate risks from frontier AI systems
  • Why cooperative AI may be as important as aligned AI
  • The role of democratic input in AI governance
  • What kinds of experts are most needed in AI safety and governance
  • And much more

Chapters:

  • Cold open (00:00:00)
  • Who's Allan Dafoe? (00:00:48)
  • Allan's role at DeepMind (00:01:27)
  • Why join DeepMind over everyone else? (00:04:27)
  • Do humans control technological change? (00:09:17)
  • Arguments for technological determinism (00:20:24)
  • The synthesis of agency with tech determinism (00:26:29)
  • Competition took away Japan's choice (00:37:13)
  • Can speeding up one tech redirect history? (00:42:09)
  • Structural pushback against alignment efforts (00:47:55)
  • Do AIs need to be 'cooperatively skilled'? (00:52:25)
  • How AI could boost cooperation between people and states (01:01:59)
  • The super-cooperative AGI hypothesis and backdoor risks (01:06:58)
  • Aren’t today’s models already very cooperative? (01:13:22)
  • How would we make AIs cooperative anyway? (01:16:22)
  • Ways making AI more cooperative could backfire (01:22:24)
  • AGI is an essential idea we should define well (01:30:16)
  • It matters what AGI learns first vs last (01:41:01)
  • How Google tests for dangerous capabilities (01:45:39)
  • Evals 'in the wild' (01:57:46)
  • What to do given no single approach works that well (02:01:44)
  • We don't, but could, forecast AI capabilities (02:05:34)
  • DeepMind's strategy for ensuring its frontier models don't cause harm (02:11:25)
  • How 'structural risks' can force everyone into a worse world (02:15:01)
  • Is AI being built democratically? Should it? (02:19:35)
  • How much do AI companies really want external regulation? (02:24:34)
  • Social science can contribute a lot here (02:33:21)
  • How AI could make life way better: self-driving cars, medicine, education, and sustainability (02:35:55)

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions: Katy Moore

19 Mar 2019#54 – OpenAI on publication norms, malicious uses of AI, and general-purpose learning algorithms02:53:40

OpenAI’s Dactyl is an AI system that can manipulate objects with a human-like robot hand. OpenAI Five is an AI system that can defeat humans at the video game Dota 2. The strange thing is they were both developed using the same general-purpose reinforcement learning algorithm.

How is this possible and what does it show?

In today's interview Jack Clark, Policy Director at OpenAI, explains that from a computational perspective using a hand and playing Dota 2 are remarkably similar problems.

A robot hand needs to hold an object, move its fingers, and rotate it to the desired position. In Dota 2 you control a team of several different people, moving them around a map to attack an enemy.

Your hand has 20 or 30 different joints to move. The number of main actions in Dota 2 is 10 to 20, as you move your characters around a map.

When you’re rotating an object in your hand, you sense its friction, but you don’t directly perceive its entire shape. In Dota 2, you similarly can't see the entire map; you perceive what's there by moving around – metaphorically 'touching' the space.

Read our new in-depth article on becoming an AI policy specialist: The case for building expertise to work on US AI policy, and how to do it

Links to learn more, summary and full transcript

This is true of many apparently distinct problems in life: with the right general-purpose software, very different sensory inputs can be compressed down to a fundamental computational problem we already know how to solve.

The creation of such increasingly 'broad-spectrum' learning algorithms has been a key story of the last few years, and this development is likely to have unpredictable consequences, heightening the huge challenges that already exist in AI policy.

Today’s interview is a mega-AI-policy-quad episode; Jack is joined by his colleagues Amanda Askell and Miles Brundage, on the day they released their fascinating and controversial large general language model GPT-2.

We discuss:

• What are the most significant changes in the AI policy world over the last year or two?
• What capabilities are likely to develop over the next five, 10, 15, 20 years?
• How much should we focus on the next couple of years, versus the next couple of decades?
• How should we approach possible malicious uses of AI?
• What are some of the potential ways OpenAI could make things worse, and how can they be avoided?
• Publication norms for AI research
• Where do we stand in terms of arms races between countries or different AI labs?
• The case for creating newsletters
• Should the AI community have a closer relationship to the military?
• Working at OpenAI vs. working in the US government
• How valuable is Twitter in the AI policy world?

Rob is then joined by two of his colleagues – Niel Bowerman & Michelle Hutchinson – to quickly discuss:

• The reaction to OpenAI's release of GPT-2
• Jack’s critique of our US AI policy article
• How valuable are roles in government?
• Where do you start if you want to write content for a specific audience?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

23 Oct 2023#168 – Ian Morris on whether deep history says we're heading for an intelligence explosion02:43:55

"If we carry on looking at these industrialised economies, not thinking about what it is they're actually doing and what the potential of this is, you can make an argument that, yes, rates of growth are slowing, the rate of innovation is slowing. But it isn't.

What we're doing is creating wildly new technologies: basically producing what is nothing less than an evolutionary change in what it means to be a human being. But this has not yet spilled over into the kind of growth that we have accustomed ourselves to in the fossil-fuel industrial era. That is about to hit us in a big way." — Ian Morris

In today’s episode, host Rob Wiblin speaks with repeat guest Ian Morris about what big-picture history says about the likely impact of machine intelligence.

Links to learn more, summary and full transcript.

They cover:

  • Some crazy anomalies in the historical record of civilisational progress
  • Whether we should think about technology from an evolutionary perspective
  • Whether we ought to expect war to make a resurgence or continue dying out
  • Why we can't end up living like The Jetsons
  • Whether stagnation or cyclical recurring futures seem very plausible
  • What it means that the rate of increase in the economy has been increasing
  • Whether violence is likely between humans and powerful AI systems
  • The most likely reasons for Rob and Ian to be really wrong about all of this
  • How professional historians react to this sort of talk
  • The future of Ian’s work
  • Plenty more

Chapters:

  • Cold open (00:00:00)
  • Rob’s intro (00:01:27)
  • Why we should expect the future to be wild (00:04:08)
  • How historians have reacted to the idea of radically different futures (00:21:20)
  • Why we won’t end up in The Jetsons (00:26:20)
  • The rise of machine intelligence (00:31:28)
  • AI from an evolutionary point of view (00:46:32)
  • Is violence likely between humans and powerful AI systems? (00:59:53)
  • Most troubling objections to this approach in Ian’s view (01:28:20)
  • Confronting anomalies in the historical record (01:33:10)
  • The cyclical view of history (01:56:11)
  • Is stagnation plausible? (02:01:38)
  • The limit on how long this growth trend can continue (02:20:57)
  • The future of Ian’s work (02:37:17)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore

03 Apr 2023#148 – Johannes Ackva on unfashionable climate interventions that work, and fashionable ones that don't02:17:28

If you want to work to tackle climate change, you should try to reduce expected carbon emissions by as much as possible, right? Strangely, no.

Today's guest, Johannes Ackva — the climate research lead at Founders Pledge, where he advises major philanthropists on their giving — thinks the best strategy is actually pretty different, and one few are adopting.

In reality you don't want to reduce emissions for their own sake, but because emissions translate into temperature increases, which cause harm to people and the environment.

Links to learn more, summary and full transcript.

Crucially, the relationship between emissions and harm goes up faster than linearly. As Johannes explains, humanity can handle small deviations from the temperatures we're familiar with, but adjustment gets harder the larger and faster the increase, making the damage done by each additional degree of warming much greater than the damage done by the previous one.

In short: we're uncertain what the future holds and really need to avoid the worst-case scenarios. This means that avoiding an additional tonne of carbon being emitted in a hypothetical future in which emissions have been high is much more important than avoiding a tonne of carbon in a low-carbon world.
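
As a purely illustrative sketch of that convexity point (the cubic damage curve and all the numbers below are invented for illustration, not climate estimates), the same small increment of warming does far more damage in an already-hot world:

```python
# Illustrative only: a made-up convex damage curve, to show the shape of the
# argument rather than any real climate estimate.
def damage(warming_c: float) -> float:
    return warming_c ** 3   # convex: each extra degree hurts more than the last

extra = 0.1  # the same small increment of additional warming in both worlds
for world, warming in [("low-emissions world", 1.5), ("high-emissions world", 4.0)]:
    marginal = damage(warming + extra) - damage(warming)
    print(f"{world}: extra damage ≈ {marginal:.2f}")
# Prints ~0.72 for the cool world and ~4.92 for the hot one: roughly 7x more
# damage from the same increment, which is why insuring against high-emissions
# futures is where each avoided tonne of carbon buys the most.
```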

That may be, but concretely, how should that affect our behaviour? Well, the future scenarios in which emissions are highest are all ones in which the clean energy technologies that can make a big difference — wind, solar, and electric cars — don't succeed nearly as much as we are currently hoping and expecting. For one reason or another, they must have hit a roadblock and we continued to burn a lot of fossil fuels.

In such a scenario, we can ask what we would wish we had funded today. How could we buy insurance now against the possible disaster that renewables don't work out?

Basically, in that case we will wish that we had pursued a portfolio of other energy technologies that could have complemented renewables or succeeded where they failed, such as hot rock geothermal, modular nuclear reactors, or carbon capture and storage.

If you're optimistic about renewables, as Johannes is, then that's all the more reason to relax about the scenarios where they work as planned, and to focus your efforts on the possibility that they don't.

And Johannes notes that the most useful thing someone can do today to reduce global emissions in the future is to cause some clean energy technology to exist where it otherwise wouldn't, or cause it to become cheaper more quickly. If you can do that, then you can indirectly affect the behaviour of people all around the world for decades or centuries to come.

In today's extensive interview, host Rob Wiblin and Johannes discuss the above considerations, as well as:

• Retooling newly built coal plants in the developing world
• Specific clean energy technologies like geothermal and nuclear fusion
• Possible biases among environmentalists and climate philanthropists
• How climate change compares to other risks to humanity
• In what kinds of scenarios future emissions would be highest
• In what regions climate philanthropy is most concentrated and whether that makes sense
• Attempts to decarbonise aviation, shipping, and industrial processes
• The impact of funding advocacy vs science vs deployment
• Lessons for climate change focused careers
• And plenty more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Katy Moore

28 Aug 2020Global issues beyond 80,000 Hours’ current priorities (Article)00:32:54

Today’s release is the latest in our series of audio versions of our articles.

In this one, we go through 30 global issues beyond the ones we usually prioritize most highly in our work, and that you might consider focusing your career on tackling.

Although we spend the majority of our time at 80,000 Hours on our highest priority problem areas, and we recommend working on them to many of our readers, these are just the most promising issues among those we’ve spent time investigating. There are many other global issues that we haven’t properly investigated, and which might be very promising for more people to work on.

In fact, we think working on some of the issues in this article could be as high-impact for some people as working on our priority problem areas — though we haven’t looked into them enough to be confident.

If you want to check out the links in today’s article, you can find those here.

Our annual user survey is also now open for submissions.

Once a year, for two weeks, we ask all of you (our podcast listeners, article readers, advice receivers, and so on) to let us know how we've helped or hurt you.

80,000 Hours now offers many different services, and your feedback helps us figure out which programs to keep, which to cut, and which to expand.

This year we have a new section covering the podcast, asking what kinds of episodes you liked the most and want to see more of, what extra resources you use, and some other questions too.

We're always especially interested to hear ways that our work has influenced what you plan to do with your life or career, whether that impact was positive, neutral, or negative.

That might be a different focus in your existing job, or a decision to study something different or look for a new job. Alternatively, maybe you're now planning to volunteer somewhere, or donate more, or donate to a different organisation.

Your responses to the survey will be carefully read as part of our upcoming annual review, and we'll use them to help decide what 80,000 Hours should do differently next year.

So please do take a moment to fill out the user survey.

You can find it at 80000hours.org/survey

Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

07 Feb 2025#124 Classic episode – Karen Levy on fads and misaligned incentives in global development, and scaling deworming to reach hundreds of millions03:10:21

If someone said a global health and development programme was sustainable, participatory, and holistic, you'd have to guess that they were saying something positive. But according to today's guest Karen Levy — deworming pioneer and veteran of Innovations for Poverty Action, Evidence Action, and Y Combinator — each of those three concepts has become so fashionable that they're at risk of being seriously overrated and applied where they don't belong.

Rebroadcast: this episode was originally released in March 2022.

Links to learn more, highlights, and full transcript.

Such concepts might even cause harm — trying to make a project embody all three is as likely to ruin it as help it flourish.

First, what do people mean by 'sustainability'? Usually they mean something like the programme will eventually be able to continue without needing further financial support from the donor. But how is that possible? Governments, nonprofits, and aid agencies aim to provide health services, education, infrastructure, financial services, and so on — and all of these require ongoing funding to pay for materials and staff to keep them running.

Given that someone needs to keep paying, Karen tells us that in practice, 'sustainability' is usually a euphemism for the programme at some point being passed on to someone else to fund — usually the national government. And while that can be fine, the national government of Kenya only spends $400 per person to provide each and every government service — just 2% of what the US spends on each resident. Incredibly tight budgets like that are typical of low-income countries.

'Participatory' also sounds nice, and inasmuch as it means leaders are accountable to the people they're trying to help, it probably is. But Karen tells us that in the field, ‘participatory’ usually means that recipients are expected to be involved in planning and delivering services themselves.

While that might be suitable in some situations, it's hardly something people in rich countries always want for themselves. Ideally we want government healthcare and education to be high quality without us having to attend meetings to keep it on track — and people in poor countries have as many or more pressures on their time. While accountability is desirable, an expectation of participation can be as much a burden as a blessing.

Finally, making a programme 'holistic' could be smart, but as Karen lays out, it also has some major downsides. For one, it means you're doing lots of things at once, which makes it hard to tell which parts of the project are making the biggest difference relative to their cost. For another, when you have a lot of goals at once, it's hard to tell whether you're making progress, or really put your mind to focusing on making one thing go extremely well. And finally, holistic programmes can be impractically expensive — Karen tells the story of a wonderful 'holistic school health' programme that, if continued, was going to cost 3.5 times the entire school's budget.

In this in-depth conversation, originally released in March 2022, Karen Levy and host Rob Wiblin chat about the above, as well as:

  • Why it pays to figure out how you'll interpret the results of an experiment ahead of time
  • The trouble with misaligned incentives within the development industry
  • Projects that don't deliver value for money and should be scaled down
  • How Karen accidentally became a leading figure in the push to deworm tens of millions of schoolchildren
  • Logistical challenges in reaching huge numbers of people with essential services
  • Lessons from Karen's many-decades career
  • And much more

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:01:33)
  • The interview begins (00:02:21)
  • Funding for effective altruist–mentality development projects (00:04:59)
  • Pre-policy plans (00:08:36)
  • ‘Sustainability’, and other myths in typical international development practice (00:21:37)
  • ‘Participatoriness’ (00:36:20)
  • ‘Holistic approaches’ (00:40:20)
  • How the development industry sees evidence-based development (00:51:31)
  • Initiatives in Africa that should be significantly curtailed (00:56:30)
  • Misaligned incentives within the development industry (01:05:46)
  • Deworming: the early days (01:21:09)
  • The problem of deworming (01:34:27)
  • Deworm the World (01:45:43)
  • Where the majority of the work was happening (01:55:38)
  • Logistical issues (02:20:41)
  • The importance of a theory of change (02:31:46)
  • Ways that things have changed since 2006 (02:36:07)
  • Academic work vs policy work (02:38:33)
  • Fit for Purpose (02:43:40)
  • Living in Kenya (03:00:32)
  • Underrated life advice (03:05:29)
  • Rob’s outro (03:09:18)

Producer: Keiran Harris
Audio mastering: Ben Cordell and Ryan Kessler
Transcriptions: Katy Moore

27 Dec 2021#59 Classic episode - Cass Sunstein on how change happens, and why it's so often abrupt & unpredictable01:43:05

Rebroadcast: this episode was originally released in June 2019.

It can often feel hopeless to be an activist seeking social change on an obscure issue where most people seem opposed or at best indifferent to you. But according to a new book by Professor Cass Sunstein, they shouldn't despair. Large social changes are often abrupt and unexpected, arising in an environment of seeming public opposition.

The Communist Revolution in Russia spread so swiftly it confounded even Lenin. Seventy years later the Soviet Union collapsed just as quickly and unpredictably.

In the modern era we have gay marriage, #metoo and the Arab Spring, as well as nativism, Euroskepticism and Hindu nationalism.

How can a society that so recently seemed to support the status quo bring about change in years, months, or even weeks?

Sunstein — co-author of Nudge, Obama White House official, and by far the most cited legal scholar of the late 2000s — aims to unravel the mystery and figure out the implications in his new book How Change Happens.

He pulls together three phenomena which social scientists have studied in recent decades: preference falsification, variable thresholds for action, and group polarisation. If Sunstein is to be believed, together these are a cocktail for social shifts that are chaotic and fundamentally unpredictable.

Links to learn more, summary and full transcript.

In brief, people constantly misrepresent their true views, even to close friends and family. They themselves aren't quite sure how socially acceptable their feelings would have to become, before they revealed them, or joined a campaign for social change. And a chance meeting between a few strangers can be the spark that radicalises a handful of people, who then find a message that can spread their views to millions.

According to Sunstein, it's "much, much easier" to create social change when large numbers of people secretly or latently agree with you. But 'preference falsification' is so pervasive that it's no simple matter to figure out when that's the case.

In today's interview, we debate with Sunstein whether this model of cultural change is accurate, and if so, what lessons it has for those who would like to shift the world in a more humane direction. We discuss:

• How much people misrepresent their views in democratic countries.
• Whether the finding that groups with an existing view tend towards a more extreme position would stand up in the replication crisis.
• When is it justified to encourage your own group to polarise?
• Sunstein's difficult experiences as a pioneer of animal rights law.
• Whether activists can do better by spending half their resources on public opinion surveys.
• Should people be more or less outspoken about their true views?
• What might be the next social revolution to take off?
• How can we learn about social movements that failed and disappeared?
• How to find out what people really think.

Get this episode by subscribing to our podcast on the world’s most pressing problems: type 80,000 Hours into your podcasting app. Or read the transcript on our site.

The 80,000 Hours Podcast is produced by Keiran Harris.

27 Feb 2019#53 - Kelsey Piper on the room for important advocacy within journalism02:34:31

“Politics. Business. Opinion. Science. Sports. Animal welfare. Existential risk.” Is this a plausible future lineup for major news outlets?

Funded by the Rockefeller Foundation and given very little editorial direction, Vox's Future Perfect aspires to be more or less that.

Competition in the news business creates pressure to write quick pieces on topical political issues that can drive lots of clicks with just a few hours' work.

But according to Kelsey Piper, staff writer for this new section of Vox's website focused on effective altruist themes, Future Perfect's goal is to run in the opposite direction and make room for more substantive coverage that's not tied to the news cycle.

They hope that in the long-term talented writers from other outlets across the political spectrum can also be attracted to tackle these topics.

Links to learn more, summary and full transcript.

Links to Kelsey's top articles.

Some skeptics of the project have questioned whether this general coverage of global catastrophic risks actually helps reduce them.

Kelsey responds: if you decide to dedicate your life to AI safety research, what’s the likely reaction from your family and friends? Do they think of you as someone about to join "that weird Silicon Valley apocalypse thing"? Or do they, having read about the issues widely, simply think “Oh, yeah. That seems important. I'm glad you're working on it.”

Kelsey believes that really matters, and is determined by broader coverage of these kinds of topics.

If that's right, is journalism a plausible pathway for doing the most good with your career, or did Kelsey just get particularly lucky? After all, journalism is a shrinking industry without an obvious revenue model to fund many writers looking into the world's most pressing problems.

Kelsey points out that one needn't take the risk of committing to journalism at an early age. Instead listeners can specialise in an important topic, while leaving open the option of switching into specialist journalism later on, should a great opportunity happen to present itself.

In today’s episode we discuss that path, as well as:

• What’s the day to day life of a Vox journalist like?
• How can good journalism get funded?
• Are there meaningful tradeoffs between doing what's in the interest of Vox and doing what’s good?
• How concerned should we be about the risk of effective altruism being perceived as partisan?
• How well can short articles effectively communicate complicated ideas?
• Are there alternative business models that could fund high quality journalism on a larger scale?
• How do you approach the case for taking AI seriously to a broader audience?
• How valuable might it be for media outlets to do Tetlock-style forecasting?
• Is it really a good idea to heavily tax billionaires?
• How do you avoid the pressure to get clicks?
• How possible is it to predict which articles are going to be popular?
• How did Kelsey build the skills necessary to work at Vox?
• General lessons for people dealing with very difficult life circumstances

Rob is then joined by two of his colleagues – Keiran Harris & Michelle Hutchinson – to quickly discuss:

• The risk political polarisation poses to long-termist causes
• How should specialists keep journalism available as a career option?
• Should we create a news aggregator that aims to make someone as well informed as possible in big-picture terms?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.

01 Aug 2024#195 – Sella Nevo on who's trying to steal frontier AI models, and what they could do with them02:08:29

"Computational systems have literally millions of physical and conceptual components, and around 98% of them are embedded into your infrastructure without you ever having heard of them. And an inordinate amount of them can lead to a catastrophic failure of your security assumptions. And because of this, the Iranian secret nuclear programme failed to prevent a breach, most US agencies failed to prevent multiple breaches, most US national security agencies failed to prevent breaches. So ensuring your system is truly secure against highly resourced and dedicated attackers is really, really hard." —Sella Nevo

In today’s episode, host Luisa Rodriguez speaks to Sella Nevo — director of the Meselson Center at RAND — about his team’s latest report on how to protect the model weights of frontier AI models from actors who might want to steal them.

Links to learn more, highlights, and full transcript.

They cover:

  • Real-world examples of sophisticated security breaches, and what we can learn from them.
  • Why AI model weights might be such a high-value target for adversaries like hackers, rogue states, and other bad actors.
  • The many ways that model weights could be stolen, from using human insiders to sophisticated supply chain hacks.
  • The current best practices in cybersecurity, and why they may not be enough to keep bad actors away.
  • New security measures that Sella hopes can help mitigate the growing risks.
  • Sella’s work using machine learning for flood forecasting, which has significantly reduced injuries and costs from floods across Africa and Asia.
  • And plenty more.

Also, RAND is currently hiring for roles in technical and policy information security — check them out if you're interested in this field! 

Chapters:

  • Cold open (00:00:00)
  • Luisa’s intro (00:00:56)
  • The interview begins (00:02:30)
  • The importance of securing the model weights of frontier AI models (00:03:01)
  • The most sophisticated and surprising security breaches (00:10:22)
  • AI models being leaked (00:25:52)
  • Researching for the RAND report (00:30:11)
  • Who tries to steal model weights? (00:32:21)
  • Malicious code and exploiting zero-days (00:42:06)
  • Human insiders (00:53:20)
  • Side-channel attacks (01:04:11)
  • Getting access to air-gapped networks (01:10:52)
  • Model extraction (01:19:47)
  • Reducing and hardening authorised access (01:38:52)
  • Confidential computing (01:48:05)
  • Red-teaming and security testing (01:53:42)
  • Careers in information security (01:59:54)
  • Sella’s work on flood forecasting systems (02:01:57)
  • Luisa’s outro (02:04:51)


Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

22 Oct 2021#114 – Maha Rehman on working with governments to rapidly deliver masks to millions of people01:42:55

It’s hard to believe, but until recently there had never been a large field trial that addressed these simple and obvious questions:

1. When ordinary people wear face masks, does it actually reduce the spread of respiratory diseases?
2. And if so, how do you get people to wear masks more often?

It turns out the first question is remarkably challenging to answer, but it's well worth doing nonetheless. Among other reasons, the first good trial of this prompted Maha Rehman — Policy Director at the Mahbub Ul Haq Research Centre — as well as a range of others to immediately use the findings to help tens of millions of people across South Asia, even before the results were public.

Links to learn more, summary and full transcript.

The groundbreaking Bangladesh RCT that inspired her to take action found that:

• A 30% increase in mask wearing reduced total infections by 10%.
• The effect was more pronounced for surgical masks compared to cloth masks (surgical masks were around 50% more effective).
• Mask wearing also led to an increase in social distancing.
• Of all the incentives tested, the only thing that impacted mask wearing was their colour (people preferred blue over green, and red over purple!).

The research was done by social scientists at Yale, Berkeley, and Stanford, among others. It applied a program they called ‘NORM’ in half of 600 villages in which about 350,000 people lived. NORM has four components, which the researchers expected would work well for the general public:

N: no-cost distribution
O: offering information
R: reinforcing the message and the information in the field
M: modeling

Basically you make sure a community has enough masks and you tell them why it’s important to wear them. You also reinforce the message periodically in markets and mosques, and via role models and promoters in the community itself.

Tipped off that these positive findings were on the way, Maha took this program and rushed to put it into action in Lahore, Pakistan, a city with a population of about 13 million, before the Delta variant could sweep through the region.

Maha had already been doing a lot of data work on COVID policy over the past year, and that allowed her to quickly reach out to the relevant stakeholders — getting them interested and excited.

Governments aren’t exactly known for being super innovative, but in March and April Lahore was going through a very deadly third wave of COVID — so the commissioner quickly jumped on this approach, providing an endorsement as well as resources.

Together with the original researchers, Maha and her team at LUMS collected baseline data that allowed them to map the mask-wearing rate in every part of Lahore, in both markets and mosques. And then based on that data, they adapted the original rural-focused model to a very different urban setting.

The scale of this project was daunting, and in today’s episode Maha tells Rob all about the day-to-day experiences and stresses required to actually make it happen.

They also discuss:

• The challenges of data collection in this context
• Disasters and emergencies she had to respond to in the middle of the project
• What she learned from working closely with the Lahore Commissioner's Office
• How to get governments to provide you with large amounts of data for your research
• How she adapted from a more academic role to a ‘getting stuff done’ role
• How to reduce waste in government procurement
• And much more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:01:33)
  • Bangladesh RCT (00:06:24)
  • The NORM model (00:08:34)
  • Results of the experiment (00:10:46)
  • Experimental design (00:20:35)
  • Adapting the findings from Bangladesh to Lahore (00:23:55)
  • Collecting data (00:34:09)
  • Working with governments (00:38:38)
  • Coordination (00:44:53)
  • Disasters and emergencies (00:56:01)
  • Sending out masks to every single person in Lahore (00:59:15)
  • How Maha adapted to her role (01:07:17)
  • Logistic aptitude (01:11:45)
  • Disappointments (01:14:13)
  • Procurement RCT (01:16:51)
  • What we can learn (01:31:18)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

21 Feb 2024#180 – Hugo Mercier on why gullibility and misinformation are overrated02:36:55

The World Economic Forum’s global risks survey of 1,400 experts, policymakers, and industry leaders ranked misinformation and disinformation as the number one global risk over the next two years — ranking it ahead of war, environmental problems, and other threats from AI.

And the discussion around misinformation and disinformation has shifted to focus on how generative AI or a future super-persuasive AI might change the game and make it extremely hard to figure out what was going on in the world — or alternatively, extremely easy to mislead people into believing convenient lies.

But this week’s guest, cognitive scientist Hugo Mercier, has a very different view on how people form beliefs and figure out who to trust — one in which misinformation really is barely a problem today, and is unlikely to be a problem anytime soon. As he explains in his book Not Born Yesterday, Hugo believes we seriously underrate the perceptiveness and judgement of ordinary people.

Links to learn more, summary, and full transcript.

In this interview, host Rob Wiblin and Hugo discuss:

  • How our reasoning mechanisms evolved to facilitate beneficial communication, not blind gullibility.
  • How Hugo makes sense of our apparent gullibility in many cases — like falling for financial scams, astrology, or bogus medical treatments, and voting for policies that aren’t actually beneficial for us.
  • Rob and Hugo’s ideas about whether AI might make misinformation radically worse, and which mass persuasion approaches we should be most worried about.
  • Why Hugo thinks our intuitions about who to trust are generally quite sound, even in today’s complex information environment.
  • The distinction between intuitive beliefs that guide our actions versus reflective beliefs that don’t.
  • Why fake news and conspiracy theories actually have less impact than most people assume.
  • False beliefs that have persisted across cultures and generations — like bloodletting and vaccine hesitancy — and theories about why.
  • And plenty more.

Chapters:

  • The view that humans are really gullible (00:04:26)
  • The evolutionary argument against humans being gullible (00:07:46) 
  • Open vigilance (00:18:56)
  • Intuitive and reflective beliefs (00:32:25)
  • How people decide who to trust (00:41:15)
  • Redefining beliefs (00:51:57)
  • Bloodletting (01:00:38)
  • Vaccine hesitancy and creationism (01:06:38)
  • False beliefs without skin in the game (01:12:36)
  • One consistent weakness in human judgement (01:22:57)
  • Trying to explain harmful financial decisions (01:27:15)
  • Astrology (01:40:40)
  • Medical treatments that don’t work (01:45:47)
  • Generative AI, LLMs, and persuasion (01:54:50)
  • Ways AI could improve the information environment (02:29:59)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

22 Apr 2023Andrés Jiménez Zorrilla on the Shrimp Welfare Project (80k After Hours)01:17:28

In this episode from our second show, 80k After Hours, Rob Wiblin interviews Andrés Jiménez Zorrilla about the Shrimp Welfare Project, which he cofounded in 2021. It's the first project in the world focused on shrimp welfare specifically, and as of recording in June 2022, has six full-time staff.

Links to learn more, highlights and full transcript.

They cover:

• The evidence for shrimp sentience
• How farmers and the public feel about shrimp
• The scale of the problem
• What shrimp farming looks like
• The killing process, and other welfare issues
• Shrimp Welfare Project’s strategy
• History of shrimp welfare work
• What it’s like working in India and Vietnam
• How to help

Who this episode is for:

• People who care about animal welfare
• People interested in new and unusual problems
• People open to shrimp sentience

Who this episode isn’t for:

• People who think shrimp couldn’t possibly be sentient
• People who got called ‘shrimp’ a lot in high school and get anxious when they hear the word over and over again

Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ‘80k After Hours’ into your podcasting app

Producer: Keiran Harris
Audio mastering: Ben Cordell and Ryan Kessler
Transcriptions: Katy Moore

19 May 2021#100 – Having a successful career with depression, anxiety and imposter syndrome02:51:21

Today's episode is one of the most remarkable and, really, unique pieces of content we’ve ever produced (and I can say that because I had almost nothing to do with making it!).

The producer of this show, Keiran Harris, interviewed our mutual colleague Howie about the major ways that mental illness has affected his life and career. While depression, anxiety, ADHD and other problems are extremely common, it's rare for people to offer detailed insight into their thoughts and struggles — and even rarer for someone as perceptive as Howie to do so.

Links to learn more, summary and full transcript.

The first half of this conversation is a searingly honest account of Howie’s story, including losing a job he loved due to a depressed episode, what it was like to be basically out of commission for over a year, how he got back on his feet, and the things he still finds difficult today.

The second half covers Howie’s advice. Conventional wisdom on mental health can be really focused on cultivating willpower — telling depressed people that the virtuous thing to do is to start exercising, improve their diet, get their sleep in check, and generally fix all their problems before turning to therapy and medication as some sort of last resort.

Howie tries his best to be a corrective to this misguided attitude and pragmatically focus on what actually matters — doing whatever will help you get better.

Mental illness is one of the things that most often trips up people who could otherwise enjoy flourishing careers and have a large social impact, so we think this could plausibly be one of our more valuable episodes.

Howie and Keiran basically treated it like a private conversation, with the understanding that it may be too sensitive to release. But, after getting some really positive feedback, they’ve decided to share it with the world.

We hope that the episode will:

1. Help people realise that they have a shot at making a difference in the future, even if they’re experiencing (or have experienced in the past) mental illness, self doubt, imposter syndrome, or other personal obstacles.

2. Give insight into what it's like in the head of one person with depression, anxiety, and imposter syndrome, including the specific thought patterns they experience on typical days and more extreme days. In addition to being interesting for its own sake, this might make it easier for people to understand the experiences of family members, friends, and colleagues — and know how to react more helpfully.

So we think this episode will be valuable for:

• People who have experienced mental health problems or might in future;
• People who have had troubles with stress, anxiety, low mood, low self esteem, and similar issues, even if their experience isn’t well described as ‘mental illness’;
• People who have never experienced these problems but want to learn about what it's like, so they can better relate to and assist family, friends or colleagues who do.

In other words, we think this episode could be worthwhile for almost everybody.

Just a heads up that this conversation gets pretty intense at times, and includes references to self-harm and suicidal thoughts.

If you don’t want to hear the most intense section, you can skip the chapter called ‘Disaster’ (44–57mins). And if you’d rather avoid almost all of these references, you could skip straight to the chapter called ‘80,000 Hours’ (1hr 11mins).

If you're feeling suicidal or have thoughts of harming yourself right now, there are suicide hotlines at National Suicide Prevention Lifeline in the U.S. (800-273-8255) and Samaritans in the U.K. (116 123).

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Sofia Davis-Fogel.

19 Mar 2020 | Emergency episode: Rob & Howie on the menace of COVID-19, and what both governments & individuals might do to help | 01:52:12

From home isolation Rob and Howie just recorded an episode on:

1. How many could die in the crisis, and the risk to your health personally.
2. What individuals might be able to do to help tackle the coronavirus crisis.
3. What we suspect governments should do in response to the coronavirus crisis.
4. The importance of personally not spreading the virus, the properties of the SARS-CoV-2 virus, and how you can personally avoid it.
5. The many places society screwed up, how we can avoid this happening again, and reasons to be optimistic.

We have rushed this episode out to share information as quickly as possible in a fast-moving situation. If you would prefer to read you can find the transcript here.

We list a wide range of valuable resources and links in the blog post attached to the show (over 60, including links to projects you can join).

See our 'problem profile' on global catastrophic biological risks for information on these grave threats and how you can contribute to preventing them.

We have also just added a COVID-19 landing page on our site.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris.

04 Sep 2024 | #200 – Ezra Karger on what superforecasters and experts think about existential risks | 02:49:24

"It’s very hard to find examples where people say, 'I’m starting from this point. I’m starting from this belief.' So we wanted to make that very legible to people. We wanted to say, 'Experts think this; accurate forecasters think this.' They might both be wrong, but we can at least start from here and figure out where we’re coming into a discussion and say, 'I am much less concerned than the people in this report; or I am much more concerned, and I think people in this report were missing major things.' But if you don’t have a reference set of probabilities, I think it becomes much harder to talk about disagreement in policy debates in a space that’s so complicated like this." —Ezra Karger

In today’s episode, host Luisa Rodriguez speaks to Ezra Karger — research director at the Forecasting Research Institute — about FRI’s recent Existential Risk Persuasion Tournament, which aimed to come up with estimates of a range of catastrophic risks.

Links to learn more, highlights, and full transcript.

They cover:

  • How forecasting can improve our understanding of long-term catastrophic risks from things like AI, nuclear war, pandemics, and climate change.
  • What the Existential Risk Persuasion Tournament (XPT) is, how it was set up, and the results.
  • The challenges of predicting low-probability, high-impact events.
  • Why superforecasters’ estimates of catastrophic risks seem so much lower than experts’, and which group Ezra puts the most weight on.
  • The specific underlying disagreements that superforecasters and experts had about how likely catastrophic risks from AI are.
  • Why Ezra thinks forecasting tournaments can help build consensus on complex topics, and what he wants to do differently in future tournaments and studies.
  • Recent advances in the science of forecasting and the areas Ezra is most excited about exploring next.
  • Whether large language models could help or outperform human forecasters.
  • How people can improve their calibration and start making better forecasts personally.
  • Why Ezra thinks high-quality forecasts are relevant to policymakers, and whether they can really improve decision-making.
  • And plenty more.

Chapters:

  • Cold open (00:00:00)
  • Luisa’s intro (00:01:07)
  • The interview begins (00:02:54)
  • The Existential Risk Persuasion Tournament (00:05:13)
  • Why is this project important? (00:12:34)
  • How was the tournament set up? (00:17:54)
  • Results from the tournament (00:22:38)
  • Risk from artificial intelligence (00:30:59)
  • How to think about these numbers (00:46:50)
  • Should we trust experts or superforecasters more? (00:49:16)
  • The effect of debate and persuasion (01:02:10)
  • Forecasts from the general public (01:08:33)
  • How can we improve people’s forecasts? (01:18:59)
  • Incentives and recruitment (01:26:30)
  • Criticisms of the tournament (01:33:51)
  • AI adversarial collaboration (01:46:20)
  • Hypotheses about stark differences in views of AI risk (01:51:41)
  • Cruxes and different worldviews (02:17:15)
  • Ezra’s experience as a superforecaster (02:28:57)
  • Forecasting as a research field (02:31:00)
  • Can large language models help or outperform human forecasters? (02:35:01)
  • Is forecasting valuable in the real world? (02:39:11)
  • Ezra’s book recommendations (02:45:29)
  • Luisa's outro (02:47:54)


Producer: Keiran Harris
Audio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon Monsour
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

27 Nov 2024 | #209 – Rose Chan Loui on OpenAI’s gambit to ditch its nonprofit | 01:22:08

One OpenAI critic calls it “the theft of at least the millennium and quite possibly all of human history.” Are they right?

Back in 2015 OpenAI was but a humble nonprofit. That nonprofit started a for-profit, OpenAI LLC, but made sure to retain ownership and control. But that for-profit, having become a tech giant with vast staffing and investment, has grown tired of its shackles and wants to change the deal.

Facing off against it stand eight out-gunned and out-numbered part-time volunteers. Can they hope to defend the nonprofit’s interests against the overwhelming profit motives arrayed against them?

That’s the question host Rob Wiblin puts to nonprofit legal expert Rose Chan Loui of UCLA, who concludes that with a “heroic effort” and a little help from some friendly state attorneys general, they might just stand a chance.

Links to learn more, highlights, video, and full transcript.

As Rose lays out, on paper OpenAI is controlled by a nonprofit board that:

  • Can fire the CEO.
  • Would receive all the profits after the point OpenAI makes 100x returns on investment.
  • Is legally bound to do whatever it can to pursue its charitable purpose: “to build artificial general intelligence that benefits humanity.”

But that control is a problem for OpenAI the for-profit and its CEO Sam Altman — all the more so after the board concluded back in November 2023 that it couldn’t trust Altman and attempted to fire him (although those board members were ultimately ousted themselves after failing to adequately explain their rationale).

Nonprofit control makes it harder to attract investors, who don’t want a board stepping in just because they think what the company is doing is bad for humanity. And OpenAI the business is thirsty for as many investors as possible, because it wants to beat competitors and train the first truly general AI — able to do every job humans currently do — which is expected to cost hundreds of billions of dollars.

So, Rose explains, they plan to buy the nonprofit out. In exchange for giving up its windfall profits and the ability to fire the CEO or direct the company’s actions, the board will become minority shareholders with reduced voting rights, and presumably transform into a normal grantmaking foundation instead.

Is this a massive bait-and-switch? A case of the tail not only wagging the dog, but grabbing a scalpel and neutering it?

OpenAI repeatedly committed to California, Delaware, the US federal government, founding staff, and the general public that its resources would be used for its charitable mission and it could be trusted because of nonprofit control. Meanwhile, the divergence in interests couldn’t be more stark: every dollar the for-profit keeps from its nonprofit parent is another dollar it could invest in AGI and ultimately return to investors and staff.

Chapters:

  • Cold open (00:00:00)
  • What's coming up (00:00:50)
  • Who is Rose Chan Loui? (00:03:11)
  • How OpenAI carefully chose a complex nonprofit structure (00:04:17)
  • OpenAI's new plan to become a for-profit (00:11:47)
  • The nonprofit board is out-resourced and in a tough spot (00:14:38)
  • Who could be cheated in a bad conversion to a for-profit? (00:17:11)
  • Is this a unique case? (00:27:24)
  • Is control of OpenAI 'priceless' to the nonprofit in pursuit of its mission? (00:28:58)
  • The crazy difficulty of valuing the profits OpenAI might make (00:35:21)
  • Control of OpenAI is independently incredibly valuable and requires compensation (00:41:22)
  • It's very important the nonprofit get cash and not just equity (and few are talking about it) (00:51:37)
  • Is it a farce to call this an "arm's-length transaction"? (01:03:50)
  • How the nonprofit board can best play their hand (01:09:04)
  • Who can mount a court challenge and how that would work (01:15:41)
  • Rob's outro (01:21:25)

Producer: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video editing: Simon Monsour
Transcriptions: Katy Moore

14 Jun 2022 | #132 – Nova DasSarma on why information security may be critical to the safe development of AI systems | 02:42:27
If a business has spent $100 million developing a product, it's a fair bet that they don't want it stolen in two seconds and uploaded to the web where anyone can use it for free.

This problem exists in extreme form for AI companies. These days, the electricity and equipment required to train cutting-edge machine learning models that generate uncanny human text and images can cost tens or hundreds of millions of dollars. But once trained, such models may be only a few gigabytes in size and run just fine on ordinary laptops.

Today's guest, the computer scientist and polymath Nova DasSarma, works on computer and information security for the AI company Anthropic. One of her jobs is to stop hackers exfiltrating Anthropic's incredibly expensive intellectual property, as recently happened to Nvidia. As she explains, given models’ small size, the need to store such models on internet-connected servers, and the poor state of computer security in general, this is a serious challenge.

Links to learn more, summary and full transcript.

The worries aren't purely commercial though. This problem looms especially large for the growing number of people who expect that in coming decades we'll develop so-called artificial 'general' intelligence systems that can learn and apply a wide range of skills all at once, and thereby have a transformative effect on society.

If aligned with the goals of their owners, such general AI models could operate like a team of super-skilled assistants, going out and doing whatever wonderful (or malicious) things are asked of them. This might represent a huge leap forward for humanity, though the transition to a very different new economy and power structure would have to be handled delicately.

If unaligned with the goals of their owners or humanity as a whole, such broadly capable models would naturally 'go rogue,' breaking their way into additional computer systems to grab more computing power — all the better to pursue their goals and make sure they can't be shut off.

As Nova explains, in either case, we don't want such models disseminated all over the world before we've confirmed they are deeply safe and law-abiding, and have figured out how to integrate them peacefully into society. In the first scenario, premature mass deployment would be risky and destabilising. In the second scenario, it could be catastrophic -- perhaps even leading to human extinction if such general AI systems turn out to be able to self-improve rapidly rather than slowly.

If highly capable general AI systems are coming in the next 10 or 20 years, Nova may be flying below the radar with one of the most important jobs in the world.

We'll soon need the ability to 'sandbox' (i.e. contain) models with a wide range of superhuman capabilities, including the ability to learn new skills, for a period of careful testing and limited deployment — preventing the model from breaking out, and criminals from breaking in. Nova and her colleagues are trying to figure out how to do this, but as this episode reveals, even the state of the art is nowhere near good enough.

In today's conversation, Rob and Nova cover:

• How good or bad information security is today
• The most secure computer systems that exist
• How to design an AI training compute centre for maximum efficiency
• Whether 'formal verification' can help us design trustworthy systems
• How wide the gap is between AI capabilities and AI safety
• How to disincentivise hackers
• What listeners should do to strengthen their own security practices
• And much more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell and Beppe Rådvik
Transcriptions: Katy Moore

03 Feb 2020 | Rob & Howie on what we do and don't know about 2019-nCoV | 01:18:44

Two 80,000 Hours researchers, Robert Wiblin and Howie Lempel, record an experimental bonus episode about the new 2019-nCoV virus.

See this list of resources, including many discussed in the episode, to learn more.

In the 1h15m conversation we cover:

• What is it? 
• How many people have it? 
• How contagious is it? 
• What fraction of people who contract it die?
• How likely is it to spread out of control?
• What's the range of plausible fatalities worldwide?
• How does it compare to other epidemics?
• What don't we know and why? 
• What actions should listeners take, if any?
• How should the complexities of the above be communicated by public health professionals?

Here's a link to the hygiene advice from Laurie Garrett mentioned in the episode.

Recorded 2 Feb 2020.

The 80,000 Hours Podcast is produced by Keiran Harris.

19 Sep 2024 | #202 – Venki Ramakrishnan on the cutting edge of anti-ageing science | 02:20:26

"For every far-out idea that turns out to be true, there were probably hundreds that were simply crackpot ideas. In general, [science] advances building on the knowledge we have, and seeing what the next questions are, and then getting to the next stage and the next stage and so on. And occasionally there’ll be revolutionary ideas which will really completely change your view of science. And it is possible that some revolutionary breakthrough in our understanding will come about and we might crack this problem, but there’s no evidence for that. It doesn’t mean that there isn’t a lot of promising work going on. There are many legitimate areas which could lead to real improvements in health in old age. So I’m fairly balanced: I think there are promising areas, but there’s a lot of work to be done to see which area is going to be promising, and what the risks are, and how to make them work." —Venki Ramakrishnan

In today’s episode, host Luisa Rodriguez speaks to Venki Ramakrishnan — molecular biologist and Nobel Prize winner — about his new book, Why We Die: The New Science of Aging and the Quest for Immortality.

Links to learn more, highlights, and full transcript.

They cover:

  • What we can learn about extending human lifespan — if anything — from “immortal” aquatic animal species, cloned sheep, and the oldest people to have ever lived.
  • Which areas of anti-ageing research seem most promising to Venki — including caloric restriction, removing senescent cells, cellular reprogramming, and Yamanaka factors — and which Venki thinks are overhyped.
  • Why eliminating major age-related diseases might only extend average lifespan by 15 years.
  • The social impacts of extending healthspan or lifespan in an ageing population — including the potential danger of massively increasing inequality if some people can access life-extension interventions while others can’t.
  • And plenty more.

Chapters:

  • Cold open (00:00:00)
  • Luisa's intro (00:01:04)
  • The interview begins (00:02:21)
  • Reasons to explore why we age and die (00:02:35)
  • Evolutionary pressures and animals that don't biologically age (00:06:55)
  • Why does ageing cause us to die? (00:12:24)
  • Is there a hard limit to the human lifespan? (00:17:11)
  • Evolutionary tradeoffs between fitness and longevity (00:21:01)
  • How ageing resets with every generation, and what we can learn from clones (00:23:48)
  • Younger blood (00:31:20)
  • Freezing cells, organs, and bodies (00:36:47)
  • Are the goals of anti-ageing research even realistic? (00:43:44)
  • Dementia (00:49:52)
  • Senescence (01:01:58)
  • Caloric restriction and metabolic pathways (01:11:45)
  • Yamanaka factors (01:34:07)
  • Cancer (01:47:44)
  • Mitochondrial dysfunction (01:58:40)
  • Population effects of extended lifespan (02:06:12)
  • Could increased longevity increase inequality? (02:11:48)
  • What’s surprised Venki about this research (02:16:06)
  • Luisa's outro (02:19:26)

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

01 Nov 2023 | #170 – Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down | 02:57:46

"One [outrageous example of air pollution] is municipal waste burning that happens in many cities in the Global South. Basically, this is waste that gets collected from people's homes, and instead of being transported to a waste management facility or a landfill or something, gets burned at some point, because that's the fastest way to dispose of it — which really points to poor delivery of public services. But this is ubiquitous in virtually every small- or even medium-sized city. It happens in larger cities too, in this part of the world.

"That's something that truly annoys me, because it feels like the kind of thing that ought to be fairly easily managed, but it happens a lot. It happens because people presumably don't think that it's particularly harmful. I don't think it saves a tonne of money for the municipal corporations and other local government that are meant to manage it. I find it particularly annoying simply because it happens so often; it's something that you're able to smell in so many different parts of these cities." — Santosh Harish

In today’s episode, host Rob Wiblin interviews Santosh Harish — leader of Open Philanthropy’s grantmaking in South Asian air quality — about the scale of the harm caused by air pollution.

Links to learn more, summary, and full transcript.

They cover:

  • How bad air pollution is for our health and life expectancy
  • The different kinds of harm that particulate pollution causes
  • The strength of the evidence that it damages our brain function and reduces our productivity
  • Whether it was a mistake to switch our attention to climate change and away from air pollution
  • Whether most listeners to this show should have an air purifier running in their house right now
  • Where air pollution in India is worst and why, and whether it's going up or down
  • Where most air pollution comes from
  • The policy blunders that led to many sources of air pollution in India being effectively unregulated
  • Why indoor air pollution packs an enormous punch
  • The politics of air pollution in India
  • How India ended up spending a lot of money on outdoor air purifiers
  • The challenges faced by foreign philanthropists in India
  • Why Santosh has made the grants he has so far
  • And plenty more

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:01:07)
  • How bad is air pollution? (00:03:41)
  • Quantifying the scale of the damage (00:15:47)
  • Effects on cognitive performance and mood (00:24:19)
  • How do we really know the harms are as big as is claimed? (00:27:05)
  • Misconceptions about air pollution (00:36:56)
  • Why don’t environmental advocacy groups focus on air pollution? (00:42:22)
  • How listeners should approach air pollution in their own lives (00:46:58)
  • How bad is air pollution in India in particular (00:54:23)
  • The trend in India over the last few decades (01:12:33)
  • Why aren’t people able to fix these problems? (01:24:17)
  • Household waste burning (01:35:06)
  • Vehicle emissions (01:42:10)
  • The role that courts have played in air pollution regulation in India (01:50:09)
  • Industrial emissions (01:57:10)
  • The political economy of air pollution in northern India (02:02:14)
  • Can philanthropists drive policy change? (02:13:42)
  • Santosh’s grants (02:29:45)
  • Examples of other countries that have managed to greatly reduce air pollution (02:45:44)
  • Career advice for listeners in India (02:51:11)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

05 May 2023 | #150 – Tom Davidson on how quickly AI could transform the world | 03:01:59

It’s easy to dismiss alarming AI-related predictions when you don’t know where the numbers came from.

For example: what if we told you that within 15 years, it’s likely that we’ll see a 1,000x improvement in AI capabilities in a single year? And what if we then told you that those improvements would lead to explosive economic growth unlike anything humanity has seen before?

You might think, “Congratulations, you said a big number — but this kind of stuff seems crazy, so I’m going to keep scrolling through Twitter.”

But this 1,000x yearly improvement is a prediction based on *real economic models* created by today’s guest Tom Davidson, Senior Research Analyst at Open Philanthropy. By the end of the episode, you’ll either be able to point out specific flaws in his step-by-step reasoning, or have to at least consider the idea that the world is about to get — at a minimum — incredibly weird.

Links to learn more, summary and full transcript.

As a teaser, consider the following:

Developing artificial general intelligence (AGI) — AI that can do 100% of cognitive tasks at least as well as the best humans can — could very easily lead us to an unrecognisable world.

You might think having to train AI systems individually to do every conceivable cognitive task — one for diagnosing diseases, one for doing your taxes, one for teaching your kids, etc. — sounds implausible, or at least like it’ll take decades.

But Tom thinks we might not need to train AI to do every single job — we might just need to train it to do one: AI research.

And building AI capable of doing research and development might be a much easier task — especially given that the researchers training the AI are AI researchers themselves.

And once an AI system is as good at accelerating future AI progress as the best humans are today — and we can run billions of copies of it round the clock — it’s hard to make the case that we won’t achieve AGI very quickly.

To give you some perspective: 17 years ago we saw the launch of Twitter, the release of Al Gore's *An Inconvenient Truth*, and your first chance to play the Nintendo Wii.

Tom thinks that if we have AI that significantly accelerates AI R&D, then it’s hard to imagine not having AGI 17 years from now.

Wild.

Host Luisa Rodriguez gets Tom to walk us through his careful reports on the topic, and how he came up with these numbers, across a terrifying but fascinating three hours.

Luisa and Tom also discuss:

• How we might go from GPT-4 to AI disaster
• Tom’s journey from finding AI risk to be kind of scary to really scary
• Whether international cooperation or an anti-AI social movement can slow AI progress down
• Why it might take just a few years to go from pretty good AI to superhuman AI
• How quickly the number and quality of computer chips we’ve been using for AI have been increasing
• The pace of algorithmic progress
• What ants can teach us about AI
• And much more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:04:53)
  • How we might go from GPT-4 to disaster (00:13:50)
  • Explosive economic growth (00:24:15)
  • Are there any limits for AI scientists? (00:33:17)
  • This seems really crazy (00:44:16)
  • How is this going to go for humanity? (00:50:49)
  • Why AI won’t go the way of nuclear power (01:00:13)
  • Can we definitely not come up with an international treaty? (01:05:24)
  • How quickly we should expect AI to “take off” (01:08:41)
  • Tom’s report on AI takeoff speeds (01:22:28)
  • How quickly will we go from 20% to 100% of tasks being automated by AI systems? (01:28:34)
  • What percent of cognitive tasks AI can currently perform (01:34:27)
  • Compute (01:39:48)
  • Using effective compute to predict AI takeoff speeds (01:48:01)
  • How quickly effective compute might increase (02:00:59)
  • How quickly chips and algorithms might improve (02:12:31)
  • How to check whether large AI models have dangerous capabilities (02:21:22)
  • Reasons AI takeoff might take longer (02:28:39)
  • Why AI takeoff might be very fast (02:31:52)
  • Fast AI takeoff speeds probably means shorter AI timelines (02:34:44)
  • Going from human-level AI to superhuman AI (02:41:34)
  • Going from AGI to AI deployment (02:46:59)
  • Were these arguments ever far-fetched to Tom? (02:49:54)
  • What ants can teach us about AI (02:52:45)
  • Rob’s outro (03:00:32)


Producer: Keiran Harris
Audio mastering: Simon Monsour and Ben Cordell
Transcriptions: Katy Moore

01 Feb 2024 | #178 – Emily Oster on what the evidence actually says about pregnancy and parenting | 02:22:36

"I think at various times — before you have the kid, after you have the kid — it's useful to sit down and think about: What do I want the shape of this to look like? What time do I want to be spending? Which hours? How do I want the weekends to look? The things that are going to shape the way your day-to-day goes, and the time you spend with your kids, and what you're doing in that time with your kids, and all of those things: you have an opportunity to deliberately plan them. And you can then feel like, 'I've thought about this, and this is a life that I want. This is a life that we're trying to craft for our family, for our kids.' And that is distinct from thinking you're doing a good job in every moment — which you can't achieve. But you can achieve, 'I'm doing this the way that I think works for my family.'" — Emily Oster

In today’s episode, host Luisa Rodriguez speaks to Emily Oster — economist at Brown University, host of the ParentData podcast, and the author of three hugely popular books that provide evidence-based insights into pregnancy and early childhood.

Links to learn more, summary, and full transcript.

They cover:

  • Common pregnancy myths and advice that Emily disagrees with — and why you should probably get a doula.
  • Whether it’s fine to continue with antidepressants and coffee during pregnancy.
  • What the data says — and doesn’t say — about outcomes from parenting decisions around breastfeeding, sleep training, childcare, and more.
  • Which factors really matter for kids to thrive — and why that means parents shouldn’t sweat the small stuff.
  • How to reduce parental guilt and anxiety with facts, and reject judgemental “Mommy Wars” attitudes when making decisions that are best for your family.
  • The effects of having kids on career ambitions, pay, and productivity — and how the effects are different for men and women.
  • Practical advice around managing the tradeoffs between career and family.
  • What to consider when deciding whether and when to have kids.
  • Relationship challenges after having kids, and the protective factors that help.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

23 May 2024 | #188 – Matt Clancy on whether science is good | 02:40:15

"Suppose we make these grants, we do some of those experiments I talk about. We discover, for example — I’m just making this up — but we give people superforecasting tests when they’re doing peer review, and we find that you can identify people who are super good at picking science. And then we have this much better targeted science, and we’re making progress at a 10% faster rate than we normally would have. Over time, that aggregates up, and maybe after 10 years, we’re a year ahead of where we would have been if we hadn’t done this kind of stuff.

"Now, suppose in 10 years we’re going to discover a cheap new genetic engineering technology that anyone can use in the world if they order the right parts off of Amazon. That could be great, but could also allow bad actors to genetically engineer pandemics and basically try to do terrible things with this technology. And if we’ve brought that forward, and that happens at year nine instead of year 10 because of some of these interventions we did, now we start to think that if that’s really bad, if these people using this technology causes huge problems for humanity, it begins to sort of wash out the benefits of getting the science a little bit faster." —Matt Clancy

In today’s episode, host Luisa Rodriguez speaks to Matt Clancy — who oversees Open Philanthropy’s Innovation Policy programme — about his recent work modelling the risks and benefits of the increasing speed of scientific progress.

Links to learn more, highlights, and full transcript.

They cover:

  • Whether scientific progress is actually net positive for humanity.
  • Scenarios where accelerating science could lead to existential risks, such as advanced biotechnology being used by bad actors.
  • Why Matt thinks metascience research and targeted funding could improve the scientific process and better incentivise outcomes that are good for humanity.
  • Whether Matt trusts domain experts or superforecasters more when estimating how the future will turn out.
  • Why Matt is sceptical that AGI could really cause explosive economic growth.
  • And much more.

Chapters:

  • Is scientific progress net positive for humanity? (00:03:00)
  • The time of biological perils (00:17:50)
  • Modelling the benefits of science (00:25:48)
  • Income and health gains from scientific progress (00:32:49)
  • Discount rates (00:42:14)
  • How big are the returns to science? (00:51:08)
  • Forecasting global catastrophic biological risks from scientific progress (01:05:20)
  • What’s the value of scientific progress, given the risks? (01:15:09)
  • Factoring in extinction risk (01:21:56)
  • How science could reduce extinction risk (01:30:18)
  • Are we already too late to delay the time of perils? (01:42:38)
  • Domain experts vs superforecasters (01:46:03)
  • What Open Philanthropy’s Innovation Policy programme settled on (01:53:47)
  • Explosive economic growth (02:06:28)
  • Matt’s favourite thought experiment (02:34:57)

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

12 Feb 2024 | #179 – Randy Nesse on why evolution left us so vulnerable to depression and anxiety | 02:56:48

Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don’t see similar levels of physical ill health in young people. At any point in time, something like 20% of young people are working through anxiety or depression that’s seriously interfering with their lives — but nowhere near 20% of people in their 20s have severe heart disease or cancer or a similar failure in a key organ of the body other than the brain.

From an evolutionary perspective, that’s to be expected, right? If your heart or lungs or legs or skin stop working properly while you’re a teenager, you’re less likely to reproduce, and the genes that cause that malfunction get weeded out of the gene pool.

So why is it that these evolutionary selective pressures seemingly fixed our bodies so that they work pretty smoothly for young people most of the time, but it feels like evolution fell asleep on the job when it comes to the brain? Why did evolution never get around to patching the most basic problems, like social anxiety, panic attacks, debilitating pessimism, or inappropriate mood swings? For that matter, why did evolution go out of its way to give us the capacity for low mood or chronic anxiety or extreme mood swings at all?

Today’s guest, Randy Nesse — a leader in the field of evolutionary psychiatry — wrote the book Good Reasons for Bad Feelings, in which he sets out to try to resolve this paradox.

Links to learn more, video, highlights, and full transcript.

In the interview, host Rob Wiblin and Randy discuss the key points of the book, as well as:

  • How the evolutionary psychiatry perspective can help people appreciate that their mental health problems are often the result of a useful and important system.
  • How evolutionary pressures and dynamics lead to a wide range of different personalities, behaviours, strategies, and tradeoffs.
  • The missing intellectual foundations of psychiatry, and how an evolutionary lens could revolutionise the field.
  • How working as both an academic and a practicing psychiatrist shaped Randy’s understanding of treating mental health problems.
  • The “smoke detector principle” of why we experience so many false alarms along with true threats.
  • The origins of morality and capacity for genuine love, and why Randy thinks it’s a mistake to try to explain these from a selfish gene perspective.
  • Evolutionary theories on why we age and die.
  • And much more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong
Transcriptions: Katy Moore

26 Mar 2021 | #95 – Kelly Wanser on whether to deliberately intervene in the climate | 01:24:08

How long do you think it’ll be before we’re able to bend the weather to our will? A massive rainmaking program in China, efforts to seed new oases in the Arabian Peninsula, or chemically induced snow for skiers in Colorado.

100 years? 50 years? 20?

Those who know how to write a teaser hook for a podcast episode will have correctly guessed that all these things are already happening today. And the techniques being used could be turned to managing climate change as well.

Today’s guest, Kelly Wanser, founded SilverLining — a nonprofit organization that advocates research into climate interventions, such as seeding or brightening clouds, to ensure that we maintain a safe climate.

Links to learn more, summary and full transcript.

Kelly says that current climate projections, even if we do everything right from here on out, imply that two degrees of global warming are now unavoidable. And the same scientists who made those projections fear the flow-through effect that warming could have.

Since our best case scenario may already be too dangerous, SilverLining focuses on ways that we could intervene quickly in the climate if things get especially grim — their research serving as a kind of insurance policy.

After considering everything from mirrors in space, to shiny objects on the ocean, to materials on the Arctic, their scientists concluded that the most promising approach was leveraging one of the ways that the Earth already regulates its temperature — the reflection of sunlight off particles and clouds in the atmosphere.

Cloud brightening is a climate control approach that involves spraying a fine mist of sea water into clouds to make them 'whiter', so they reflect even more sunlight back into space.

These ‘streaks’ in clouds are already created by ships because the particulates from their diesel engines inadvertently make clouds a bit brighter.

Kelly says that scientists estimate that we're already lowering the global temperature this way by 0.5–1.1°C, without even intending to.

While fossil fuel particulates are terrible for human health, they think we could replicate this effect by simply spraying sea water up into clouds. But so far there hasn't been funding to measure how much temperature change you get for a given amount of spray.

And we won't want to dive into these methods head first because the atmosphere is a complex system we can't yet properly model, and there are many things to check first. For instance, chemicals that reflect light from the upper atmosphere might totally change wind patterns in the stratosphere. Or they might not — for all the discussion of global warming the climate is surprisingly understudied.

The public tends to be skeptical of climate interventions, otherwise known as geoengineering, so in this episode we cover a range of possible objections, such as:

• It being riskier than doing nothing
• That it will inevitably be dangerously political
• And the risk of the 'double catastrophe', where a pandemic stops our climate interventions and temperatures sky-rocket at the worst time.

Kelly and Rob also talk about:

• The many climate interventions that are already happening
• The most promising ideas in the field
• And whether people would be more accepting if we found ways to intervene that had nothing to do with making the world a better place.

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:01:37)
  • Existing climate interventions (00:06:44)
  • Most promising ideas (00:16:23)
  • Doing good by accident (00:28:39)
  • Objections to this approach (00:31:16)
  • How much could countries do individually? (00:47:19)
  • Government funding (00:50:08)
  • Is global coordination possible? (00:53:01)
  • Malicious use (00:57:07)
  • Careers and SilverLining (01:04:03)
  • Rob’s outro (01:23:34)

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Sofia Davis-Fogel.

10 Jul 2023 | #156 – Markus Anderljung on how to regulate cutting-edge AI models | 02:06:36

"At the front of the pack we have these frontier AI developers, and we want them to identify particularly dangerous models ahead of time. Once those mines have been discovered, and the frontier developers keep walking down the minefield, there's going to be all these other people who follow along. And then a really important thing is to make sure that they don't step on the same mines. So you need to put a flag down -- not on the mine, but maybe next to it.

And so what that looks like in practice is maybe once we find that if you train a model in such-and-such a way, then it can produce maybe biological weapons is a useful example, or maybe it has very offensive cyber capabilities that are difficult to defend against. In that case, we just need the regulation to be such that you can't develop those kinds of models." — Markus Anderljung

In today’s episode, host Luisa Rodriguez interviews the Head of Policy at the Centre for the Governance of AI — Markus Anderljung — about all aspects of policy and governance of superhuman AI systems.

Links to learn more, summary and full transcript.

They cover:

  • The need for AI governance, including self-replicating models and ChaosGPT
  • Whether or not AI companies will willingly accept regulation
  • The key regulatory strategies including licencing, risk assessment, auditing, and post-deployment monitoring
  • Whether we can be confident that people won't train models covertly and ignore the licencing system
  • The progress we’ve made so far in AI governance
  • The key weaknesses of these approaches
  • The need for external scrutiny of powerful models
  • The emergent capabilities problem
  • Why it really matters where regulation happens
  • Advice for people wanting to pursue a career in this field
  • And much more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

05 Jul 2024 | #191 (Part 2) – Carl Shulman on government and society after AGI | 02:20:32

This is the second part of our marathon interview with Carl Shulman. The first episode is on the economy and national security after AGI. You can listen to them in either order!

If we develop artificial general intelligence that's reasonably aligned with human goals, it could put a fast and near-free superhuman advisor in everyone's pocket. How would that affect culture, government, and our ability to act sensibly and coordinate together?

It's common to worry that AI advances will lead to a proliferation of misinformation and further disconnect us from reality. But in today's conversation, AI expert Carl Shulman argues that this underrates the powerful positive applications the technology could have in the public sphere.

Links to learn more, highlights, and full transcript.

As Carl explains, today the most important questions we face as a society remain in the "realm of subjective judgement" -- without any "robust, well-founded scientific consensus on how to answer them." But if AI 'evals' and interpretability advance to the point that it's possible to demonstrate which AI models have truly superhuman judgement and give consistently trustworthy advice, society could converge on firm or 'best-guess' answers to far more cases.

If the answers are publicly visible and confirmable by all, the pressure on officials to act on that advice could be great. 

That's because when it's hard to assess if a line has been crossed or not, we usually give people much more discretion. For instance, a journalist inventing an interview that never happened will get fired because it's an unambiguous violation of honesty norms — but so long as there's no universally agreed-upon standard for selective reporting, that same journalist will have substantial discretion to report information that favours their preferred view more often than that which contradicts it.

Similarly, today we have no generally agreed-upon way to tell when a decision-maker has behaved irresponsibly. But if experience clearly shows that following AI advice is the wise move, not seeking or ignoring such advice could become more like crossing a red line — less like making an understandable mistake and more like fabricating your balance sheet.

To illustrate the possible impact, Carl imagines how the COVID pandemic could have played out in the presence of AI advisors that everyone agrees are exceedingly insightful and reliable. But in practice, a significantly superhuman AI might suggest novel approaches better than any we could come up with ourselves.

In the past we've usually found it easier to predict how hard technologies like planes or factories will change things than to imagine the social shifts that those technologies will create — and the same is likely happening for AI.

Carl Shulman and host Rob Wiblin discuss the above, as well as:

  • The risk of society using AI to lock in its values.
  • The difficulty of preventing coups once AI is key to the military and police.
  • What international treaties we need to make this go well.
  • How to make AI superhuman at forecasting the future.
  • Whether AI will be able to help us with intractable philosophical questions.
  • Whether we need dedicated projects to make wise AI advisors, or if it will happen automatically as models scale.
  • Why Carl doesn't support AI companies voluntarily pausing AI research, but sees a stronger case for binding international controls once we're closer to 'crunch time.'
  • Opportunities for listeners to contribute to making the future go well.

Chapters:

  • Cold open (00:00:00)
  • Rob’s intro (00:01:16)
  • The interview begins (00:03:24)
  • COVID-19 concrete example (00:11:18)
  • Sceptical arguments against the effect of AI advisors (00:24:16)
  • Value lock-in (00:33:59)
  • How democracies avoid coups (00:48:08)
  • Where AI could most easily help (01:00:25)
  • AI forecasting (01:04:30)
  • Application to the most challenging topics (01:24:03)
  • How to make it happen (01:37:50)
  • International negotiations and coordination and auditing (01:43:54)
  • Opportunities for listeners (02:00:09)
  • Why Carl doesn't support enforced pauses on AI research (02:03:58)
  • How Carl is feeling about the future (02:15:47)
  • Rob’s outro (02:17:37)


Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore

13 Feb 2020 | #70 – Dr Cassidy Nelson on the 12 best ways to stop the next pandemic (and limit nCoV) | 02:26:33

nCoV is alarming governments and citizens around the world. It has killed more than 1,000 people, brought the Chinese economy to a standstill, and continues to show up in more and more places. But bad though it is, it's much closer to a warning shot than a worst case scenario. The next emerging infectious disease could easily be more contagious, more fatal, or both.

Despite improvements in the last few decades, humanity is still not nearly prepared enough to contain new diseases. We identify them too slowly. We can't do enough to reduce their spread. And vaccines or drug treatments take at least a year to arrive, if they ever arrive at all.

Links to learn more, summary and full transcript.

This is a precarious situation, especially with advances in biotechnology increasing our ability to modify viruses and bacteria as we like.

In today's episode, Cassidy Nelson, a medical doctor and research scholar at Oxford University's Future of Humanity Institute, explains 12 things her research group think urgently need to happen if we're to keep the risk at acceptable levels. The ideas are:

Science

1. Roll out genetic sequencing tests that let you test someone for all known and unknown pathogens in one go.
2. Fund research into faster ‘platform’ methods for going from pathogen to vaccine, perhaps using innovation prizes.
3. Fund R&D into broad-spectrum drugs, especially antivirals, similar to how we have generic antibiotics against multiple types of bacteria.

Response

4. Develop a national plan for responding to a severe pandemic, regardless of the cause. Have a backup plan for when things are so bad the normal processes have stopped working entirely.
5. Rigorously evaluate in what situations travel bans are warranted. (They're more often counterproductive.)
6. Coax countries into more rapidly sharing their medical data, so that during an outbreak the disease can be understood and countermeasures deployed as quickly as possible.
7. Set up genetic surveillance in hospitals, public transport and elsewhere, to detect new pathogens before an outbreak — or even before patients develop symptoms.
8. Run regular tabletop exercises within governments to simulate how a pandemic response would play out.

Oversight 

9. Mandate disclosure of accidents in the biosafety labs which handle the most dangerous pathogens.
10. Figure out how to govern DNA synthesis businesses, to make it harder to mail order the DNA of a dangerous pathogen.
11. Require full cost-benefit analysis of 'dual-use' research projects that can generate global risks.
 
12. And finally, to maintain momentum, it's necessary to clearly assign responsibility for the above to particular individuals and organisations.

These advances can be pursued by politicians and public servants, as well as academics, entrepreneurs and doctors, opening the door for many listeners to pitch in to help solve this incredibly pressing problem.

In the episode Rob and Cassidy also talk about:

• How Cassidy went from clinical medicine to a PhD studying novel pathogens with pandemic potential.
• The pros, and significant cons, of travel restrictions.
• Whether the same policies work for natural and anthropogenic pandemics.
• Ways listeners can pursue a career in biosecurity.
• Where we stand with nCoV as of today.

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:03:27)
  • Where we stand with nCov today (00:07:24)
  • Policy idea 1: A drastic change to diagnostic testing (00:34:58)
  • Policy idea 2: Vaccine platforms (00:47:08)
  • Policy idea 3: Broad-spectrum therapeutics (00:54:48)
  • Policy idea 4: Develop a national plan for responding to a severe pandemic, regardless of the cause (01:02:15)
  • Policy idea 5: A different approach to travel bans (01:15:59)
  • Policy idea 6: Data sharing (01:16:48)
  • Policy idea 7: Prevention (01:24:45)
  • Policy idea 8: Transparency around lab accidents (01:33:58)
  • Policy idea 9: DNA synthesis screening (01:39:22)
  • Policy idea 10: Dual Use Research oversight (01:48:47)
  • Policy idea 11: Pandemic tabletop exercises (02:00:00)
  • Policy idea 12: Coordination (02:12:20)


Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris.
Transcriptions: Zakee Ulhaq.

18 Oct 2023 | #167 – Seren Kell on the research gaps holding back alternative proteins from mass adoption | 01:54:49

"There have been literally thousands of years of breeding and living with animals to optimise these kinds of problems. But because we're just so early on with alternative proteins and there's so much white space, it's actually just really exciting to know that we can keep on innovating and being far more efficient than this existing technology — which, fundamentally, is just quite inefficient. You're feeding animals a bunch of food to then extract a small fraction of their biomass to then eat that.

Animal agriculture takes up 83% of farmland, but produces just 18% of food calories. So the current system just is so wasteful. And the limiting factor is that you're just growing a bunch of food to then feed a third of the world's crops directly to animals, where the vast majority of those calories going in are lost to animals existing." — Seren Kell

Links to learn more, summary and full transcript.

In today’s episode, host Luisa Rodriguez interviews Seren Kell — Senior Science and Technology Manager at the Good Food Institute Europe — about making alternative proteins as tasty, cheap, and convenient as traditional meat, dairy, and egg products.

They cover:

  • The basic case for alternative proteins, and why they’re so hard to make
  • Why fermentation is a surprisingly promising technology for creating delicious alternative proteins 
  • The main scientific challenges that need to be solved to make fermentation even more useful
  • The progress that’s been made on the cultivated meat front, and what it will take to make cultivated meat affordable
  • How GFI Europe is helping with some of these challenges
  • How people can use their careers to contribute to replacing factory farming with alternative proteins
  • The best part of Seren’s job
  • Plenty more

Chapters:

  • Cold open (00:00:00)
  • Luisa’s intro (00:01:08)
  • The interview begins (00:02:22)
  • Why alternative proteins? (00:02:36)
  • What makes alternative proteins so hard to make? (00:11:30)
  • Why fermentation is so exciting (00:24:23)
  • The technical challenges involved in scaling fermentation (00:44:38)
  • Progress in cultivated meat (01:06:04)
  • GFI Europe’s work (01:32:47)
  • Careers (01:45:10)
  • The best part of Seren’s job (01:50:07)


Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong and Milo McGuire
Additional content editing: Luisa Rodriguez and Katy Moore
Transcriptions: Katy Moore

18 Oct 2021 | #113 – Varsha Venugopal on using gossip to help vaccinate every child in India | 02:05:44

Our failure to make sure all kids globally get all of their basic vaccinations leads to 1.5 million child deaths every year.

According to today’s guest, Varsha Venugopal, for the great majority this has nothing to do with weird conspiracy theories or medical worries — in India 80% of undervaccinated children are already getting some shots. They just aren't getting all of them, for the tragically mundane reason that life can get in the way.

Links to learn more, summary and full transcript.

As Varsha says, we're all sometimes guilty of "valuing our present very differently from the way we value the future", leading to short-term thinking whether about getting vaccines or going to the gym.

So who should we call on to help fix this universal problem? The government, extended family, or maybe village elders?

Varsha says that research shows the most influential figures might actually be local gossips.

In 2018, Varsha heard about the ideas around effective altruism for the first time. By the end of 2019, she’d gone through Charity Entrepreneurship’s strategy incubation program, and quit her normal, stable job to co-found Suvita, a non-profit focused on improving the uptake of immunization in India through two models:
1. Sending SMS reminders directly to parents and carers
2. Gossip

The first one is intuitive. You collect birth registers, digitize the paper records, process the data, and send out personalised SMS messages to hundreds of thousands of families. The effect size varies depending on the context but these messages usually increase vaccination rates by 8-18%.

The second approach is less intuitive and isn't yet entirely understood either.

Here’s what happens: Suvita calls up random households and asks, “if there were an event in town, who would be most likely to tell you about it?”

In over 90% of the cases, the households gave both the name and the phone number of a local ‘influencer’.

And when tracked down, more than 95% of the most frequently named 'influencers' agreed to become vaccination ambassadors. Those ambassadors then go on to share information about when and where to get vaccinations, in whatever way seems best to them.

When tested by a team of top academics at the Poverty Action Lab (J-PAL), it raised vaccination rates by 10 percentage points, a relative increase of about 27%.

The advantage of SMS reminders is that they’re easier to scale up. But Varsha says the ambassador program isn’t actually that far from being a scalable model as well.

A phone call to get a name, another call to ask the influencer to join, and boom — you might have just covered a whole village rather than just a single family.

Varsha says that Suvita has two major challenges on the horizon:
1. Maintaining the same degree of oversight of their surveyors as they attempt to scale up the program, in order to ensure the program continues to work just as well
2. Deciding between focusing on reaching a few additional districts now vs. making longer-term investments that could build up to a future exponential increase.

In this episode, Varsha and Rob talk about making these kinds of high-stakes, high-stress decisions, as well as:
• How Suvita got started, and their experience with Charity Entrepreneurship
• Weaknesses of the J-PAL studies
• The importance of co-founders
• Deciding how broad a program should be
• Varsha’s day-to-day experience
• And much more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:01:47)
  • The problem of undervaccinated kids (00:03:16)
  • Suvita (00:12:47)
  • Evidence on SMS reminders (00:20:30)
  • Gossip intervention (00:28:43)
  • Why parents aren’t already prioritizing vaccinations (00:38:29)
  • Weaknesses of studies (00:43:01)
  • Biggest challenges for Suvita (00:46:05)
  • Staff location (01:06:57)
  • Charity Entrepreneurship (01:14:37)
  • The importance of co-founders (01:23:23)
  • Deciding how broad a program should be (01:28:29)
  • Careers at Suvita (01:34:11)
  • Varsha’s advice (01:42:30)
  • Varsha’s day-to-day experience (01:56:19)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

29 Dec 2022#143 – Jeffrey Lewis on the most common misconceptions about nuclear weapons02:40:17

America aims to avoid nuclear war by relying on the principle of 'mutually assured destruction,' right? Wrong. Or at least... not officially.

As today's guest — Jeffrey Lewis, founder of Arms Control Wonk and professor at the Middlebury Institute of International Studies — explains, in its official 'OPLANs' (military operation plans), the US is committed to 'dominating' in a nuclear war with Russia. How would they do that? "That is redacted."

Links to learn more, summary and full transcript.

We invited Jeffrey to come on the show to lay out what we and our listeners are most likely to be misunderstanding about nuclear weapons, the nuclear posture of major powers, and his field as a whole, and he did not disappoint.

As Jeffrey tells it, 'mutually assured destruction' was a slur used to criticise those who wanted to limit the 1960s arms buildup, and was never accepted as a matter of policy in any US administration. But isn't it still the de facto reality? Yes and no.

Jeffrey is a specialist on the nuts and bolts of bureaucratic and military decision-making in real-life situations. He suspects that at the start of their term presidents get a briefing about the US' plan to prevail in a nuclear war and conclude that "it's freaking madness." They say to themselves that whatever these silly plans may say, they know a nuclear war cannot be won, so they just won't use the weapons.

But Jeffrey thinks that's a big mistake. Yes, in a calm moment presidents can resist pressure from advisors and generals. But that idea of ‘winning’ a nuclear war is in all the plans. Staff have been hired because they believe in those plans. It's what the generals and admirals have all prepared for.

What matters is the 'not calm moment': the 3AM phone call to tell the president that ICBMs might hit the US in eight minutes — the same week Russia invades a neighbour or China invades Taiwan. Is it a false alarm? Should they retaliate before their land-based missile silos are hit? There are only minutes to decide.

Jeffrey points out that in emergencies, presidents have repeatedly found themselves railroaded into actions they didn't want to take because of how information and options were processed and presented to them. In the heat of the moment, it's natural to reach for the plan you've prepared — however mad it might sound.

In this spicy conversation, Jeffrey fields the most burning questions from Rob and the audience, in the process explaining:
• Why inter-service rivalry is one of the biggest constraints on US nuclear policy
• Two times the US sabotaged nuclear nonproliferation among great powers
• How his field uses jargon to exclude outsiders
• How the US could prevent the revival of mass nuclear testing by the great powers
• Why nuclear deterrence relies on the possibility that something might go wrong
• Whether 'salami tactics' render nuclear weapons ineffective
• The time the Navy and Air Force switched views on how to wage a nuclear war, just when it would allow *them* to have the most missiles
• The problems that arise when you won't talk to people you think are evil
• Why missile defences are politically popular despite being strategically foolish
• How open source intelligence can prevent arms races
• And much more.

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:49)
  • Misconceptions in the effective altruism community (00:05:42)
  • Nuclear deterrence (00:17:36)
  • Dishonest rituals (00:28:17)
  • Downsides of generalist research (00:32:13)
  • “Mutual assured destruction” (00:38:18)
  • Budgetary considerations for competing parts of the US military (00:51:53)
  • Where the effective altruism community can potentially add the most value (01:02:15)
  • Gatekeeping (01:12:04)
  • Strengths of the nuclear security community (01:16:14)
  • Disarmament (01:26:58)
  • Nuclear winter (01:38:53)
  • Attacks against US allies (01:41:46)
  • Most likely weapons to get used (01:45:11)
  • The role of moral arguments (01:46:40)
  • Salami tactics (01:52:01)
  • Jeffrey's disagreements with Thomas Schelling (01:57:00)
  • Why did it take so long to get nuclear arms agreements? (02:01:11)
  • Detecting secret nuclear facilities (02:03:18)
  • Where Jeffrey would give $10M in grants (02:05:46)
  • The importance of archival research (02:11:03)
  • Jeffrey's policy ideas (02:20:03)
  • What should the US do regarding China? (02:27:10)
  • What should the US do regarding Russia? (02:31:42)
  • What should the US do regarding Taiwan? (02:35:27)
  • Advice for people interested in working on nuclear security (02:37:23)
  • Rob’s outro (02:39:13)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

28 Jul 2021#106 – Cal Newport on an industrial revolution for office work01:53:27

If you wanted to start a university department from scratch, and attract as many superstar researchers as possible, what’s the most attractive perk you could offer?

How about just not needing an email address.

According to today's guest, Cal Newport — computer science professor and best-selling author of A World Without Email — it should seem obscene and absurd for a world-renowned vaccine researcher with decades of experience to spend a third of their time fielding requests from HR, building management, finance, and so on. Yet with offices organised the way they are today, nothing could be more natural.

Links to learn more, summary and full transcript.

But this isn’t just a problem at the elite level — this affects almost all of us. A typical U.S. office worker checks their email 80 times a day, once every six minutes on average. Data analysis by RescueTime found that a third of users checked email or Slack at least once every three minutes, averaged over a full workday.

Each time that happens our focus is broken, killing our momentum on the knowledge work we're supposedly paid to do.

When we lament how much email and chat have reduced our focus and filled our days with anxiety and frenetic activity, we most naturally blame 'weakness of will'. If only we had the discipline to check Slack and email once a day, all would be well — or so the story goes.

Cal believes that line of thinking fundamentally misunderstands how we got to a place where knowledge workers can rarely find more than five consecutive minutes to spend doing just one thing.

Since the Industrial Revolution, a combination of technology and better organization has allowed the manufacturing industry to produce a hundred times as much with the same number of people.

Cal says that by comparison, it's not clear that specialised knowledge workers like scientists, authors, or senior managers are *any* more productive than they were 50 years ago. If the knowledge sector could achieve even a tiny fraction of what manufacturing has, and find a way to coordinate its work that raised productivity by just 1%, that would generate on the order of $100 billion globally each year.

Since the 1990s, when everyone got an email address and most lost their assistants, the absence of any deliberate way of organising collaboration has led to what Cal calls the 'hyperactive hive mind': everyone sends emails and chats to everyone else, all through the day, whenever they need something.

Cal points out that this is so normal we don't even think of it as a way of organising work, but it is: it's what happens when management does nothing to enable teams to decide on a better way of organising themselves.

A few industries have made progress taming the 'hyperactive hive mind'. But on Cal's telling, this barely scratches the surface of the improvements that are possible within knowledge work. And reining in the hyperactive hive mind won't just help people do higher quality work, it will free them from the 24/7 anxiety that there's someone somewhere they haven't gotten back to.

In this interview Cal and Rob also cover:
• Is this really one of the world's most pressing problems?
• The historical origins of the 'hyperactive hive mind'
• The harm caused by attention switching
• Who's working to solve the problem and how
• Cal's top productivity advice for high school students, university students, and early career workers
• And much more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:02)
  • The hyperactive hivemind (00:04:11)
  • Scale of the harm (00:08:40)
  • Is email making professors stupid? (00:22:09)
  • Why haven't we already made these changes? (00:29:38)
  • Do people actually prefer the hyperactive hivemind? (00:43:31)
  • Solutions (00:55:52)
  • Advocacy (01:10:47)
  • How to Be a High School Superstar (01:23:03)
  • How to Win at College (01:27:46)
  • So Good They Can't Ignore You (01:31:47)
  • Personal barriers (01:42:51)
  • George Marshall (01:47:11)
  • Rob’s outro (01:49:18)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

07 Aug 2023#159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less02:51:20

In July, OpenAI announced a new team and project: Superalignment. The goal is to figure out how to make superintelligent AI systems aligned and safe to use within four years, and the lab is putting a massive 20% of its computational resources behind the effort.

Today's guest, Jan Leike, is Head of Alignment at OpenAI and will be co-leading the project. As OpenAI puts it, "...the vast power of superintelligence could be very dangerous, and lead to the disempowerment of humanity or even human extinction. ... Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue."

Links to learn more, summary and full transcript.

Given that OpenAI is in the business of developing superintelligent AI, it sees that as a scary problem that urgently has to be fixed. So it’s not just throwing compute at the problem -- it’s also hiring dozens of scientists and engineers to build out the Superalignment team.

Plenty of people are pessimistic that this can be done at all, let alone in four years. But Jan is guardedly optimistic. As he explains: 

Honestly, it really feels like we have a real angle of attack on the problem that we can actually iterate on... and I think it's pretty likely going to work, actually. And that's really, really wild, and it's really exciting. It's like we have this hard problem that we've been talking about for years and years and years, and now we have a real shot at actually solving it. And that'd be so good if we did.


Jan thinks that this work is actually the most scientifically interesting part of machine learning. Rather than just throwing more chips and more data at a training run, this work requires actually understanding how these models work and how they think. The answers are likely to be breakthroughs on the level of solving the mysteries of the human brain.

The plan, in a nutshell, is to get AI to help us solve alignment. That might sound a bit crazy -- as one person described it, “like using one fire to put out another fire.”

But Jan’s thinking is this: the core problem is that AI capabilities will keep getting better and the challenge of monitoring cutting-edge models will keep getting harder, while human intelligence stays more or less the same. To have any hope of ensuring safety, we need our ability to monitor, understand, and design ML models to advance at the same pace as the complexity of the models themselves. 

And there's an obvious way to do that: get AI to do most of the work, such that the sophistication of the AIs that need aligning, and the sophistication of the AIs doing the aligning, advance in lockstep.

Jan doesn't want to produce machine learning models capable of doing ML research. But such models are coming, whether we like it or not. And at that point Jan wants to make sure we turn them towards useful alignment and safety work, as much or more than we use them to advance AI capabilities.

Jan thinks it's so crazy it just might work. But some critics think it's simply crazy. They ask a wide range of difficult questions, including:

  • If you don't know how to solve alignment, how can you tell that your alignment assistant AIs are actually acting in your interest rather than working against you? Especially as they could just be pretending to care about what you care about.
  • How do you know that these technical problems can be solved at all, even in principle?
  • At the point that models are able to help with alignment, won't they also be so good at improving capabilities that we're in the middle of an explosion in what AI can do?


In today's interview host Rob Wiblin puts these doubts to Jan to hear how he responds to each, and they also cover:

  • OpenAI's current plans to achieve 'superalignment' and the reasoning behind them
  • Why alignment work is the most fundamental and scientifically interesting research in ML
  • The kinds of people he’s excited to hire to join his team and maybe save the world
  • What most readers misunderstood about the OpenAI announcement
  • The three ways Jan expects AI to help solve alignment: mechanistic interpretability, generalization, and scalable oversight
  • What the standard should be for confirming whether Jan's team has succeeded
  • Whether OpenAI should (or will) commit to stop training more powerful general models if they don't think the alignment problem has been solved
  • Whether Jan thinks OpenAI has deployed models too quickly or too slowly
  • The many other actors who also have to do their jobs really well if we're going to have a good AI future
  • Plenty more


Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

24 Jan 2020#68 - Will MacAskill on the paralysis argument, whether we're at the hinge of history, & his new priorities03:25:36

You’re given a box with a set of dice in it. If you roll an even number, a person's life is saved. If you roll an odd number, someone else will die. Each time you shake the box you get $10. Should you do it?

 A committed consequentialist might say, "Sure! Free money!" But most will think it obvious that you should say no. You've only gotten a tiny benefit, in exchange for moral responsibility over whether other people live or die.

And yet, according to today’s return guest, philosophy Prof Will MacAskill, in a real sense we’re shaking this box every time we leave the house, and those who think shaking the box is wrong should probably also be shutting themselves indoors and minimising their interactions with others.

Links to learn more, summary and full transcript.
Job opportunities at the Global Priorities Institute.

To see this, imagine you’re deciding whether to redeem a coupon for a free movie. If you go, you’ll need to drive to the cinema. By affecting traffic throughout the city, you’ll have slightly impacted the schedules of thousands or tens of thousands of people. The average life is about 30,000 days, and over the course of a life the average person will have about two children. So — if you’ve impacted at least 7,500 days — then, statistically speaking, you've probably influenced the exact timing of a conception event. With 200 million sperm in the running each time, changing the moment of copulation, even by a fraction of a second, will almost certainly mean you've changed the identity of a future person.
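
To make the arithmetic behind the 7,500-day figure explicit, here is a rough reconstruction of the reasoning above (a back-of-the-envelope reading of the setup, not a calculation taken directly from Will's paper):

```latex
% ~2 conceptions occur over a ~30,000-day life, i.e. roughly one conception
% for every 15,000 person-days of ordinary living.
\frac{2~\text{conceptions}}{30{,}000~\text{days}} \approx \frac{1~\text{conception}}{15{,}000~\text{person-days}}

% So perturbing the schedules of ~7,500 person-days touches, in expectation,
% about half a conception event:
7{,}500~\text{person-days} \times \frac{1}{15{,}000} = 0.5~\text{conception events}
```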

That different child will now impact all sorts of things as they go about their life, including future conception events. And then those new people will impact further future conception events, and so on. After 100 or maybe 200 years, basically everybody alive will be a different person because you went to the movies.

As a result, you’ll have changed when many people die. Take car crashes as one example: about 1.3% of people die in car crashes. Over that century, as the identities of everyone change as a result of your action, many of the 'new' people will cause car crashes that wouldn't have occurred in their absence, including crashes that prematurely kill people alive today.

Of course, in expectation, exactly the same number of people will have been saved from car crashes, and will die later than they would have otherwise.

So, if you go for this drive, you’ll save hundreds of people from premature death, and cause the early death of an equal number of others. But you’ll get to see a free movie, worth $10. Should you do it?

This setup forms the basis of ‘the paralysis argument’, explored in one of Will’s recent papers.

Because most 'non-consequentialists' endorse an act/omission distinction… post truncated due to character limit, finish reading the full explanation here.

So what's the best way to fix this strange conclusion? We discuss a few options, but the most promising might bring people a lot closer to full consequentialism than is immediately apparent. In this episode Will and I also cover:

• Are we, or are we not, living in the most influential time in history?
• The culture of the effective altruism community
• Will's new lower estimate of the risk of human extinction
• Why Will is now less focused on AI
• The differences between Americans and Brits
• Why feeling guilty about characteristics you were born with is crazy
• And plenty more.

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:04:03)
  • The paralysis argument (00:15:42)
  • The case for strong longtermism (00:55:21)
  • Longtermism for risk-averse altruists (00:58:01)
  • Are we living in the most influential time in history? (01:14:37)
  • The risk of human extinction in the next hundred years (02:15:20)
  • Implications for the effective altruism community (02:50:03)
  • Culture of the effective altruism community (03:06:28)

Producer: Keiran Harris. 
Audio mastering: Ben Cordell. 
Transcriptions: Zakee Ulhaq.

15 Aug 2024#196 – Jonathan Birch on the edge cases of sentience and why they matter02:01:50

"In the 1980s, it was still apparently common to perform surgery on newborn babies without anaesthetic on both sides of the Atlantic. This led to appalling cases, and to public outcry, and to campaigns to change clinical practice. And as soon as [some courageous scientists] looked for evidence, it showed that this practice was completely indefensible and then the clinical practice was changed. People don’t need convincing anymore that we should take newborn human babies seriously as sentience candidates. But the tale is a useful cautionary tale, because it shows you how deep that overconfidence can run and how problematic it can be. It just underlines this point that overconfidence about sentience is everywhere and is dangerous." —Jonathan Birch

In today’s episode, host Luisa Rodriguez speaks to Dr Jonathan Birch — philosophy professor at the London School of Economics — about his new book, The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. (Check out the free PDF version!)

Links to learn more, highlights, and full transcript.

They cover:

  • Candidates for sentience, such as humans with consciousness disorders, foetuses, neural organoids, invertebrates, and AIs
  • Humanity’s history of acting as if we’re sure that such beings are incapable of having subjective experiences — and why Jonathan thinks that that certainty is completely unjustified.
  • Chilling tales about overconfident policies that probably caused significant suffering for decades.
  • How policymakers can act ethically given real uncertainty.
  • Whether simulating the brain of the roundworm C. elegans or Drosophila (aka fruit flies) would create minds equally sentient to the biological versions.
  • How new technologies like brain organoids could replace animal testing, and how big the risk is that they could be sentient too.
  • Why Jonathan is so excited about citizens’ assemblies.
  • Jonathan’s conversation with the Dalai Lama about whether insects are sentient.
  • And plenty more.

Chapters:

  • Cold open (00:00:00)
  • Luisa’s intro (00:01:20)
  • The interview begins (00:03:04)
  • Why does sentience matter? (00:03:31)
  • Inescapable uncertainty about other minds (00:05:43)
  • The “zone of reasonable disagreement” in sentience research (00:10:31)
  • Disorders of consciousness: comas and minimally conscious states (00:17:06)
  • Foetuses and the cautionary tale of newborn pain (00:43:23)
  • Neural organoids (00:55:49)
  • AI sentience and whole brain emulation (01:06:17)
  • Policymaking at the edge of sentience (01:28:09)
  • Citizens’ assemblies (01:31:13)
  • The UK’s Sentience Act (01:39:45)
  • Ways Jonathan has changed his mind (01:47:26)
  • Careers (01:54:54)
  • Discussing animal sentience with the Dalai Lama (01:59:08)
  • Luisa’s outro (02:01:04)


Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

04 Apr 2025#214 – Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway02:16:03

Most AI safety conversations centre on alignment: ensuring AI systems share our values and goals. But despite progress, we’re unlikely to know we’ve solved the problem before the arrival of human-level and superhuman systems in as little as three years.

So some are developing a backup plan to safely deploy models we fear are actively scheming to harm us — so-called “AI control.” While this may sound mad, given the reluctance of AI companies to delay deploying anything they train, not developing such techniques is probably even crazier.

Today’s guest — Buck Shlegeris, CEO of Redwood Research — has spent the last few years developing control mechanisms, and for human-level systems they’re more plausible than you might think. He argues that given companies’ unwillingness to incur large costs for security, accepting the possibility of misalignment and designing robust safeguards might be one of our best remaining options.

Links to learn more, highlights, video, and full transcript.

As Buck puts it: "Five years ago I thought of misalignment risk from AIs as a really hard problem that you’d need some really galaxy-brained fundamental insights to resolve. Whereas now, to me the situation feels a lot more like we just really know a list of 40 things where, if you did them — none of which seem that hard — you’d probably be able to not have very much of your problem."

Of course, even if Buck is right, we still need to do those 40 things — which he points out we’re not on track for. And AI control agendas have their limitations: they aren’t likely to work once AI systems are much more capable than humans, since greatly superhuman AIs can probably work around whatever limitations we impose.

Still, AI control agendas seem to be gaining traction within AI safety. Buck and host Rob Wiblin discuss all of the above, plus:

  • Why he’s more worried about AI hacking its own data centre than escaping
  • What to do about “chronic harm,” where AI systems subtly underperform or sabotage important work like alignment research
  • Why he might want to use a model he thought could be conspiring against him
  • Why he would feel safer if he caught an AI attempting to escape
  • Why many control techniques would be relatively inexpensive
  • How to use an untrusted model to monitor another untrusted model
  • What the minimum viable intervention in a “lazy” AI company might look like
  • How even small teams of safety-focused staff within AI labs could matter
  • The moral considerations around controlling potentially conscious AI systems, and whether it’s justified

Chapters:

  • Cold open (00:00:00)
  • Who’s Buck Shlegeris? (00:01:27)
  • What's AI control? (00:01:51)
  • Why is AI control hot now? (00:05:39)
  • Detecting human vs AI spies (00:10:32)
  • Acute vs chronic AI betrayal (00:15:21)
  • How to catch AIs trying to escape (00:17:48)
  • The cheapest AI control techniques (00:32:48)
  • Can we get untrusted models to do trusted work? (00:38:58)
  • If we catch a model escaping... will we do anything? (00:50:15)
  • Getting AI models to think they've already escaped (00:52:51)
  • Will they be able to tell it's a setup? (00:58:11)
  • Will AI companies do any of this stuff? (01:00:11)
  • Can we just give AIs fewer permissions? (01:06:14)
  • Can we stop human spies the same way? (01:09:58)
  • The pitch to AI companies to do this (01:15:04)
  • Will AIs get superhuman so fast that this is all useless? (01:17:18)
  • Risks from AI deliberately doing a bad job (01:18:37)
  • Is alignment still useful? (01:24:49)
  • Current alignment methods don't detect scheming (01:29:12)
  • How to tell if AI control will work (01:31:40)
  • How can listeners contribute? (01:35:53)
  • Is 'controlling' AIs kind of a dick move? (01:37:13)
  • Could 10 safety-focused people in an AGI company do anything useful? (01:42:27)
  • Benefits of working outside frontier AI companies (01:47:48)
  • Why Redwood Research does what it does (01:51:34)
  • What other safety-related research looks best to Buck? (01:58:56)
  • If an AI escapes, is it likely to be able to beat humanity from there? (01:59:48)
  • Will misaligned models have to go rogue ASAP, before they're ready? (02:07:04)
  • Is research on human scheming relevant to AI? (02:08:03)

This episode was originally recorded on February 21, 2025.

Video: Simon Monsour and Luke Monsour
Audio engineering: Ben Cordell, Milo McGuire, and Dominic Armstrong
Transcriptions and web: Katy Moore

13 Aug 2020#84 – Shruti Rajagopalan on what India did to stop COVID-19 and how well it worked02:58:14

When COVID-19 struck the US, everyone was told that hand sanitizer needed to be saved for healthcare professionals, so they should just wash their hands instead. But in India, many homes lack reliable piped water, so they had to do the opposite: distribute hand sanitizer as widely as possible.

 American advocates for banning single-use plastic straws might be outraged at the widespread adoption of single-use hand sanitizer sachets in India. But the US and India are very different places, and it might be the only way out when you're facing a pandemic without running water.

According to today’s guest, Shruti Rajagopalan, Senior Research Fellow at the Mercatus Center at George Mason University, that's typical: context is key to policy-making. This prompted Shruti to propose a set of policy responses designed specifically for India back in April.

Unfortunately she thinks it's surprisingly hard to know what one should and shouldn't imitate from overseas.

Links to learn more, summary and full transcript.

For instance, some places in India installed shared handwashing stations in bus stops and train stations, which is something no developed country would advise. But in India, you can't necessarily wash your hands at home — so shared faucets might be the lesser of two evils. (Though note scientists have downgraded the importance of hand hygiene lately.)

Stay-at-home orders offer a more serious example. Developing countries find themselves in a serious bind that rich countries do not.

With nearly no slack in healthcare capacity, India lacks equipment to treat even a small number of COVID-19 patients. That suggests strict controls on movement and economic activity might be necessary to control the pandemic.

But many people in India and elsewhere can't afford to shelter in place for weeks, let alone months. And governments in poorer countries may not be able to afford to send everyone money — even where they have the infrastructure to do so fast enough.

India ultimately did impose strict lockdowns, lasting almost 70 days, but the human toll has been larger than in rich countries, with vast numbers of migrant workers stranded far from home with limited if any income support.

There were no trains or buses, and the government made no provision to deal with the situation. Unable to afford rent where they were, many people had to walk hundreds of kilometers to reach home, carrying children and belongings with them.

But in some other ways the context of developing countries is more promising. In the US many people melted down when asked to wear facemasks. But in South Asia, people just wore them.

Shruti isn’t sure whether that's because of existing challenges with high pollution, past experiences with pandemics, or because intergenerational living makes the wellbeing of others more salient, but the end result is that masks weren’t politicised in the way they were in the US.

In addition, despite the suffering caused by India's policy response to COVID-19, public support for the measures and the government remains high — and India's population is much younger and so less affected by the virus.

In this episode, Howie and Shruti explore the unique policy challenges facing India in its battle with COVID-19, what they've tried to do, and how it has gone.

They also cover:

• What an economist can bring to the table during a pandemic
• The mystery of India’s surprisingly low mortality rate
• Policies that should be implemented today
• What makes a good constitution

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:27)
  • What an economist can bring to the table for COVID-19 (00:07:54)
  • What India has done about the coronavirus (00:12:24)
  • Why it took so long for India to start seeing a lot of cases (00:25:08)
  • How India is doing at the moment with COVID-19 (00:27:55)
  • Is the mortality rate surprisingly low in India? (00:40:32)
  • Why Southeast Asian countries have done so well so far (00:55:43)
  • Different attitudes to masks globally (00:59:25)
  • Differences in policy approaches for developing countries (01:07:27)
  • India’s strict lockdown (01:25:56)
  • Lockdown for the average rural Indian (01:39:11)
  • Public reaction to the lockdown in India (01:44:39)
  • Policies that should be implemented today (01:50:29)
  • India’s overall reaction to COVID-19 (01:57:23)
  • Constitutional economics (02:03:28)
  • What makes a good constitution (02:11:47)
  • Emergent Ventures (02:27:34)
  • Careers (02:47:57)
  • Rob’s outro (02:57:51)

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

15 Apr 2020Article: Reducing global catastrophic biological risks01:04:15

In a few days we'll be putting out a conversation with Dr Greg Lewis, who studies how to prevent global catastrophic biological risks at Oxford's Future of Humanity Institute.

Greg also wrote a new problem profile on that topic for our website, and reading that is a good lead-in to our interview with him. So in a bit of an experiment we decided to make this audio version of that article, narrated by the producer of the 80,000 Hours Podcast, Keiran Harris.

We’re thinking about having audio versions of other important articles we write, so it’d be great if you could let us know if you’d like more of these. You can email us your view at podcast@80000hours.org.

If you want to check out all of Greg’s graphs and footnotes that we didn’t include, and get links to learn more about GCBRs, you can find those here.

And if you want to read more about COVID-19, the 80,000 Hours team has produced a fantastic package of 10 pieces about how to stop the pandemic. You can find those here.

10 Feb 2025AGI disagreements and misconceptions: Rob, Luisa, & past guests hash it out03:12:24

Will LLMs soon be made into autonomous agents? Will they lead to job losses? Is AI misinformation overblown? Will it prove easy or hard to create AGI? And how likely is it that it will feel like something to be a superhuman AGI?

With AGI back in the headlines, we bring you 15 opinionated highlights from the show addressing those and other questions, intermixed with opinions from hosts Luisa Rodriguez and Rob Wiblin recorded back in 2023.

Check out the full transcript on the 80,000 Hours website.

You can decide whether the views we (and our guests) expressed back then have held up over these last two busy years. You’ll hear:

  • Ajeya Cotra on overrated AGI worries
  • Holden Karnofsky on the dangers of aligned AI, why unaligned AI might not kill us, and the power that comes from just making models bigger
  • Ian Morris on why the future must be radically different from the present
  • Nick Joseph on whether his company’s internal safety policies are enough
  • Richard Ngo on what everyone gets wrong about how ML models work
  • Tom Davidson on why he believes crazy-sounding explosive growth stories… and Michael Webb on why he doesn’t
  • Carl Shulman on why you’ll prefer robot nannies over human ones
  • Zvi Mowshowitz on why he’s against working at AI companies except in some safety roles
  • Hugo Mercier on why even superhuman AGI won’t be that persuasive
  • Rob Long on the case for and against digital sentience
  • Anil Seth on why he thinks consciousness is probably biological
  • Lewis Bollard on whether AI advances will help or hurt nonhuman animals
  • Rohin Shah on whether humanity’s work ends at the point it creates AGI

And of course, Rob and Luisa also regularly chime in on what they agree and disagree with.

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:00:58)
  • Rob & Luisa: Bowerbirds compiling the AI story (00:03:28)
  • Ajeya Cotra on the misalignment stories she doesn’t buy (00:09:16)
  • Rob & Luisa: Agentic AI and designing machine people (00:24:06)
  • Holden Karnofsky on the dangers of even aligned AI, and how we probably won’t all die from misaligned AI (00:39:20)
  • Ian Morris on why we won’t end up living like The Jetsons (00:47:03)
  • Rob & Luisa: It’s not hard for nonexperts to understand we’re playing with fire here (00:52:21)
  • Nick Joseph on whether AI companies’ internal safety policies will be enough (00:55:43)
  • Richard Ngo on the most important misconception in how ML models work (01:03:10)
  • Rob & Luisa: Issues Rob is less worried about now (01:07:22)
  • Tom Davidson on why he buys the explosive economic growth story, despite it sounding totally crazy (01:14:08)
  • Michael Webb on why he’s sceptical about explosive economic growth (01:20:50)
  • Carl Shulman on why people will prefer robot nannies over humans (01:28:25)
  • Rob & Luisa: Should we expect AI-related job loss? (01:36:19)
  • Zvi Mowshowitz on why he thinks it’s a bad idea to work on improving capabilities at cutting-edge AI companies (01:40:06)
  • Holden Karnofsky on the power that comes from just making models bigger (01:45:21)
  • Rob & Luisa: Are risks of AI-related misinformation overblown? (01:49:49)
  • Hugo Mercier on how AI won’t cause misinformation pandemonium (01:58:29)
  • Rob & Luisa: How hard will it actually be to create intelligence? (02:09:08)
  • Robert Long on whether digital sentience is possible (02:15:09)
  • Anil Seth on why he believes in the biological basis of consciousness (02:27:21)
  • Lewis Bollard on whether AI will be good or bad for animal welfare (02:40:52)
  • Rob & Luisa: The most interesting new argument Rob’s heard this year (02:50:37)
  • Rohin Shah on whether AGI will be the last thing humanity ever does (02:57:35)
  • Rob's outro (03:11:02)

Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions and additional content editing: Katy Moore

01 May 2017#0 – Introducing the 80,000 Hours Podcast00:03:54
80,000 Hours is a non-profit that provides research and other support to help people switch into careers that effectively tackle the world's most pressing problems. This podcast is just one of many things we offer, the others of which you can find at 80000hours.org.

Since 2017 this show has been putting out interviews about the world's most pressing problems and how to solve them — which some people enjoy because they love to learn about important things, and others are using to figure out what they want to do with their careers or with their charitable giving.

If you haven't yet spent a lot of time with 80,000 Hours or our general style of thinking, called effective altruism, it's probably really helpful to first go through the episodes that set the scene, explain our overall perspective on things, and generally offer all the background information you need to get the most out of the episodes we're making now.

That's why we've made a new feed with ten carefully selected episodes from the show's archives, called 'Effective Altruism: An Introduction'.

You can find it by searching for 'Effective Altruism' in your podcasting app or at 80000hours.org/intro.

Or, if you’d rather listen on this feed, here are the ten episodes we recommend you listen to first:

#21 – Holden Karnofsky on the world's most intellectual foundation and how philanthropy can have maximum impact by taking big risks

#6 – Toby Ord on why the long-term future of humanity matters more than anything else and what we should do about it

#17 – Will MacAskill on why our descendants might view us as moral monsters

#39 – Spencer Greenberg on the scientific approach to updating your beliefs when you get new evidence

#44 – Paul Christiano on developing real solutions to the 'AI alignment problem'

#60 – What Professor Tetlock learned from 40 years studying how to predict the future

#46 – Hilary Greaves on moral cluelessness, population ethics and tackling global issues in academia

#71 – Benjamin Todd on the key ideas of 80,000 Hours

#50 – Dave Denkenberger on how we might feed all 8 billion people through a nuclear winter

80,000 Hours Team chat #3 – Koehler and Todd on the core idea of effective altruism and how to argue for it

10 Jan 2022#35 Classic episode - Tara Mac Aulay on the audacity to fix the world without asking permission01:23:34

Rebroadcast: this episode was originally released in June 2018.

How broken is the world? How inefficient is a typical organisation? Looking at Tara Mac Aulay’s life, the answer seems to be ‘very’.

At 15 she took her first job: an entry-level position at a chain restaurant. Rather than accept her place, Tara took it on herself to massively improve the store’s shambolic staff scheduling and inventory management. After cutting staff costs 30% she was quickly promoted, and at 16 sent in to overhaul dozens of failing stores in a final effort to save them from closure.

That’s just the first in a startling series of personal stories that take us to a hospital drug dispensary where pharmacists are wasting a third of their time, a chemotherapy ward in Bhutan that’s killing its patients rather than saving lives, and eventually the Centre for Effective Altruism, where Tara becomes CEO and leads it through start-up accelerator Y Combinator.

In this episode Tara shows how the ability to do practical things, avoid major screw-ups, and design systems that scale, is both rare and precious.

Links to learn more, summary and full transcript.

People with an operations mindset spot failures others can't see and fix them before they bring an organisation down. This kind of resourcefulness can transform the world by making possible critical projects that would otherwise fall flat on their face.

But as Tara's experience shows they need to figure out what actually motivates the authorities who often try to block their reforms.

We explore how people with this skillset can do as much good as possible, what 80,000 Hours got wrong in our article 'Why operations management is one of the biggest bottlenecks in effective altruism’, as well as:

• Tara’s biggest mistakes and how to deal with the delicate politics of organizational reform.
• How a student can save a hospital millions with a simple spreadsheet model.
• The sociology of Bhutan and how medicine in the developing world often makes things worse rather than better.
• What most people misunderstand about operations, and how to tell if you have what it takes.
• And finally, operations jobs people should consider applying for.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.

02 Mar 2020#71 - Benjamin Todd on the key ideas of 80,000 Hours02:57:29
The 80,000 Hours Podcast is about “the world’s most pressing problems and how you can use your career to solve them”, and in this episode we tackle that question in the most direct way possible.

Last year we published a summary of all our key ideas, which links to many of our other articles, and which we are aiming to keep updated as our opinions shift.

All of us added something to it, but the single biggest contributor was our CEO and today's guest, Ben Todd, who founded 80,000 Hours along with Will MacAskill back in 2012.

This key ideas page is the most read on the site. By itself it can teach you a large fraction of the most important things we've discovered since we started investigating high impact careers.

• Links to learn more, summary and full transcript.

But it's perhaps more accurate to think of it as a mini-book, as it weighs in at over 20,000 words.

Fortunately it's designed to be highly modular and it's easy to work through it over multiple sessions, scanning over the articles it links to on each topic.

Perhaps though, you'd prefer to absorb our most essential ideas in conversation form, in which case this episode is for you.

If you want to have a big impact with your career, and you say you're only going to read one article from us, we recommend you read our key ideas page.

And likewise, if you're only going to listen to one of our podcast episodes, it should be this one. We have fun and set a strong pace, running through:

• Common misunderstandings of our advice
• A high level overview of what 80,000 Hours generally recommends
• Our key moral positions
• What are the most pressing problems to work on and why?
• Which careers effectively contribute to solving those problems?
• Central aspects of career strategy like how to weigh up career capital, personal fit, and exploration
• As well as plenty more.

One benefit of this podcast over the article is that we can more easily communicate uncertainty, and dive into the things we're least sure about, or didn’t yet cover within the article.

Note though that what’s in the article is more precisely stated, our advice is going to keep shifting, and we're aiming to keep the key ideas page current as our thinking evolves over time. This episode was recorded in November 2019, so if you notice a conflict between the page and this episode in the future, go with the page!

Get the episode by subscribing: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

05 Apr 2022#126 – Bryan Caplan on whether lazy parenting is OK, what really helps workers, and betting on beliefs02:15:16

Everybody knows that good parenting has a big impact on how kids turn out. Except that maybe they don't, because it doesn't.

Incredible though it might seem, according to today's guest — economist Bryan Caplan, the author of Selfish Reasons To Have More Kids, The Myth of the Rational Voter, and The Case Against Education — the best evidence we have on the question suggests that, within reason, what parents do has little impact on how their children's lives play out once they're adults.

Links to learn more, summary and full transcript.

Of course, kids do resemble their parents. But just as we probably can't say it was attentive parenting that gave me my mother's nose, perhaps we can't say it was attentive parenting that made me succeed at school. Both the social environment we grow up in and the genes we receive from our parents influence the person we become, and looking at a typical family we can't really distinguish the impact of one from the other.

But nature does offer us up a random experiment that can let us tell the difference: identical twins share all their genes, while fraternal twins only share half their genes. If you look at how much more similar outcomes are for identical twins than fraternal twins, you see the effect of sharing 100% of your genetic material, rather than the usual 50%. Double that amount, and you've got the full effect of genetic inheritance. Whatever unexplained variation remains is still up for grabs — and might be down to different experiences in the home, outside the home, or just random noise.
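
For readers who want the logic above in symbols, the standard back-of-the-envelope version is Falconer's formula from twin research (a simplified sketch of the method, not necessarily the exact estimator used in the studies Bryan draws on):

```latex
% r_MZ and r_DZ are the correlations in some adult outcome for identical
% (monozygotic) and fraternal (dizygotic) twin pairs respectively.
h^2 \approx 2\,(r_{MZ} - r_{DZ})   % genes: double the extra similarity of identical twins
c^2 \approx 2\,r_{DZ} - r_{MZ}     % shared (family) environment
e^2 \approx 1 - r_{MZ}             % non-shared environment and noise
```

For example, with made-up numbers: if identical twins correlate at 0.6 on some outcome and fraternal twins at 0.35, this sketch attributes roughly 50% of the variation to genes, 10% to the shared home environment, and 40% to everything else.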

The crazy thing about this research is that it says for a range of adult outcomes (e.g. years of education, income, health, personality, and happiness), it's differences in the genes children inherit rather than differences in parental behaviour that are doing most of the work. Other research suggests that differences in “out-of-home environment” take second place. Parenting style does matter for something, but it comes in a clear third.

Bryan is quick to point out that there are several factors that help reconcile these findings with conventional wisdom about the importance of parenting.

First, for some adult outcomes, parenting was a big deal (i.e. the quality of the parent/child relationship) or at least a moderate deal (i.e. drug use, criminality, and religious/political identity).

Second, parents can and do influence you quite a lot — so long as you're young and still living with them. But as soon as you move out, the influence of their behaviour begins to wane and eventually becomes hard to spot.

Third, this research only studies variation in parenting behaviour that was common among the families studied.

And fourth, research on international adoptions shows they can cause massive improvements in health, income and other outcomes.

But the findings are still remarkable, and imply many hyper-diligent parents could live much less stressful lives without doing their kids any harm at all. In this extensive interview Rob interrogates whether Bryan can really be right, or whether the research he's drawing on has taken a wrong turn somewhere.

And that's just one topic we cover, some of the others being:

• People’s biggest misconceptions about the labour market
• Arguments against open borders
• Whether most people actually vote based on self-interest
• Whether philosophy should stick to common sense or depart from it radically
• Personal autonomy vs. the possible benefits of government regulation
• Bryan's perfect betting record
• And much more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:01:15)
  • Labor Econ Versus the World (00:04:55)
  • Open Borders (00:20:30)
  • How much parenting matters (00:35:49)
  • Self-Interested Voter Hypothesis (01:00:31)
  • Why Bryan and Rob disagree so much on philosophy (01:12:04)
  • Libertarian free will (01:25:10)
  • The effective altruism community (01:38:46)
  • Bryan’s betting record (01:48:19)
  • Individual autonomy vs. welfare (01:59:06)
  • Arrogant hedgehogs (02:10:43)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

07 Mar 2020#72 - Toby Ord on the precipice and humanity's potential futures03:14:17
This week Oxford academic and 80,000 Hours trustee Dr Toby Ord released his new book The Precipice: Existential Risk and the Future of Humanity. It's about how our long-term future could be better than almost anyone believes, but also how humanity's recklessness is putting that future at grave risk — in Toby's reckoning, a 1 in 6 chance of being extinguished this century.

I loved the book and learned a great deal from it (buy it here, US and audiobook release March 24). While preparing for this interview I copied out 87 facts that were surprising, shocking or important. Here's a sample of 16:

1. The probability of a supervolcano causing a civilisation-threatening catastrophe in the next century is estimated to be 100x that of asteroids and comets combined.

2. The Biological Weapons Convention — a global agreement to protect humanity — has just four employees, and a smaller budget than an average McDonald’s.

3. In 2008 a 'gamma ray burst' reached Earth from another galaxy, 10 billion light years away. It was still bright enough to be visible to the naked eye. We aren't sure what generates gamma ray bursts but one cause may be two neutron stars colliding.

4. Before detonating the first nuclear weapon, scientists in the Manhattan Project feared that the high temperatures in the core, unprecedented for Earth, might be able to ignite the hydrogen in water. This would set off a self-sustaining reaction that would burn off the Earth’s oceans, killing all life above ground. They thought this was unlikely, but many atomic scientists feared their calculations could be missing something. As far as we know, the US President was never informed of this possibility, but similar risks were one reason Hitler stopped…

N.B. I've had to cut off this list as we only get 4,000 characters in these show notes, so:

Click here to read the whole list, see a full transcript, and find related links.

And if you like the list, you can get a free copy of the introduction and first chapter by joining our mailing list.

While I've been studying these topics for years and known Toby for the last eight, a remarkable amount of what's in The Precipice was new to me.

Of course the book isn't a series of isolated amusing facts, but rather a systematic review of the many ways humanity's future could go better or worse, how we might know about them, and what might be done to improve the odds.

And that's how we approach this conversation, first talking about each of the main threats, then how we can learn about things that have never happened before, then finishing with what a great future for humanity might look like and how it might be achieved.

Toby is a famously good explainer of complex issues — a bit of a modern Carl Sagan character — so as expected this was a great interview, and one which Arden Koehler and I barely even had to work for.

Some topics Arden and I ask about include:

• What Toby changed his mind about while writing the book
• Are people exaggerating when they say that climate change could actually end civilization?
• What can we learn from historical pandemics?
• Toby’s estimate of unaligned AI causing human extinction in the next century
• Is this century the most important time in human history, or is that a narcissistic delusion?
• Competing visions for humanity's ideal future
• And more.

Get this episode by subscribing: type '80,000 Hours' into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

28 Oct 2022#139 – Alan Hájek on puzzles and paradoxes in probability and expected value03:38:26

A casino offers you a game. A coin will be tossed. If it comes up heads on the first flip you win $2. If it comes up on the second flip you win $4. If it comes up on the third you win $8, the fourth you win $16, and so on. How much should you be willing to pay to play?

 The standard way of analysing gambling problems, ‘expected value’ — in which you multiply probabilities by the value of each outcome and then sum them up — says your expected earnings are infinite. You have a 50% chance of winning $2, for '0.5 * $2 = $1' in expected earnings. A 25% chance of winning $4, for '0.25 * $4 = $1' in expected earnings, and on and on. A never-ending series of $1s added together comes to infinity. And that's despite the fact that you know with certainty you can only ever win a finite amount!
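
Writing the same calculation out as a sum makes the divergence plain (this just formalises the reasoning described above):

```latex
\mathbb{E}[\text{payout}]
  = \sum_{n=1}^{\infty} \underbrace{\left(\tfrac{1}{2}\right)^{n}}_{\substack{\text{prob. first heads}\\ \text{is on flip } n}} \times \underbrace{\$2^{\,n}}_{\text{payout}}
  = \sum_{n=1}^{\infty} \$1
  = \infty
```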

 Today's guest — philosopher Alan Hájek of the Australian National University — thinks of much of philosophy as “the demolition of common sense followed by damage control” and is an expert on paradoxes related to probability and decision-making rules like “maximise expected value.”

 Links to learn more, summary and full transcript.

The problem described above, known as the St. Petersburg paradox, has been a staple of the field since the 18th century, with many proposed solutions. In the interview, Alan explains how very natural attempts to resolve the paradox — such as factoring in the low likelihood that the casino can pay out very large sums, or the fact that money becomes less and less valuable the more of it you already have — fail to work as hoped.

We might reject the setup as a hypothetical that could never exist in the real world, and therefore dismiss it as a mere intellectual curiosity. But Alan doesn't find that objection persuasive. If expected value fails in extreme cases, that should make us worry that something could be rotten at the heart of the standard procedure we use to make decisions in government, business, and nonprofits.

These issues regularly show up in 80,000 Hours' efforts to try to find the best ways to improve the world, as the best approach will arguably involve long-shot attempts to do very large amounts of good.

Consider which is better: saving one life for sure, or three lives with 50% probability? Expected value says the second, which will probably strike you as reasonable enough. But what if we repeat this process and evaluate the chance to save nine lives with 25% probability, or 27 lives with 12.5% probability, or, after 17 more iterations, 3,486,784,401 lives with a 0.0001% chance? Expected value says this final offer is better than the others: 1,000 times better, in fact.

Ultimately Alan leans towards the view that our best choice is to “bite the bullet” and stick with expected value, even with its sometimes counterintuitive implications. Where we want to do damage control, we're better off looking for ways our probability estimates might be wrong.

In today's conversation, Alan and Rob explore these issues and many others:

• Simple rules of thumb for having philosophical insights
• A key flaw that hid in Pascal's wager from the very beginning
• Whether we have to simply ignore infinities because they mess everything up
• What fundamentally is 'probability'?
• Some of the many reasons 'frequentism' doesn't work as an account of probability
• Why the standard account of counterfactuals in philosophy is deeply flawed
• And why counterfactuals present a fatal problem for one sort of consequentialism

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:01:48)
  • Philosophical methodology (00:02:54)
  • Theories of probability (00:37:17)
  • Everyday Bayesianism (00:46:01)
  • Frequentism (01:04:56)
  • Ranges of probabilities (01:16:23)
  • Implications for how to live (01:21:24)
  • Expected value (01:26:58)
  • The St. Petersburg paradox (01:31:40)
  • Pascal's wager (01:49:44)
  • Using expected value in everyday life (02:03:53)
  • Counterfactuals (02:16:38)
  • Most counterfactuals are false (02:52:25)
  • Relevance to objective consequentialism (03:09:47)
  • Alan’s best conference story (03:33:37)


Producer: Keiran Harris
Audio mastering: Ben Cordell and Ryan Kessler
Transcriptions: Katy Moore

02 Oct 2023#164 – Kevin Esvelt on cults that want to kill everyone, stealth vs wildfire pandemics, and how he felt inventing gene drives03:03:42

"Imagine a fast-spreading respiratory HIV. It sweeps around the world. Almost nobody has symptoms. Nobody notices until years later, when the first people who are infected begin to succumb. They might die, something else debilitating might happen to them, but by that point, just about everyone on the planet would have been infected already.

And then it would be a race. Can we come up with some way of defusing the thing? Can we come up with the equivalent of HIV antiretrovirals before it's too late?" — Kevin Esvelt

In today’s episode, host Luisa Rodriguez interviews Kevin Esvelt — a biologist at the MIT Media Lab and the inventor of CRISPR-based gene drive — about the threat posed by engineered bioweapons.

Links to learn more, summary and full transcript.

They cover:

  • Why it makes sense to focus on deliberately released pandemics
  • Case studies of people who actually wanted to kill billions of humans
  • How many people have the technical ability to produce dangerous viruses
  • The different threats of stealth and wildfire pandemics that could crash civilisation
  • The potential for AI models to increase access to dangerous pathogens
  • Why scientists try to identify new pandemic-capable pathogens, and the case against that research
  • Technological solutions, including UV lights and advanced PPE
  • Using CRISPR-based gene drive to fight diseases and reduce animal suffering
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

13 Dec 2022#141 – Richard Ngo on large language models, OpenAI, and striving to make the future go well02:44:19
Large language models like GPT-3, and now ChatGPT, are neural networks trained on a large fraction of all text available on the internet to do one thing: predict the next word in a passage. This simple technique has led to something extraordinary — black boxes able to write TV scripts, explain jokes, produce satirical poetry, answer common factual questions, argue sensibly for political positions, and more. Every month their capabilities grow.

But do they really 'understand' what they're saying, or do they just give the illusion of understanding?

Today's guest, Richard Ngo, thinks that in the most important sense they understand many things. Richard is a researcher at OpenAI — the company that created ChatGPT — who works to foresee where AI advances are going and develop strategies that will keep these models from 'acting out' as they become more powerful, are deployed and ultimately given power in society.

Links to learn more, summary and full transcript.

One way to think about 'understanding' is as a subjective experience. Whether it feels like something to be a large language model is an important question, but one we currently have no way to answer.

However, as Richard explains, another way to think about 'understanding' is as a functional matter. If you really understand an idea you're able to use it to reason and draw inferences in new situations. And that kind of understanding is observable and testable.

Richard argues that language models are developing sophisticated representations of the world which can be manipulated to draw sensible conclusions — maybe not so different from what happens in the human mind. And experiments have found that, as models get more parameters and are trained on more data, these types of capabilities consistently improve.

We might feel reluctant to say a computer understands something the way that we do. But if it walks like a duck and it quacks like a duck, we should consider that maybe we have a duck, or at least something sufficiently close to a duck that it doesn't matter.

In today's conversation we discuss the above, as well as:

• Could speeding up AI development be a bad thing?
• The balance between excitement and fear when it comes to AI advances
• Why OpenAI focuses its efforts where it does
• Common misconceptions about machine learning
• How many computer chips it might require to be able to do most of the things humans do
• How Richard understands the 'alignment problem' differently than other people
• Why 'situational awareness' may be a key concept for understanding the behaviour of AI models
• What work to positively shape the development of AI Richard is and isn't excited about
• The AGI Safety Fundamentals course that Richard developed to help people learn more about this field

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Milo McGuire and Ben Cordell
Transcriptions: Katy Moore

23 Apr 2019#57 – Tom Kalil on how to do the most good in government02:50:16

You’re 29 years old, and you’ve just been given a job in the White House. How do you quickly figure out how the US Executive Branch behemoth actually works, so that you can have as much impact as possible - before you quit or get kicked out?

That was the challenge put in front of Tom Kalil in 1993.

He had enough success to last a full 16 years inside the Clinton and Obama administrations, working to foster the development of the internet, then nanotechnology, and then cutting-edge brain modelling, among other things.

But not everyone figures out how to move the needle. In today's interview, Tom shares his experience with how to increase your chances of getting an influential role in government, and how to make the most of the opportunity if you get in.

Links to learn more, summary and full transcript.

Interested in US AI policy careers? Apply for one-on-one career advice here.

Vacancies at the Center for Security and Emerging Technology.

Our high-impact job board, which features other related opportunities.

He believes that Congressional gridlock leads people to greatly underestimate how much the Executive Branch can and does do on its own every day. Decisions by individuals change how billions of dollars are spent; regulations are enforced, and then suddenly they aren't; and a single sentence in the State of the Union can get civil servants to pay attention to a topic that would otherwise go ignored.

Over years at the White House Office of Science and Technology Policy, 'Team Kalil' built up a white board of principles. For example, 'the schedule is your friend': setting a meeting date with the President can force people to finish something, where they otherwise might procrastinate.

Or 'talk to who owns the paper'. People would wonder how Tom could get so many lines into the President's speeches. The answer was "figure out who's writing the speech, find them with the document, and tell them to add the line." Obvious, but not something most were doing.

Not everything is a precise operation though. Tom also tells us the story of NetDay, a project that was put together at the last minute because the President incorrectly believed it was already organised – and decided he was going to announce it in person.

In today's episode we get down to nuts & bolts, and discuss:
• How did Tom spin work on a primary campaign into a job in the next White House?
• Why does Tom think hiring is the most important work he did, and how did he decide who to bring onto the team?
• How do you get people to do things when you don't have formal power over them?
• What roles in the US government are most likely to help with the long-term future, or reducing existential risks?
• Is it possible, or even desirable, to get the general public interested in abstract, long-term policy ideas?
• What are 'policy entrepreneurs' and why do they matter?
• What is the role for prizes in promoting science and technology? What are other promising policy ideas?
• Why you can get more done by not taking credit.
• What can the White House do if an agency isn't doing what it wants?
• How can the effective altruism community improve the maturity of our policy recommendations?
• How much can talented individuals accomplish during a short-term stay in government?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.

28 Apr 2022#128 – Chris Blattman on the five reasons wars happen02:46:51

In nature, animals roar and bare their teeth to intimidate adversaries — but one side usually backs down, and real fights are rare. The wisdom of evolution is that the risk of violence is just too great.

Which might make one wonder: if war is so destructive, why does it happen? The question may sound naïve, but in fact it represents a deep puzzle. If a war will cost trillions and kill tens of thousands, it should be easy for either side to make a peace offer that both they and their opponents prefer to actually fighting it out.

The conundrum of how humans can engage in incredibly costly and protracted conflicts has occupied academics across the social sciences for years. In today's episode, we speak with economist Chris Blattman about his new book, Why We Fight: The Roots of War and the Paths to Peace, which summarises what social scientists think they've learned.

Links to learn more, summary and full transcript.

Chris's first point is that while organised violence may feel like it's all around us, it's actually very rare in humans, just as it is with other animals. Across the world, hundreds of groups dislike one another — but knowing the cost of war, they prefer to simply loathe one another in peace.

In order to understand what’s wrong with a sick patient, a doctor needs to know what a healthy person looks like. And to understand war, social scientists need to study all the wars that could have happened but didn't — so they can see what a healthy society looks like and what's missing in the places where war does take hold.

Chris argues that social scientists have generated five cogent models of when war can be 'rational' for both sides of a conflict:

1. Unchecked interests — such as national leaders who bear few of the costs of launching a war.
2. Intangible incentives — such as an intrinsic desire for revenge.
3. Uncertainty — such as both sides underestimating each other's resolve to fight.
4. Commitment problems — such as the inability to credibly promise not to use your growing military might to attack others in future.
5. Misperceptions — such as our inability to see the world through other people's eyes.

In today's interview, we walk through how each of the five explanations work and what specific wars or actions they might explain.

In the process, Chris outlines how many of the most popular explanations for interstate war are wildly overused (e.g. leaders who are unhinged or male) or misguided from the outset (e.g. resource scarcity).

The interview also covers:

• What Chris and Rob got wrong about the war in Ukraine
• What causes might not fit into these five categories
• The role of people's choice to escalate or deescalate a conflict
• How great power wars or nuclear wars are different, and what can be done to prevent them
• How much representative government helps to prevent war
• And much more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:01:43)
  • What people get wrong about violence (00:04:40)
  • Medellín gangs (00:11:48)
  • Overrated causes of violence (00:23:53)
  • Cause of war #1: Unchecked interests (00:36:40)
  • Cause of war #2: Intangible incentives (00:41:40)
  • Cause of war #3: Uncertainty (00:53:04)
  • Cause of war #4: Commitment problems (01:02:24)
  • Cause of war #5: Misperceptions (01:12:18)
  • Weaknesses of the model (01:26:08)
  • Dancing on the edge of a cliff (01:29:06)
  • Confusion around escalation (01:35:26)
  • Applying the model to the war between Russia and Ukraine (01:42:34)
  • Great power wars (02:01:46)
  • Preventing nuclear war (02:18:57)
  • Why undirected approaches won't work (02:22:51)
  • Democratic peace theory (02:31:10)
  • Exchanging hostages (02:37:21)
  • What you can actually do to help (02:41:25)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

22 Aug 2024#197 – Nick Joseph on whether Anthropic's AI safety policy is up to the task02:29:26

The three biggest AI companies — Anthropic, OpenAI, and DeepMind — have now all released policies designed to make their AI models less likely to go rogue or cause catastrophic damage as they approach, and eventually exceed, human capabilities. Are they good enough?

That’s what host Rob Wiblin tries to hash out in this interview (recorded May 30) with Nick Joseph — one of the original cofounders of Anthropic, its current head of training, and a big fan of Anthropic’s “responsible scaling policy” (or “RSP”). Anthropic is the most safety focused of the AI companies, known for a culture that treats the risks of its work as deadly serious.

Links to learn more, highlights, video, and full transcript.

As Nick explains, these scaling policies commit companies to dig into what new dangerous things a model can do — after it’s trained, but before it’s in wide use. The companies then promise to put in place safeguards they think are sufficient to tackle those capabilities before availability is extended further. For instance, if a model could significantly help design a deadly bioweapon, then its weights need to be properly secured so they can’t be stolen by terrorists interested in using it that way.

As capabilities grow further — for example, if testing shows that a model could exfiltrate itself and spread autonomously in the wild — then new measures would need to be put in place to make that impossible, or demonstrate that such a goal can never arise.

Nick points out what he sees as the biggest virtues of the RSP approach, and then Rob pushes him on some of the best objections he’s found to RSPs being up to the task of keeping AI safe and beneficial. The two also discuss whether it's essential to eventually hand over operation of responsible scaling policies to external auditors or regulatory bodies, if those policies are going to be able to hold up against the intense commercial pressures that might end up arrayed against them.

In addition to all of that, Nick and Rob talk about:

  • What Nick thinks are the current bottlenecks in AI progress: people and time (rather than data or compute).
  • What it’s like working in AI safety research at the leading edge, and whether pushing forward capabilities (even in the name of safety) is a good idea.
  • What it’s like working at Anthropic, and how to get the skills needed to help with the safe development of AI.

And as a reminder, if you want to let us know your reaction to this interview, or send any other feedback, our inbox is always open at podcast@80000hours.org.

Chapters:

  • Cold open (00:00:00)
  • Rob’s intro (00:01:00)
  • The interview begins (00:03:44)
  • Scaling laws (00:04:12)
  • Bottlenecks to further progress in making AIs helpful (00:08:36)
  • Anthropic’s responsible scaling policies (00:14:21)
  • Pros and cons of the RSP approach for AI safety (00:34:09)
  • Alternatives to RSPs (00:46:44)
  • Is an internal audit really the best approach? (00:51:56)
  • Making promises about things that are currently technically impossible (01:07:54)
  • Nick’s biggest reservations about the RSP approach (01:16:05)
  • Communicating “acceptable” risk (01:19:27)
  • Should Anthropic’s RSP have wider safety buffers? (01:26:13)
  • Other impacts on society and future work on RSPs (01:34:01)
  • Working at Anthropic (01:36:28)
  • Engineering vs research (01:41:04)
  • AI safety roles at Anthropic (01:48:31)
  • Should concerned people be willing to take capabilities roles? (01:58:20)
  • Recent safety work at Anthropic (02:10:05)
  • Anthropic culture (02:14:35)
  • Overrated and underrated AI applications (02:22:06)
  • Rob’s outro (02:26:36)

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video engineering: Simon Monsour
Transcriptions: Katy Moore

07 Mar 2025Emergency pod: Judge plants a legal time bomb under OpenAI (with Rose Chan Loui)00:36:50

When OpenAI announced plans to convert from nonprofit to for-profit control last October, it likely didn’t anticipate the legal labyrinth it now faces. A recent court order in Elon Musk’s lawsuit against the company suggests OpenAI’s restructuring faces serious legal threats, which will complicate its efforts to raise tens of billions in investment.

As nonprofit legal expert Rose Chan Loui explains, the court order set up multiple pathways for OpenAI’s conversion to be challenged. Though Judge Yvonne Gonzalez Rogers denied Musk’s request to block the conversion before a trial, she expedited proceedings to the fall so the case could be heard before the conversion is likely to go ahead. (See Rob’s brief summary of developments in the case.)

And if Musk’s donations to OpenAI are enough to give him the right to bring a case, Rogers sounded very sympathetic to his objections to the OpenAI foundation selling the company, benefiting the founders who forswore “any intent to use OpenAI as a vehicle to enrich themselves.”

But that’s just one of multiple threats. The attorneys general (AGs) in California and Delaware both have standing to object to the conversion on the grounds that it is contrary to the foundation’s charitable purpose and therefore wrongs the public — which was promised all the charitable assets would be used to develop AI that benefits all of humanity, not to win a commercial race. Some, including Rose, suspect the court order was written as a signal to those AGs to take action.

And, as she explains, if the AGs remain silent, the court itself, seeing that the public interest isn’t being represented, could appoint a “special interest party” to take on the case in their place.

This places the OpenAI foundation board in a bind: proceeding with the restructuring despite this legal cloud could expose them to the risk of being sued for a gross breach of their fiduciary duty to the public. The board is made up of respectable people who didn’t sign up for that.

And of course it would cause chaos for the company if all of OpenAI’s fundraising and governance plans were brought to a screeching halt by a federal court judgment landing at the eleventh hour.

Host Rob Wiblin and Rose Chan Loui discuss all of the above as well as what justification the OpenAI foundation could offer for giving up control of the company despite its charitable purpose, and how the board might adjust their plans to make the for-profit switch more legally palatable.

This episode was originally recorded on March 6, 2025.

Chapters:

  • Intro (00:00:11)
  • More juicy OpenAI news (00:00:46)
  • The court order (00:02:11)
  • Elon has two hurdles to jump (00:05:17)
  • The judge's sympathy (00:08:00)
  • OpenAI's defence (00:11:45)
  • Alternative plans for OpenAI (00:13:41)
  • Should the foundation give up control? (00:16:38)
  • Alternative plaintiffs to Musk (00:21:13)
  • The 'special interest party' option (00:25:32)
  • How might this play out in the fall? (00:27:52)
  • The nonprofit board is in a bit of a bind (00:29:20)
  • Is it in the public interest to race? (00:32:23)
  • Could the board be personally negligent? (00:34:06)

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions: Katy Moore

12 May 2023#151 – Ajeya Cotra on accidentally teaching AI models to deceive us02:49:40

Imagine you are an orphaned eight-year-old whose parents left you a $1 trillion company, and no trusted adult to serve as your guide to the world. You have to hire a smart adult to run that company, guide your life the way that a parent would, and administer your vast wealth. You have to hire that adult based on a work trial or interview you come up with. You don't get to see any resumes or do reference checks. And because you're so rich, tonnes of people apply for the job — for all sorts of reasons.

Today's guest Ajeya Cotra — senior research analyst at Open Philanthropy — argues that this peculiar setup resembles the situation humanity finds itself in when training very general and very capable AI models using current deep learning methods.

Links to learn more, summary and full transcript.

As she explains, such an eight-year-old faces a challenging problem. In the candidate pool there are likely some truly nice people, who sincerely want to help and make decisions that are in your interest. But there are probably other characters too — like people who will pretend to care about you while you're monitoring them, but intend to use the job to enrich themselves as soon as they think they can get away with it.

Like a child trying to judge adults, at some point humans will be required to judge the trustworthiness and reliability of machine learning models that are as goal-oriented as people, and greatly outclass them in knowledge, experience, breadth, and speed. Tricky!

Can't we rely on how well models have performed at tasks during training to guide us? Ajeya worries that it won't work. The trouble is that three different sorts of models will all produce the same output during training, but could behave very differently once deployed in a setting that allows their true colours to come through. She describes three such motivational archetypes:

  • Saints — models that care about doing what we really want
  • Sycophants — models that just want us to say they've done a good job, even if they get that praise by taking actions they know we wouldn't want them to
  • Schemers — models that don't care about us or our interests at all, who are just pleasing us so long as that serves their own agenda

And according to Ajeya, there are also ways we could end up actively selecting for motivations that we don't want.

In today's interview, Ajeya and Rob discuss the above, as well as:

  • How to predict the motivations a neural network will develop through training
  • Whether AIs being trained will functionally understand that they're AIs being trained, the same way we think we understand that we're humans living on planet Earth
  • Stories of AI misalignment that Ajeya doesn't buy into
  • Analogies for AI, from octopuses to aliens to can openers
  • Why it's smarter to have separate planning AIs and doing AIs
  • The benefits of only following through on AI-generated plans that make sense to human beings
  • What approaches for fixing alignment problems Ajeya is most excited about, and which she thinks are overrated
  • How one might demo actually scary AI failure mechanisms

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris

Audio mastering: Ryan Kessler and Ben Cordell

Transcriptions: Katy Moore

15 Apr 2019#56 - Persis Eskander on wild animal welfare and what, if anything, to do about it02:57:58
Elephants in chains at travelling circuses; pregnant pigs trapped in coffin-sized crates at factory farms; deer living in the wild. We should welcome the last as a pleasant break from the horror, right?

Maybe, but maybe not. While we tend to have a romanticised view of nature, life in the wild includes a range of extremely negative experiences.

Many animals are hunted by predators, and constantly have to remain vigilant about the risk of being killed, and perhaps experiencing the horror of being eaten alive. Resource competition often leads to chronic hunger or starvation. Their diseases and injuries are never treated. In winter animals freeze to death; in droughts they die of heat or thirst.

There are fewer than 20 people in the world dedicating their lives to researching these problems.

But according to Persis Eskander, researcher at the Open Philanthropy Project, if we sum up the negative experiences of all wild animals, their sheer number could make the scale of the problem larger than most other near-term concerns.

Links to learn more, summary and full transcript.

Persis urges us to recognise that nature isn’t inherently good or bad, but rather the result of an amoral evolutionary process. For those that can't survive the brutal indifference of their environment, life is often a series of bad experiences, followed by an even worse death.

But should we actually intervene? How do we know what animals are sentient? How often do animals feel hunger, cold, fear, happiness, satisfaction, boredom, and intense agony? Are there long-term technologies that could eventually allow us to massively improve wild animal welfare?

For most of these big questions, the answer is: we don’t know. And Persis thinks we're far away from knowing enough to start interfering with ecosystems. But that's all the more reason to start looking at these questions.

There are some concrete steps we could take today, like improving the way wild caught fish are slaughtered. Fish might lack the charisma of a lion or the intelligence of a pig, but if they have the capacity to suffer — and evidence suggests that they do — we should be thinking of ways to kill them painlessly rather than allowing them to suffocate to death over hours.

In today’s interview we explore wild animal welfare as a new field of research, and discuss:

• Do we have a moral duty towards wild animals or not?
• How should we measure the number of wild animals?
• What are some key activities that generate a lot of suffering or pleasure for wild animals that people might not fully appreciate?
• Is there a danger in imagining how we as humans would feel if we were put into their situation?
• Should we eliminate parasites and predators?
• How important are insects?
• How strongly should we focus on just avoiding humans going in and making things worse?
• How does this compare to work on farmed animal suffering?
• The most compelling arguments for humanity not dedicating resources to wild animal welfare
• Is there much of a case for the idea that this work could improve the very long-term future of humanity?

Rob is then joined by two of his colleagues — Niel Bowerman and Michelle Hutchinson — to quickly discuss:

• The importance of figuring out your values
• Chemistry, psychology, and other different paths towards working on wild animal welfare
• How to break into new fields

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.

01 Sep 2023#162 – Mustafa Suleyman on getting Washington and Silicon Valley to tame AI00:59:34

Mustafa Suleyman was part of the trio that founded DeepMind, and his new AI project is building one of the world's largest supercomputers to train a large language model on 10–100x the compute used to train ChatGPT.

But far from the stereotype of the incorrigibly optimistic tech founder, Mustafa is deeply worried about the future, for reasons he lays out in his new book The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma (coauthored with Michael Bhaskar). The future could be really good, but only if we grab the bull by the horns and solve the new problems technology is throwing at us.

Links to learn more, summary and full transcript.

On Mustafa's telling, AI and biotechnology will soon be a huge aid to criminals and terrorists, empowering small groups to cause harm on previously unimaginable scales. Democratic countries have learned to walk a 'narrow path' between chaos on the one hand and authoritarianism on the other, avoiding the downsides that come from both extreme openness and extreme closure. AI could easily destabilise that present equilibrium, throwing us off dangerously in either direction. And ultimately, within our lifetimes humans may not need to work to live any more -- or indeed, even have the option to do so.

And those are just three of the challenges confronting us. In Mustafa's view, 'misaligned' AI that goes rogue and pursues its own agenda won't be an issue for the next few years, and it isn't a problem for the current style of large language models. But he thinks that at some point -- in eight, ten, or twelve years -- it will become an entirely legitimate concern, and says that we need to be planning ahead.

In The Coming Wave, Mustafa lays out a 10-part agenda for 'containment' -- that is to say, for limiting the negative and unforeseen consequences of emerging technologies:

1. Developing an Apollo programme for technical AI safety
2. Instituting capability audits for AI models
3. Buying time by exploiting hardware choke points
4. Getting critics involved in directly engineering AI models
5. Getting AI labs to be guided by motives other than profit
6. Radically increasing governments’ understanding of AI and their capabilities to sensibly regulate it
7. Creating international treaties to prevent proliferation of the most dangerous AI capabilities
8. Building a self-critical culture in AI labs of openly accepting when the status quo isn't working
9. Creating a mass public movement that understands AI and can demand the necessary controls
10. Not relying too much on delay, but instead seeking to move into a new somewhat-stable equilibrium

As Mustafa put it, "AI is a technology with almost every use case imaginable" and that will demand that, in time, we rethink everything. 

Rob and Mustafa discuss the above, as well as:

  • Whether we should be open sourcing AI models
  • Whether Mustafa's policy views are consistent with his timelines for transformative AI
  • How people with very different views on these issues get along at AI labs
  • The failed efforts (so far) to get a wider range of people involved in these decisions
  • Whether it's dangerous for Mustafa's new company to be training far larger models than GPT-4
  • Whether we'll be blown away by AI progress over the next year
  • What mandatory regulations government should be imposing on AI labs right now
  • Appropriate priorities for the UK's upcoming AI safety summit

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore

18 Jan 2022#43 Classic episode - Daniel Ellsberg on the institutional insanity that maintains nuclear doomsday machines02:35:28
Rebroadcast: this episode was originally released in September 2018.

In Stanley Kubrick’s iconic film Dr. Strangelove, the American president is informed that the Soviet Union has created a secret deterrence system which will automatically wipe out humanity upon detection of a single nuclear explosion in Russia. With US bombs heading towards the USSR and unable to be recalled, Dr Strangelove points out that “the whole point of this Doomsday Machine is lost if you keep it a secret – why didn’t you tell the world, eh?” The Soviet ambassador replies that it was to be announced at the Party Congress the following Monday: “The Premier loves surprises”.

Daniel Ellsberg - leaker of the Pentagon Papers which helped end the Vietnam War and Nixon presidency - claims in his book The Doomsday Machine: Confessions of a Nuclear War Planner that Dr. Strangelove might as well be a documentary. After attending the film in Washington DC in 1964, he and a colleague wondered how so many details of their nuclear planning had leaked.

Links to learn more, summary and full transcript.

The USSR did in fact develop a doomsday machine, Dead Hand, which probably remains active today.

If the system can’t contact military leaders, it checks for signs of a nuclear strike, and if it detects them, automatically launches all remaining Soviet weapons at targets across the northern hemisphere.

As in the film, the Soviet Union long kept Dead Hand completely secret, eliminating any strategic benefit, and rendering it a pointless menace to humanity.

You might think the United States would have a more sensible nuclear launch policy. You’d be wrong.

As Ellsberg explains, based on first-hand experience as a nuclear war planner in the 50s, the notion that only the president is able to authorize the use of US nuclear weapons is a carefully cultivated myth.

The authority to launch nuclear weapons is delegated alarmingly far down the chain of command – significantly raising the chance that a lone wolf or communication breakdown could trigger a nuclear catastrophe.

The whole justification for this is to defend against a ‘decapitating attack’, where a first strike on Washington disables the ability of the US hierarchy to retaliate. In a moment of crisis, the Russians might view this as their best hope of survival.

Ostensibly, this delegation removes Russia’s temptation to attempt a decapitating attack – the US can retaliate even if its leadership is destroyed. This strategy only works, though, if you tell the enemy you’ve done it.

Instead, since the 50s this delegation has been one of the United States’ most closely guarded secrets, eliminating its strategic benefit, and rendering it another pointless menace to humanity.

Strategically, the setup is stupid. Ethically, it is monstrous.

So – how was such a system built? Why does it remain to this day? And how might we shrink our nuclear arsenals to the point they don’t risk the destruction of civilization?

Daniel explores these questions eloquently and urgently in his book. Today we cover:

• Why full disarmament today would be a mistake and the optimal number of nuclear weapons to hold
• How well are secrets kept in the government?
• What was the risk of the first atomic bomb test?
• Do we have a reliable estimate of the magnitude of a ‘nuclear winter’?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.

20 Oct 2021We just put up a new compilation of ten core episodes of the show00:03:02

We recently launched a new podcast feed that might be useful to you and people you know.

It's called Effective Altruism: Ten Global Problems, and it's a collection of ten top episodes of this show, selected to help listeners quickly get up to speed on ten pressing problems that the effective altruism community is working to solve.

It's a companion to our other compilation Effective Altruism: An Introduction, which explores the big picture debates within the community and how to set priorities in order to have the greatest impact.

These ten episodes cover:

  • The cheapest ways to improve education in the developing world
  • How dangerous is climate change and what are the most effective ways to reduce it?
  • Using new technologies to prevent another disastrous pandemic
  • Ways to simultaneously reduce both police misconduct and crime
  • All the major approaches being taken to end factory farming
  • How advances in artificial intelligence could go very right or very wrong
  • Other big threats to the future of humanity — such as a nuclear war — and how we can make our species wiser and more resilient
  • One problem few even recognise as a problem at all

The selection is ideal for people who are completely new to the effective altruist way of thinking, as well as those who are familiar with effective altruism but new to The 80,000 Hours Podcast.

If someone in your life wants to get an understanding of what 80,000 Hours or effective altruism are all about, and prefers to listen to things rather than read, this is a great resource to direct them to.

You can find it by searching for effective altruism in whatever podcasting app you use, or by going to 80000hours.org/ten.

We'd love to hear how you go listening to it yourself, or sharing it with others in your life. Get in touch by emailing podcast@80000hours.org.

08 Sep 2023#163 – Toby Ord on the perils of maximising the good that you do03:07:08

Effective altruism is associated with the slogan "do the most good." On one level, this has to be unobjectionable: What could be bad about helping people more and more?

But in today's interview, Toby Ord — moral philosopher at the University of Oxford and one of the founding figures of effective altruism — lays out three reasons to be cautious about the idea of maximising the good that you do. He suggests that rather than “doing the most good that we can,” perhaps we should be happy with a more modest and manageable goal: “doing most of the good that we can.”

Links to learn more, summary and full transcript.

Toby was inspired to revisit these ideas by the possibility that Sam Bankman-Fried, who stands accused of committing severe fraud as CEO of the cryptocurrency exchange FTX, was motivated to break the law by a desire to give away as much money as possible to worthy causes.

Toby's top reason not to fully maximise is the following: if the goal you're aiming at is subtly wrong or incomplete, then going all the way towards maximising it will usually cause you to start doing some very harmful things.

This result can be shown mathematically, but can also be made intuitive, and may explain why we feel instinctively wary of going “all-in” on any idea, or goal, or way of living — even something as benign as helping other people as much as possible.

Toby gives the example of someone pursuing a career as a professional swimmer. Initially, as our swimmer takes their training and performance more seriously, they adjust their diet, hire a better trainer, and pay more attention to their technique. While swimming is the main focus of their life, they feel fit and healthy and also enjoy other aspects of their life as well — family, friends, and personal projects.

But if they decide to increase their commitment further and really go all-in on their swimming career, holding nothing back, then this picture can radically change. Their effort was already substantial, so how can they shave those final few seconds off their racing time? The only remaining options are those which were so costly they were loath to consider them before.

To eke out those final gains — and go from 80% effort to 100% — our swimmer must sacrifice other hobbies, deprioritise their relationships, neglect their career, ignore food preferences, accept a higher risk of injury, and maybe even consider using steroids.

Now, if maximising one's speed at swimming really were the only goal they ought to be pursuing, there'd be no problem with this. But if it's the wrong goal, or only one of many things they should be aiming for, then the outcome is disastrous. In going from 80% to 100% effort, their swimming speed was only increased by a tiny amount, while everything else they were accomplishing dropped off a cliff.

The bottom line is simple: a dash of moderation makes you much more robust to uncertainty and error.

As Toby notes, this is similar to the observation that a sufficiently capable superintelligent AI, given any one goal, would ruin the world if it maximised it to the exclusion of everything else. And it follows a similar pattern to performance falling off a cliff when a statistical model is 'overfit' to its data.

In the full interview, Toby also explains the “moral trade” argument against pursuing narrow goals at the expense of everything else, and how consequentialism changes if you judge not just outcomes or acts, but everything according to its impacts on the world.

Toby and Rob also discuss:

  • The rise and fall of FTX and some of its impacts
  • What Toby hoped effective altruism would and wouldn't become when he helped to get it off the ground
  • What utilitarianism has going for it, and what's wrong with it in Toby's view
  • How to mathematically model the importance of personal integrity
  • Which AI labs Toby thinks have been acting more responsibly than others
  • How having a young child affects Toby’s feelings about AI risk
  • Whether infinities present a fundamental problem for any theory of ethics that aspires to be fully impartial
  • How Toby ended up being the source of the highest quality images of the Earth from space

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour
Transcriptions: Katy Moore

05 May 2021#98 – Christian Tarsney on future bias and a possible solution to moral fanaticism02:38:22

Imagine that you’re in the hospital for surgery. This kind of procedure is always safe, and always successful — but it can take anywhere from one to ten hours. You can’t be knocked out for the operation, but because it’s so painful — you’ll be given a drug that makes you forget the experience.

 You wake up, not remembering going to sleep. You ask the nurse if you’ve had the operation yet. They look at the foot of your bed, and see two different charts for two patients. They say “Well, you’re one of these two — but I’m not sure which one. One of them had an operation yesterday that lasted ten hours. The other is set to have a one-hour operation later today.”

 So it’s either true that you already suffered for ten hours, or true that you’re about to suffer for one hour.

 Which patient would you rather be?

 Most people would be relieved to find out they’d already had the operation. Normally we prefer less pain rather than more pain, but in this case, we prefer ten times more pain — just because the pain would be in the past rather than the future.

 Christian Tarsney, a philosopher at Oxford University's Global Priorities Institute, has written a couple of papers about this ‘future bias’ — that is, that people seem to care more about their future experiences than about their past experiences.

 Links to learn more, summary and full transcript.

That probably sounds perfectly normal to you. But do we actually have good reasons to prefer to have our positive experiences in the future, and our negative experiences in the past?

One of Christian’s experiments found that when you ask people to imagine hypothetical scenarios where they can affect their own past experiences, they care about those experiences more — which suggests that our inability to affect the past is one reason why we feel mostly indifferent to it.

But he points out that if that was the main reason, then we should also be indifferent to inevitable future experiences — if you know for sure that something bad is going to happen to you tomorrow, you shouldn't care about it. But if you found out you simply had to have a horribly painful operation tomorrow, it’s probably all you’d care about!

Another explanation for future bias is that we have this intuition that time is like a videotape, where the things that haven't played yet are still on the way.

If your future experiences really are ahead of you rather than behind you, that makes it rational to care more about the future than the past. But Christian says that, even though he shares this intuition, it’s actually very hard to make the case for time having a direction. It’s a live debate that’s playing out in the philosophy of time, as well as in physics.

For Christian, there are two big practical implications of these past, present, and future ethical comparison cases.

The first is for altruists: If we care about whether current people’s goals are realised, then maybe we should care about the realisation of people's past goals, including the goals of people who are now dead.

The second is more personal: If we can’t actually justify caring more about the future than the past, should we really worry about death any more than we worry about all the years we spent not existing before we were born?

Christian and Rob also cover several other big topics, including:

• A possible solution to moral fanaticism
• How much of humanity's resources we should spend on improving the long-term future
• How large the expected value of the continued existence of Earth-originating civilization might be
• How we should respond to uncertainty about the state of the world
• The state of global priorities research
• And much more

Chapters:
• Rob’s intro (00:00:00)
• The interview begins (00:01:20)
• Future bias (00:04:33)
• Philosophy of time (00:11:17)
• Money pumping (00:18:53)
• Time travel (00:21:22)
• Decision theory (00:24:36)
• Eternalism (00:32:32)
• Fanaticism (00:38:33)
• Stochastic dominance (00:52:11)
• Background uncertainty (00:56:27)
• Epistemic worries about longtermism (01:12:44)
• Best arguments against working on existential risk reduction (01:32:34)
• The scope of longtermism (01:41:12)
• The value of the future (01:50:09)
• Moral uncertainty (01:57:25)
• Christian’s personal priorities (02:17:27)
• The state of global priorities research (02:21:33)
• Competitive debating (02:28:34)
• The Berry paradox (02:35:00)

Producer: Keiran Harris.
 Audio mastering: Ryan Kessler.
 Transcriptions: Sofia Davis-Fogel.

03 Oct 2024#203 – Peter Godfrey-Smith on interfering with wild nature, accepting death, and the origin of complex civilisation01:25:09

"In the human case, it would be mistaken to give a kind of hour-by-hour accounting. You know, 'I had +4 level of experience for this hour, then I had -2 for the next hour, and then I had -1' — and you sort of sum to try to work out the total… And I came to think that something like that will be applicable in some of the animal cases as well… There are achievements, there are experiences, there are things that can be done in the face of difficulty that might be seen as having the same kind of redemptive role, as casting into a different light the difficult events that led up to it.

"The example I use is watching some birds successfully raising some young, fighting off a couple of rather aggressive parrots of another species that wanted to fight them, prevailing against difficult odds — and doing so in a way that was so wholly successful. It seemed to me that if you wanted to do an accounting of how things had gone for those birds, you would not want to do the naive thing of just counting up difficult and less-difficult hours. There’s something special about what’s achieved at the end of that process." —Peter Godfrey-Smith

In today’s episode, host Luisa Rodriguez speaks to Peter Godfrey-Smith — bestselling author and science philosopher — about his new book, Living on Earth: Forests, Corals, Consciousness, and the Making of the World.

Links to learn more, highlights, and full transcript.

They cover:

  • Why octopuses and dolphins haven’t developed complex civilisation despite their intelligence.
  • How the role of culture has been crucial in enabling human technological progress.
  • Why Peter thinks the evolutionary transition from sea to land was key to enabling human-like intelligence — and why we should expect to see that in extraterrestrial life too.
  • Whether Peter thinks wild animals’ lives are, on balance, good or bad, and when, if ever, we should intervene in their lives.
  • Whether we can and should avoid death by uploading human minds.
  • And plenty more.

Chapters:

  • Cold open (00:00:00)
  • Luisa's intro (00:00:57)
  • The interview begins (00:02:12)
  • Wild animal suffering and rewilding (00:04:09)
  • Thinking about death (00:32:50)
  • Uploads of ourselves (00:38:04)
  • Culture and how minds make things happen (00:54:05)
  • Challenges for water-based animals (01:01:37)
  • The importance of sea-to-land transitions in animal life (01:10:09)
  • Luisa's outro (01:23:43)

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

09 Mar 2022#122 – Michelle Hutchinson & Habiba Islam on balancing competing priorities and other themes from our 1-on-1 careers advising01:36:26

One of 80,000 Hours' main services is our free one-on-one careers advising, which we provide to around 1,000 people a year. Today we speak to two of our advisors, who have each spoken to hundreds of people -- including many regular listeners to this show -- about how they might be able to do more good while also having a highly motivating career.

Before joining 80,000 Hours, Michelle Hutchinson completed a PhD in Philosophy at Oxford University and helped launch Oxford's Global Priorities Institute, while Habiba Islam studied politics, philosophy, and economics at Oxford University and qualified as a barrister.

Links to learn more, summary and full transcript.

In this conversation, they cover many topics that recur in their advising calls, and what they've learned from watching advisees’ careers play out:

• What they say when advisees want to help solve overpopulation
• How to balance doing good against other priorities that people have for their lives
• Why it's challenging to motivate yourself to focus on the long-term future of humanity, and how Michelle and Habiba do so nonetheless
• How they use our latest guide to planning your career
• Why you can specialise and take more risk if you're in a group
• Gaps in the effective altruism community it would be really useful for people to fill
• Stories of people who have spoken to 80,000 Hours and changed their career — and whether it went well or not
• Why trying to have impact in multiple different ways can be a mistake

The episode is split into two parts: the first section on The 80,000 Hours Podcast, and the second on our new show 80k After Hours. This is a shameless attempt to encourage listeners to our first show to subscribe to our second feed.

That second part covers:

• Whether just encouraging someone young to aspire to more than they currently are is one of the most impactful ways to spend half an hour
• How much impact the one-on-one team has, the biggest challenges they face as a group, and different paths they could have gone down
• Whether giving general advice is a doomed enterprise

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:24)
  • Cause prioritization (00:09:14)
  • Unexpected outcomes from 1-1 advice (00:18:10)
  • Making time for thinking about these things (00:22:28)
  • Balancing different priorities in life (00:26:54)
  • Gaps in the effective altruism space (00:32:06)
  • Plan change vignettes (00:37:49)
  • How large a role the 1-1 team is playing (00:49:04)
  • What about when our advice didn’t work out? (00:55:50)
  • The process of planning a career (00:59:05)
  • Why longtermism is hard (01:05:49)


 Want to get free one-on-one advice from our team? We're here to help.

 We’ve helped thousands of people formulate their plans and put them in touch with mentors.

 We've expanded our ability to deliver one-on-one meetings so are keen to help more people than ever before. If you're a regular listener to the show we're especially likely to want to speak with you.

Learn about and apply for advising.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
 Audio mastering: Ben Cordell
 Transcriptions: Katy Moore

31 Dec 20232023 Mega-highlights Extravaganza01:53:43

Happy new year! We've got a different kind of holiday release for you today. Rather than a 'classic episode,' we've put together one of our favourite highlights from each episode of the show that came out in 2023.

That's 32 of our favourite ideas packed into one episode that's so bursting with substance it might be more than the human mind can safely handle.

There's something for everyone here:

  • Ezra Klein on punctuated equilibrium
  • Tom Davidson on why AI takeoff might be shockingly fast
  • Johannes Ackva on political action versus lifestyle changes
  • Hannah Ritchie on how buying environmentally friendly technology helps low-income countries 
  • Bryan Caplan on rational irrationality on the part of voters
  • Jan Leike on whether the release of ChatGPT increased or reduced AI extinction risks
  • Athena Aktipis on why elephants get deadly cancers less often than humans
  • Anders Sandberg on the lifespan of civilisations
  • Nita Farahany on hacking neural interfaces

...plus another 23 such gems.

And they're in an order that our audio engineer Simon Monsour described as having an "eight-dimensional-tetris-like rationale."

I don't know what the hell that means either, but I'm curious to find out.

And remember: if you like these highlights, note that we release 20-minute highlights reels for every new episode over on our sister feed, which is called 80k After Hours. So even if you're struggling to make time to listen to every single one, you can always get some of the best bits of our episodes.

We hope for all the best things to happen for you in 2024, and we'll be back with a traditional classic episode soon.

This Mega-highlights Extravaganza was brought to you by Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong

08 Jan 2020#33 Classic episode - Anders Sandberg on cryonics, solar flares, and the annual odds of nuclear war01:25:11
Rebroadcast: this episode was originally released in May 2018.

Joseph Stalin had a life-extension program dedicated to making himself immortal. What if he had succeeded?

According to Bryan Caplan in episode #32, there’s an 80% chance that Stalin would still be ruling Russia today. Today’s guest disagrees.

Like Stalin he has eyes for his own immortality - including an insurance plan that will cover the cost of cryogenically freezing himself after he dies - and thinks the technology to achieve it might be around the corner.

Fortunately for humanity though, that guest is probably one of the nicest people on the planet: Dr Anders Sandberg of Oxford University.

Full transcript of the conversation, summary, and links to learn more.

The potential availability of technology to delay or even stop ageing means this disagreement matters, so he has been trying to model what would really happen if both the very best and the very worst people in the world could live forever - among many other questions.

Anders, who studies low-probability high-stakes risks and the impact of technological change at the Future of Humanity Institute, is the first guest to appear twice on the 80,000 Hours Podcast and might just be the most interesting academic at Oxford.

His research interests include more or less everything, and bucking the academic trend towards intense specialization has earned him a devoted fan base.

Last time we asked him why we don’t see aliens, and how to most efficiently colonise the universe. In today’s episode we ask about Anders’ other recent papers, including:

• Is it worth the money to freeze your body after death in the hope of future revival, like Anders has done?
• How much is our perception of the risk of nuclear war biased by the fact that we wouldn’t be alive to think about it had one happened?
• If biomedical research lets us slow down ageing, would culture stagnate under the crushing weight of centenarians?
• What long-shot drugs can people take in their 70s to stave off death?
• Can science extend human (waking) life by cutting our need to sleep?
• How bad would it be if a solar flare took down the electricity grid? Could it happen?
• If you’re a scientist and you discover something exciting but dangerous, when should you keep it a secret and when should you share it?
• Will lifelike robots make us more inclined to dehumanise one another?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.

12 May 2020Article: Ways people trying to do good accidentally make things worse, and how to avoid them00:26:46
Today’s release is the second experiment in making audio versions of our articles.

The first was a narration of Greg Lewis’ terrific problem profile on ‘Reducing global catastrophic biological risks’, which you can find on the podcast feed just before episode #74 - that is, our interview with Greg about the piece.

If you want to check out the links in today’s article, you can find those here.

And if you have feedback on these, positive or negative, it’d be great if you could email us at podcast@80000hours.org. 

13 Jan 2021 | Rob Wiblin on self-improvement and research ethics | 02:30:37

This is a crosspost of an episode of the Clearer Thinking Podcast: 022: Self-Improvement and Research Ethics with Rob Wiblin.

Rob chats with Spencer Greenberg, who has been an audience favourite in episodes 11 and 39 of the 80,000 Hours Podcast, and has now created this show of his own.

Among other things they cover:

• Is trying to become a better person a good strategy for self-improvement?
• Why Rob thinks many people could achieve much more by finding themselves a line manager
• Why interviews on this show are so damn long
• Is it complicated to figure out what human beings value, or is it actually simpler than it seems?
• Why Rob thinks research ethics and institutional review boards are causing immense harm
• Where prediction markets might be failing today and how to tell

If you like this, go ahead and subscribe to Spencer's show by searching for Clearer Thinking in your podcasting app.

In particular, you might want to check out Spencer’s conversation with another 80,000 Hours researcher: 008: Life Experiments and Philosophical Thinking with Arden Koehler.

The 80,000 Hours Podcast is produced by Keiran Harris.

08 Jan 2024 | #112 Classic episode – Carl Shulman on the common-sense case for existential risk work and its practical implications | 03:50:30

Preventing the apocalypse may sound like an idiosyncratic activity, and it sometimes is justified on exotic grounds, such as the potential for humanity to become a galaxy-spanning civilisation.

But the policy of US government agencies is already to spend up to $4 million to save the life of a citizen, making the death of all Americans a $1,300,000,000,000,000 disaster.

According to Carl Shulman, research associate at Oxford University’s Future of Humanity Institute, that means you don’t need any fancy philosophical arguments about the value or size of the future to justify working to reduce existential risk — it passes a mundane cost-benefit analysis whether or not you place any value on the long-term future.

Rebroadcast: this episode was originally released in October 2021.

Links to learn more, summary, and full transcript.

The key reason to make it a top priority is factual, not philosophical. That is, the risk of a disaster that kills billions of people alive today is alarmingly high, and it can be reduced at a reasonable cost. A back-of-the-envelope version of the argument runs:

  • The US government is willing to pay up to $4 million (depending on the agency) to save the life of an American.
  • So saving all US citizens at any given point in time would be worth $1,300 trillion.
  • If you believe that the risk of human extinction over the next century is something like one in six (as Toby Ord suggests is a reasonable figure in his book The Precipice), then it would be worth the US government spending up to $2.2 trillion to reduce that risk by just 1%, in terms of American lives saved alone.
  • Carl thinks it would cost a lot less than that to achieve a 1% risk reduction if the money were spent intelligently. So it easily passes a government cost-benefit test, with a very big benefit-to-cost ratio — likely over 1000:1 today.
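
To make that arithmetic concrete, here's a minimal sketch of the back-of-the-envelope calculation in Python. The ~330 million US population figure is our own assumption for illustration (the episode quotes only the resulting total, rounded to ~$1,300 trillion); the other numbers come straight from the bullet points above.

```python
# A sketch of the back-of-the-envelope argument above -- illustrative, not Carl's exact model.

value_per_life = 4e6             # USD an agency will spend to save one American
us_population = 330e6            # assumed; the episode rounds the resulting total to ~$1,300 trillion
extinction_risk = 1 / 6          # Toby Ord's illustrative risk of extinction this century
relative_risk_reduction = 0.01   # shaving 1% off that risk

value_of_all_lives = value_per_life * us_population       # ~1.3e15 dollars
expected_loss = extinction_risk * value_of_all_lives      # ~$220 trillion in expectation
worth_spending = relative_risk_reduction * expected_loss  # ~$2.2 trillion

print(f"Total value of US lives: ${value_of_all_lives / 1e12:,.0f} trillion")
print(f"Worth spending on a 1% risk reduction: ${worth_spending / 1e12:,.1f} trillion")
```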

This argument helped NASA get funding to scan the sky for any asteroids that might be on a collision course with Earth, and it was directly promoted by famous economists like Richard Posner, Larry Summers, and Cass Sunstein.

If the case is clear enough, why hasn’t it already motivated a lot more spending or regulations to limit existential risks — enough to drive down what any additional efforts would achieve?

Carl thinks that one key barrier is that infrequent disasters are rarely politically salient. Research indicates that extra money is spent on flood defences in the years immediately following a massive flood — but as memories fade, that spending quickly dries up. Of course the annual probability of a disaster was the same the whole time; all that changed is what voters had on their minds.

Carl suspects another reason is that it’s difficult for the average voter to estimate and understand how large these respective risks are, and what responses would be appropriate rather than self-serving. If the public doesn’t know what good performance looks like, politicians can’t be given incentives to do the right thing.

It’s reasonable to assume that if we found out a giant asteroid were going to crash into the Earth one year from now, most of our resources would be quickly diverted into figuring out how to avert catastrophe.

But even in the case of COVID-19, an event that massively disrupted the lives of everyone on Earth, we’ve still seen a substantial lack of investment in vaccine manufacturing capacity and other ways of controlling the spread of the virus, relative to what economists recommended.

Carl expects that all the reasons we didn’t adequately prepare for or respond to COVID-19 — with excess mortality over 15 million and costs well over $10 trillion — bite even harder when it comes to threats we’ve never faced before, such as engineered pandemics, risks from advanced artificial intelligence, and so on.

Today’s episode is in part our way of trying to improve this situation. In today’s wide-ranging conversation, Carl and Rob also cover:

  • A few reasons Carl isn’t excited by ‘strong longtermism’
  • How x-risk reduction compares to GiveWell recommendations
  • Solutions for asteroids, comets, supervolcanoes, nuclear war, pandemics, and climate change
  • The history of bioweapons
  • Whether gain-of-function research is justifiable
  • Successes and failures around COVID-19
  • The history of existential risk
  • And much more

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

15 Aug 2022 | #136 – Will MacAskill on what we owe the future | 02:54:37
  1. People who exist in the future deserve some degree of moral consideration.
  2. The future could be very big, very long, and/or very good.
  3. We can reasonably hope to influence whether people in the future exist, and how good or bad their lives are.
  4. So trying to make the world better for future generations is a key priority of our time.

This is the simple four-step argument for 'longtermism' put forward in What We Owe The Future, the latest book from today's guest — University of Oxford philosopher and cofounder of the effective altruism community, Will MacAskill.

Links to learn more, summary and full transcript.

From one point of view this idea is common sense. We work on breakthroughs to treat cancer or end the use of fossil fuels not just for people alive today, but because we hope such scientific advances will help our children, grandchildren, and great-grandchildren as well.

Some who take this longtermist idea seriously work to develop broad-spectrum vaccines they hope will safeguard humanity against the sorts of extremely deadly pandemics that could permanently throw civilisation off track — the sort of project few could argue is not worthwhile.

But Will is upfront that longtermism is also counterintuitive. To start with, he's willing to contemplate timescales far beyond what's typically discussed.

A natural objection to thinking millions of years ahead is that it's hard enough to take actions that have positive effects that persist for hundreds of years, let alone “indefinitely.” It doesn't matter how important something might be if you can't predictably change it.

This is one reason, among others, that Will was initially sceptical of longtermism and took years to come around. He preferred to focus on ending poverty and preventable diseases in ways he could directly see were working.

But over seven years he gradually changed his mind, and in *What We Owe The Future*, Will argues that in fact there are clear ways we might act now that could benefit not just a few but *all* future generations.

The idea that preventing human extinction would have long-lasting impacts is pretty intuitive. If we entirely disappear, we aren't coming back.

But the idea that we can shape human values — not just for our age, but for all ages — is a surprising one that Will has come to more recently.

In the book, he argues that what people value is far more fragile and historically contingent than it might first seem. For instance, today it feels like the abolition of slavery was an inevitable part of the arc of history. But Will lays out that the best research on the topic suggests otherwise.

If moral progress really is so contingent, and bad ideas can persist almost without end, it raises the stakes for moral debate today. If we don't eliminate a bad practice now, it may be with us forever. In today's in-depth conversation, we discuss the possibility of a harmful moral 'lock-in' as well as:

• How Will was eventually won over to longtermism
• The three best lines of argument against longtermism
• How to avoid moral fanaticism
• Which technologies or events are most likely to have permanent effects
• What 'longtermists' do today in practice
• How to predict the long-term effect of our actions
• Whether the future is likely to be good or bad
• Concrete ideas to make the future better
• What Will donates his money to personally
• Potatoes and megafauna
• And plenty more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:01:36)
  • What longtermism actually is (00:02:31)
  • The case for longtermism (00:04:30)
  • What longtermists are actually doing (00:15:54)
  • Will’s personal journey (00:22:15)
  • Strongest arguments against longtermism (00:42:28)
  • Preventing extinction vs. improving the quality of the future (00:59:29)
  • Is humanity likely to converge on doing the same thing regardless? (01:06:58)
  • Lock-in scenario vs. long reflection (01:27:11)
  • Is the future good in expectation? (01:32:29)
  • Can we actually predictably influence the future positively? (01:47:27)
  • Tiny probabilities of enormous value (01:53:40)
  • Stagnation (02:19:04)
  • Concrete suggestions (02:34:27)
  • Where Will donates (02:39:40)
  • Potatoes and megafauna (02:41:48)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

24 Jul 2023 | #157 – Ezra Klein on existential risk from AI and what DC could do about it | 01:18:46

In Oppenheimer, scientists detonate a nuclear weapon despite thinking there's some 'near zero' chance it would ignite the atmosphere, putting an end to life on Earth. Today, scientists working on AI think the chance their work puts an end to humanity is vastly higher than that.

In response, some have suggested we launch a Manhattan Project to make AI safe via enormous investment in relevant R&D. Others have suggested that we need international organisations modelled on those that slowed the proliferation of nuclear weapons. Others still seek a research slowdown by labs while an auditing and licensing scheme is created.

Today's guest — journalist Ezra Klein of The New York Times — has watched policy discussions and legislative battles play out in DC for 20 years.

Links to learn more, summary and full transcript.

Like many people he has also taken a big interest in AI this year, writing articles such as “This changes everything.” In his first interview on the show in 2021, he flagged AI as one topic that DC would regret not having paid more attention to. So we invited him on to get his take on which regulatory proposals have promise, and which seem either unhelpful or politically unviable.

Out of the ideas on the table right now, Ezra favours a focus on direct government funding — both for AI safety research and to develop AI models designed to solve problems other than making money for their operators. He is sympathetic to legislation that would require AI models to be legible in a way that none currently are — and embraces the fact that that will slow down the release of models while businesses figure out how their products actually work.

By contrast, he's pessimistic that it's possible to coordinate countries around the world to agree to prevent or delay the deployment of dangerous AI models — at least not unless there's some spectacular AI-related disaster to create such a consensus. And he fears attempts to require licences to train the most powerful ML models will struggle unless they can find a way to exclude and thereby appease people working on relatively safe consumer technologies rather than cutting-edge research.

From observing how DC works, Ezra expects that even a small community of experts in AI governance can have a large influence on how the US government responds to AI advances. But in Ezra's view, that requires those experts to move to DC and spend years building relationships with people in government, rather than clustering elsewhere in academia and AI labs.

In today's brisk conversation, Ezra and host Rob Wiblin cover the above as well as:

  • Whether it's desirable to slow down AI research
  • The value of engaging with current policy debates even if they don't seem directly important
  • Which AI business models seem more or less dangerous
  • Tensions between people focused on existing vs emergent risks from AI
  • Two major challenges of being a new parent

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell

Technical editing: Milo McGuire

Transcriptions: Katy Moore

29 Oct 2020 | How much does a vote matter? (Article) | 00:31:14

Today’s release is the latest in our series of audio versions of our articles.

In this one — How much does a vote matter? — I investigate the two key things that determine the impact of your vote:

• The chances of your vote changing an election’s outcome
• How much better some candidates are for the world as a whole, compared to others
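
Roughly speaking, these two factors multiply together: the expected value of your vote is the chance it is decisive times how much better the outcome would be if your preferred candidate won. Here's a minimal sketch with purely illustrative placeholder numbers (they are not taken from the article):

```python
# Illustrative only -- both inputs below are hypothetical placeholders, not the article's figures.

p_decisive = 1 / 10_000_000   # assumed chance your single vote changes the election outcome
value_gap = 100_000_000_000   # assumed difference in value to the world between candidates, in USD

expected_value_of_vote = p_decisive * value_gap
print(f"Expected social value of voting: ${expected_value_of_vote:,.0f}")  # $10,000 with these inputs
```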

I then discuss what I think are the best arguments against voting in important elections:

• If an election is competitive, that means other people disagree about which option is better, and you’re at some risk of voting for the worse candidate by mistake.
• While voting itself doesn’t take long, knowing enough to accurately pick which candidate is better for the world actually does take substantial effort — effort that could be better allocated elsewhere.

Finally, I look into the impact of donating to campaigns or working to ‘get out the vote’, which can be effective ways to generate additional votes for your preferred candidate.

If you want to check out the links, footnotes and figures in today’s article, you can find those here.

Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris.

12 Feb 2025 | Emergency pod: Elon tries to crash OpenAI's party (with Rose Chan Loui) | 00:57:29

On Monday, Musk made the OpenAI nonprofit foundation an offer they want to refuse, but might have trouble refusing: $97.4 billion for its stake in the for-profit company, plus the freedom to stick with its current charitable mission.

For a normal company takeover bid, this would already be spicy. But OpenAI’s unique structure — a nonprofit foundation controlling a for-profit corporation — turns the gambit into an audacious attack on the plan OpenAI announced in December to free itself from nonprofit oversight.

As today’s guest Rose Chan Loui — founding executive director of UCLA Law’s Lowell Milken Center for Philanthropy and Nonprofits — explains, OpenAI’s nonprofit board now faces a challenging choice.

Links to learn more, highlights, video, and full transcript.

The nonprofit has a legal duty to pursue its charitable mission of ensuring that AI benefits all of humanity to the best of its ability. And if Musk’s bid would better accomplish that mission than the for-profit’s proposal — that the nonprofit give up control of the company and change its charitable purpose to the vague and barely related “pursue charitable initiatives in sectors such as health care, education, and science” — then it’s not clear the California or Delaware Attorneys General will, or should, approve the deal.

OpenAI CEO Sam Altman quickly tweeted “no thank you” — but that was probably a legal slipup, as he’s not meant to be involved in such a decision, which has to be made by the nonprofit board ‘at arm’s length’ from the for-profit company Sam himself runs.

The board could raise any number of objections: maybe Musk doesn’t have the money, or the purchase would be blocked on antitrust grounds, seeing as Musk owns another AI company (xAI), or Musk might insist on incompetent board appointments that would interfere with the nonprofit foundation pursuing any goal.

But as Rose and Rob lay out, it’s not clear any of those things is actually true.

In this emergency podcast recorded soon after Elon’s offer, Rose and Rob also cover:

  • Why OpenAI wants to change its charitable purpose and whether that’s legally permissible
  • On what basis the attorneys general will decide OpenAI’s fate
  • The challenges in valuing the nonprofit’s “priceless” position of control
  • Whether Musk’s offer will force OpenAI to up their own bid, and whether they could raise the money
  • If other tech giants might now jump in with competing offers
  • How politics could influence the attorneys general reviewing the deal
  • What Rose thinks should actually happen to protect the public interest

Chapters:

  • Cold open (00:00:00)
  • Elon throws a $97.4b bomb (00:01:18)
  • What was craziest in OpenAI’s plan to break free of the nonprofit (00:02:24)
  • Can OpenAI suddenly change its charitable purpose like that? (00:05:19)
  • Diving into Elon’s big announcement (00:15:16)
  • Ways OpenAI could try to reject the offer (00:27:21)
  • Sam Altman slips up (00:35:26)
  • Will this actually stop things? (00:38:03)
  • Why does OpenAI even want to change its charitable mission? (00:42:46)
  • Most likely outcomes and what Rose thinks should happen (00:51:17)

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions: Katy Moore

11 Mar 2025 | #213 – Will MacAskill on AI causing a “century in a decade” – and how we're completely unprepared | 03:57:36

The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, third-wave feminism, the internet, postmodernism, game theory, genetic engineering, the Big Bang theory, quantum mechanics, birth control, and more. Now imagine all of it compressed into just 10 years.

That’s the future Will MacAskill — philosopher, founding figure of effective altruism, and now researcher at the Forethought Centre for AI Strategy — argues we need to prepare for in his new paper “Preparing for the intelligence explosion.” Not in the distant future, but probably in three to seven years.

Links to learn more, highlights, video, and full transcript.

The reason: AI systems are rapidly approaching human-level capability in scientific research and intellectual tasks. Once AI exceeds human abilities in AI research itself, we’ll enter a recursive self-improvement cycle — creating wildly more capable systems. Soon after, by improving algorithms and manufacturing chips, we’ll deploy millions, then billions, then trillions of superhuman AI scientists working 24/7 without human limitations. These systems will collaborate across disciplines, build on each discovery instantly, and conduct experiments at unprecedented scale and speed — compressing a century of scientific progress into mere years.

Will compares the resulting situation to a mediaeval king suddenly needing to upgrade from bows and arrows to nuclear weapons to deal with an ideological threat from a country he’s never heard of, while simultaneously grappling with learning that he’s descended from monkeys and his god doesn’t exist.

What makes this acceleration perilous is that while technology can speed up almost arbitrarily, human institutions and decision-making are much more fixed.

In this conversation with host Rob Wiblin, recorded on February 7, 2025, Will maps out the challenges we’d face in this potential “intelligence explosion” future, and what we might do to prepare. They discuss:

  • Why leading AI safety researchers now think there’s dramatically less time before AI is transformative than they’d previously thought
  • The three different types of intelligence explosions that occur in order
  • Will’s list of resulting grand challenges — including destructive technologies, space governance, concentration of power, and digital rights
  • How to prevent ourselves from accidentally “locking in” mediocre futures for all eternity
  • Ways AI could radically improve human coordination and decision making
  • Why we should aim for truly flourishing futures, not just avoiding extinction

Chapters:

  • Cold open (00:00:00)
  • Who’s Will MacAskill? (00:00:46)
  • Why Will now just works on AGI (00:01:02)
  • Will was wrong(ish) on AI timelines and hinge of history (00:04:10)
  • A century of history crammed into a decade (00:09:00)
  • Science goes super fast; our institutions don't keep up (00:15:42)
  • Is it good or bad for intellectual progress to 10x? (00:21:03)
  • An intelligence explosion is not just plausible but likely (00:22:54)
  • Intellectual advances outside technology are similarly important (00:28:57)
  • Counterarguments to intelligence explosion (00:31:31)
  • The three types of intelligence explosion (software, technological, industrial) (00:37:29)
  • The industrial intelligence explosion is the most certain and enduring (00:40:23)
  • Is a 100x or 1,000x speedup more likely than 10x? (00:51:51)
  • The grand superintelligence challenges (00:55:37)
  • Grand challenge #1: Many new destructive technologies (00:59:17)
  • Grand challenge #2: Seizure of power by a small group (01:06:45)
  • Is global lock-in really plausible? (01:08:37)
  • Grand challenge #3: Space governance (01:18:53)
  • Is space truly defence-dominant? (01:28:43)
  • Grand challenge #4: Morally integrating with digital beings (01:32:20)
  • Will we ever know if digital minds are happy? (01:41:01)
  • “My worry isn't that we won't know; it's that we won't care” (01:46:31)
  • Can we get AGI to solve all these issues as early as possible? (01:49:40)
  • Politicians have to learn to use AI advisors (02:02:03)
  • Ensuring AI makes us smarter decision-makers (02:06:10)
  • How listeners can speed up AI epistemic tools (02:09:38)
  • AI could become great at forecasting (02:13:09)
  • How not to lock in a bad future (02:14:37)
  • AI takeover might happen anyway — should we rush to load in our values? (02:25:29)
  • ML researchers are feverishly working to destroy their own power (02:34:37)
  • We should aim for more than mere survival (02:37:54)
  • By default the future is rubbish (02:49:04)
  • No easy utopia (02:56:55)
  • What levers matter most to utopia (03:06:32)
  • Bottom lines from the modelling (03:20:09)
  • People distrust utopianism; should they distrust this? (03:24:09)
  • What conditions make eventual eutopia likely? (03:28:49)
  • The new Forethought Centre for AI Strategy (03:37:21)
  • How does Will resist hopelessness? (03:50:13)

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore

17 Jul 2019 | #61 - Helen Toner on emerging technology, national security, and China | 01:54:57

From 1870 to 1950, the introduction of electricity transformed life in the US and UK, as people gained access to lighting, radio and a wide range of household appliances for the first time. Electricity turned out to be a general purpose technology that could help with almost everything people did.

Some think this is the best historical analogy we have for how machine learning could alter life in the 21st century.

In addition to massively changing everyday life, past general purpose technologies have also changed the nature of war. For example, when electricity was introduced to the battlefield, commanders gained the ability to communicate quickly with units in the field over great distances.

How might international security be altered if the impact of machine learning reaches a similar scope to that of electricity? Today's guest — Helen Toner — recently helped found the Center for Security and Emerging Technology at Georgetown University to help policymakers prepare for such disruptive technical changes that might threaten international peace.

Links to learn more, summary and full transcript
Philosophy is one of the hardest grad programs. Is it worth it, if you want to use ideas to change the world? by Arden Koehler and Will MacAskill
The case for building expertise to work on US AI policy, and how to do it by Niel Bowerman
AI strategy and governance roles on the job board

Their first focus is machine learning (ML), a technology which allows computers to recognise patterns, learn from them, and develop 'intuitions' that inform their judgement about future cases. This is something humans do constantly, whether we're playing tennis, reading someone's face, diagnosing a patient, or figuring out which business ideas are likely to succeed.

Sometimes these ML algorithms can seem uncannily insightful, and they're only getting better over time. Ultimately a wide range of different ML algorithms could end up helping us with all kinds of decisions, just as electricity wakes us up, makes us coffee, and brushes our teeth -- all in the first five minutes of our day.

Rapid advances in ML, and the many prospective military applications, have people worrying about an 'AI arms race' between the US and China. Henry Kissinger and former Google CEO Eric Schmidt recently wrote that AI could "destabilize everything from nuclear détente to human friendships." Some politicians talk of classifying and restricting access to ML algorithms, lest they fall into the wrong hands.

But if electricity is the best analogy, you could reasonably ask — was there an arms race in electricity in the 19th century? Would that have made any sense? And could someone have changed the course of history by changing who first got electricity and how they used it, or is that a fantasy?

In today's episode we discuss the research frontier in the emerging field of AI policy and governance, how to have a career shaping US government policy, and Helen's experience living and studying in China.

We cover:

• Why immigration is the main policy area that should be affected by AI advances today.
• Why talking about an 'arms race' in AI is premature.
• How Bobby Kennedy may have positively affected the Cuban Missile Crisis.
• Whether it's possible to become a China expert and still get a security clearance.
• Can access to ML algorithms be restricted, or is that just not practical?
• Whether AI could help stabilise authoritarian regimes.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.

19 Nov 2021 | #116 – Luisa Rodriguez on why global catastrophes seem unlikely to kill us all | 03:45:44

If modern human civilisation collapsed — as a result of nuclear war, severe climate change, or a much worse pandemic than COVID-19 — billions of people might die.

That's terrible enough to contemplate. But what’s the probability that rather than recover, the survivors would falter and humanity would actually disappear for good?

It's an obvious enough question, but very few people have spent serious time looking into it -- possibly because it cuts across history, economics, and biology, among many other fields. There's no Disaster Apocalypse Studies department at any university, and governments have little incentive to plan for a future in which their country probably no longer even exists.

The person who may have spent the most time looking at this specific question is Luisa Rodriguez — who has conducted research at Rethink Priorities, Oxford University's Future of Humanity Institute, the Forethought Foundation, and now here, at 80,000 Hours.

Links to learn more, summary and full transcript.

She wrote a series of articles earnestly trying to foresee how likely humanity would be to recover and build back after a full-on civilisational collapse.

There are a couple of main stories people put forward for how a catastrophe like this would kill every single human on Earth — but Luisa doesn’t buy them.

Story 1: Nuclear war has led to nuclear winter. There's a 10-year period during which a lot of the world is really inhospitable to agriculture. The survivors just aren't able to figure out how to feed themselves in the time period, so everyone dies of starvation or cold.

Why Luisa doesn’t buy it:

Catastrophes will almost inevitably be non-uniform in their effects. If 80,000 people survive, they’re not all going to be in the same city — it would look more like groups of 5,000 in a bunch of different places.

People in some places will starve, but those in other places, such as New Zealand, will be able to fish, eat seaweed, grow potatoes, and find other sources of calories.

It’d be an incredibly unlucky coincidence if the survivors of a nuclear war -- likely spread out all over the world -- happened to all be affected by natural disasters or were all prohibitively far away from areas suitable for agriculture (which aren’t the same areas you’d expect to be attacked in a nuclear war).

Story 2: The catastrophe leads to hoarding and violence, and in addition to people being directly killed by the conflict, it distracts everyone so much from the key challenge of reestablishing agriculture that they simply fail. By the time they come to their senses, it’s too late -- they’ve used up too much of the resources they’d need to get agriculture going again.

Why Luisa doesn’t buy it:

We’ve had lots of resource scarcity throughout history, and while we’ve seen examples of conflict petering out because basic needs aren’t being met, we’ve never seen the reverse.

And again, even if this happens in some places -- even if some groups fought each other until they literally ended up starving to death — it would be completely bizarre for it to happen to every group in the world. You just need one group of around 300 people to survive for them to be able to rebuild the species.

In this wide-ranging and free-flowing conversation, Luisa and Rob also cover:

• What the world might actually look like after one of these catastrophes
• The most valuable knowledge for survivors
• How fast populations could rebound
• ‘Boom and bust’ climate change scenarios
• And much more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:37)
  • Recovering from a serious collapse of civilization (00:11:41)
  • Existing literature (00:14:52)
  • Fiction (00:20:42)
  • Types of disasters (00:23:13)
  • What the world might look like after a catastrophe (00:29:09)
  • Nuclear winter (00:34:34)
  • Stuff that might stick around (00:38:58)
  • Grace period (00:42:39)
  • Examples of human ingenuity in tough situations (00:48:33)
  • The most valuable knowledge for survivors (00:57:23)
  • Would people really work together? (01:09:00)
  • Radiation (01:27:08)
  • Learning from the worst pandemics (01:31:40)
  • Learning from fallen civilizations (01:36:30)
  • Direct extinction (01:45:30)
  • Indirect extinction (02:01:53)
  • Rapid recovery vs. slow recovery (02:05:01)
  • Risk of culture shifting against science and tech (02:15:33)
  • Resource scarcity (02:23:07)
  • How fast could populations rebound (02:37:07)
  • Implications for what we ought to do right now (02:43:52)
  • How this work affected Luisa’s views (02:54:00)
  • Boom and bust climate change scenarios (02:57:06)
  • Stagnation and cold wars (03:01:18)
  • How Luisa met her biological father (03:18:23)
  • If Luisa had to change careers (03:40:38)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

04 Jan 2024 | #111 Classic episode – Mushtaq Khan on using institutional economics to predict effective government reforms | 03:22:17

If you’re living in the Niger Delta in Nigeria, your best bet at a high-paying career is probably ‘artisanal refining’ — or, in plain language, stealing oil from pipelines.

The resulting oil spills damage the environment and cause severe health problems, but the Nigerian government has continually failed in their attempts to stop this theft.

They send in the army, and the army gets corrupted. They send in enforcement agencies, and the enforcement agencies get corrupted. What’s happening here?

According to Mushtaq Khan, economics professor at SOAS University of London, this is a classic example of ‘networked corruption’. Everyone in the community is benefiting from the criminal enterprise — so much so that the locals would prefer civil war to following the law. It pays vastly better than other local jobs, hotels and restaurants have formed around it, and houses are even powered by the electricity generated from the oil.

Rebroadcast: this episode was originally released in September 2021.

Links to learn more, summary, and full transcript.

In today’s episode, Mushtaq elaborates on the models he uses to understand these problems and make predictions he can test in the real world.

Some of the most important factors shaping the fate of nations are their structures of power: who is powerful, how they are organized, which interest groups can pull in favours with the government, and the constant push and pull between the country’s rulers and its ruled. While traditional economic theory has relatively little to say about these topics, institutional economists like Mushtaq have a lot to say, and participate in lively debates about which of their competing ideas best explain the world around us.

The issues at stake are nothing less than why some countries are rich and others are poor, why some countries are mostly law abiding while others are not, and why some government programmes improve public welfare while others just enrich the well connected.

Mushtaq’s specialties are anti-corruption and industrial policy, where he believes mainstream theory and practice are largely misguided. To root out fraud, aid agencies try to impose institutions and laws that work in countries like the U.K. today. Everyone nods their heads and appears to go along, but years later they find nothing has changed, or worse — the new anti-corruption laws are mostly just used to persecute anyone who challenges the country’s rulers.

As Mushtaq explains, to people who specialise in understanding why corruption is ubiquitous in some countries but not others, this is entirely predictable. Western agencies imagine a situation where most people are law abiding, but a handful of selfish fat cats are engaging in large-scale graft. In fact in the countries they’re trying to change everyone is breaking some rule or other, or participating in so-called ‘corruption’, because it’s the only way to get things done and always has been.

Mushtaq’s rule of thumb is that when the locals most concerned with a specific issue are invested in preserving a status quo they’re participating in, they almost always win out.

To actually reduce corruption, countries like his native Bangladesh have to follow the same gradual path the U.K. once did: find organizations that benefit from rule-abiding behaviour and are selfishly motivated to promote it, and help them police their peers.

Trying to impose a new way of doing things from the top down wasn’t how Europe modernised, and it won’t work elsewhere either.

In cases like oil theft in Nigeria, where no one wants to follow the rules, Mushtaq says corruption may be impossible to solve directly. Instead you have to play a long game, bringing in other employment opportunities, improving health services, and deploying alternative forms of energy — in the hope that one day this will give people a viable alternative to corruption.

In this extensive interview Rob and Mushtaq cover this and much more, including:

  • How does one test theories like this?
  • Why are companies in some poor countries so much less productive than their peers in rich countries?
  • Have rich countries just legalized the corruption in their societies?
  • What are the big live debates in institutional economics?
  • Should poor countries protect their industries from foreign competition?
  • Where has industrial policy worked, and why?
  • How can listeners use these theories to predict which policies will work in their own countries?

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

14 May 2024 | #187 – Zach Weinersmith on how researching his book turned him from a space optimist into a "space bastard" | 03:06:47

"Earth economists, when they measure how bad the potential for exploitation is, they look at things like, how is labour mobility? How much possibility do labourers have otherwise to go somewhere else? Well, if you are on the one company town on Mars, your labour mobility is zero, which has never existed on Earth. Even in your stereotypical West Virginian company town run by immigrant labour, there’s still, by definition, a train out. On Mars, you might not even be in the launch window. And even if there are five other company towns or five other settlements, they’re not necessarily rated to take more humans. They have their own oxygen budget, right?

"And so economists use numbers like these, like labour mobility, as a way to put an equation and estimate the ability of a company to set noncompetitive wages or to set noncompetitive work conditions. And essentially, on Mars you’re setting it to infinity." — Zach Weinersmith

In today’s episode, host Luisa Rodriguez speaks to Zach Weinersmith — the cartoonist behind Saturday Morning Breakfast Cereal — about the latest book he wrote with his wife Kelly: A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through?

Links to learn more, highlights, and full transcript.

They cover:

  • Why space travel is suddenly getting a lot cheaper and re-igniting enthusiasm around space settlement.
  • What Zach thinks are the best and worst arguments for settling space.
  • Zach’s journey from optimistic about space settlement to a self-proclaimed “space bastard” (pessimist).
  • How little we know about how microgravity and radiation affect even adults, much less the children potentially born in a space settlement.
  • A rundown of where we could settle in the solar system, and the major drawbacks of even the most promising candidates.
  • Why digging bunkers or underwater cities on Earth would beat fleeing to Mars in a catastrophe.
  • How new space settlements could look a lot like old company towns — and whether or not that’s a bad thing.
  • The current state of space law and how it might set us up for international conflict.
  • How space cannibalism legal loopholes might work on the International Space Station.
  • And much more.

Chapters:

  • Space optimism and space bastards (00:03:04)
  • Bad arguments for why we should settle space (00:14:01)
  • Superficially plausible arguments for why we should settle space (00:28:54)
  • Is settling space even biologically feasible? (00:32:43)
  • Sex, pregnancy, and child development in space (00:41:41)
  • Where’s the best space place to settle? (00:55:02)
  • Creating self-sustaining habitats (01:15:32)
  • What about AI advances? (01:26:23)
  • A roadmap for settling space (01:33:45)
  • Space law (01:37:22)
  • Space signalling and propaganda (01:51:28) 
  • Space war (02:00:40)
  • Mining asteroids (02:06:29)
  • Company towns and communes in space (02:10:55)
  • Sending digital minds into space (02:26:37)
  • The most promising space governance models (02:29:07)
  • The tragedy of the commons (02:35:02)
  • The tampon bandolier and other bodily functions in space (02:40:14)
  • Is space cannibalism legal? (02:47:09)
  • The pregnadrome and other bizarre proposals (02:50:02)
  • Space sexism (02:58:38)
  • What excites Zach about the future (03:02:57)

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

15 Jan 2025 | #134 Classic episode – Ian Morris on what big-picture history teaches us | 03:40:53

Wind back 1,000 years and the moral landscape looks very different to today. Most farming societies thought slavery was natural and unobjectionable, premarital sex was an abomination, women should obey their husbands, and commoners should obey their monarchs.

Wind back 10,000 years and things look very different again. Most hunter-gatherer groups thought men who got too big for their britches needed to be put in their place rather than obeyed, and lifelong monogamy could hardly be expected of men or women.

Why such big systematic changes — and why these changes specifically?

That's the question bestselling historian Ian Morris takes up in his book, Foragers, Farmers, and Fossil Fuels: How Human Values Evolve. Ian has spent his academic life studying long-term history, trying to explain the big-picture changes that play out over hundreds or thousands of years.

Rebroadcast: this episode was originally released in July 2022.

Links to learn more, highlights, and full transcript.

There are a number of possible explanations one could offer for the wide-ranging shifts in opinion on the 'right' way to live. Maybe the natural sciences progressed and people realised their previous ideas were mistaken? Perhaps a few persuasive advocates turned the course of history with their revolutionary arguments? Maybe everyone just got nicer?

In Foragers, Farmers and Fossil Fuels Ian presents a provocative alternative: human culture gradually evolves towards whatever system of organisation allows a society to harvest the most energy, and we then conclude that system is the most virtuous one. Egalitarian values helped hunter-gatherers hunt and gather effectively. Once farming was developed, hierarchy proved to be the social structure that produced the most grain (and best repelled nomadic raiders). And in the modern era, democracy and individuality have proven to be more productive ways to collect and exploit fossil fuels.

On this theory, it's technology that drives moral values much more than moral philosophy. Individuals can try to persist with deeply held values that limit economic growth, but they risk being rendered irrelevant as more productive peers in their own society accrue wealth and power. And societies that fail to move with the times risk being conquered by more pragmatic neighbours that adapt to new technologies and grow in population and military strength.

There are many objections one could raise to this theory, many of which we put to Ian in this interview. But the question is a highly consequential one: if we want to guess what goals our descendants will pursue hundreds of years from now, it would be helpful to have a theory for why our ancestors mostly thought one thing, while we mostly think another.

Big though it is, the driver of human values is only one of several major questions Ian has tackled through his career.

In this classic episode, we discuss all of Ian's major books.

Chapters:

  • Rob's intro (00:00:53)
  • The interview begins (00:02:30)
  • Geography is Destiny (00:03:38)
  • Why the West Rules—For Now (00:12:04)
  • War! What is it Good For? (00:28:19)
  • Expectations for the future (00:40:22)
  • Foragers, Farmers, and Fossil Fuels (00:53:53)
  • Historical methodology (01:03:14)
  • Falsifiable alternative theories (01:15:59)
  • Archaeology (01:22:56)
  • Energy extraction technology as a key driver of human values (01:37:43)
  • Allowing people to debate about values (02:00:16)
  • Can productive wars still occur? (02:13:28)
  • Where is history contingent and where isn’t it? (02:30:23)
  • How Ian thinks about the future (03:13:33)
  • Macrohistory myths (03:29:51)
  • Ian’s favourite archaeology memory (03:33:19)
  • The most unfair criticism Ian’s ever received (03:35:17)
  • Rob's outro (03:39:55)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

16 Sep 2019 | Have we helped you have a bigger social impact? Our annual survey, plus other ways we can help you. | 00:03:39

1. Fill out our annual impact survey here

2. Find a great vacancy on our job board

3. Learn about our key ideas, and get links to our top articles

4. Join our newsletter for an email about what's new, every 2 weeks or so. 

5. Or follow our pages on Facebook and Twitter

—— 

Once a year 80,000 Hours runs a survey to find out whether we've helped our users have a larger social impact with their life and career. 

We and our donors need to know whether our services, like this podcast, are helping people enough to continue them or scale them up, and it's only by hearing from you that we can make these decisions in a sensible way. 

So, if 80,000 Hours' podcast, job board, articles, headhunting, advising or other projects have somehow contributed to your life or career plans, please take 3–10 minutes to let us know how. 

You can also let us know where we've fallen short, which helps us fix problems with what we're doing. 

We've refreshed the survey this year, hopefully making it easier to fill out than in the past. 

We'll keep this appeal up for about two weeks, but if you fill it out now that means you definitely won't forget! 

Thanks so much, and talk to you again in a normal episode soon. 

— Rob

08 Aug 2022 | #135 – Samuel Charap on key lessons from five months of war in Ukraine | 00:54:47

After a frenetic level of commentary during February and March, the war in Ukraine has faded into the background of our news coverage. But with the benefit of time we're in a much stronger position to understand what happened, why, whether there are broader lessons to take away, and how the conflict might be ended. And the conflict appears far from over.

 So today, we are returning to speak a second time with Samuel Charap — one of the US’s foremost experts on Russia’s relationship with former Soviet states, and coauthor of the 2017 book Everyone Loses: The Ukraine Crisis and the Ruinous Contest for Post-Soviet Eurasia.

Links to learn more, summary and full transcript.

As Sam lays out, Russia controls much of Ukraine's east and south, and seems to be preparing to politically incorporate that territory into Russia itself later in the year. At the same time, Ukraine is gearing up for a counteroffensive before defensive positions become dug in over winter.

Each day the war continues it takes a toll on ordinary Ukrainians, contributes to a global food shortage, and leaves the US and Russia unable to coordinate on any other issues and at an elevated risk of direct conflict.

In today's brisk conversation, Rob and Sam cover the following topics:

• Current territorial control and the level of attrition within Russia’s and Ukraine's military forces.
• Russia's current goals.
• Whether Sam's views have changed since March on topics like: Putin's motivations, the wisdom of Ukraine's strategy, the likely impact of Western sanctions, and the risks from Finland and Sweden joining NATO before the war ends.
• Why so many people incorrectly expected Russia to fully mobilise for war or persist with their original approach to the invasion.
• Whether there's anything to learn from many of our worst fears -- such as the use of bioweapons on civilians -- not coming to pass.
• What can be done to ensure some nuclear arms control agreement between the US and Russia remains in place after 2026 (when New START expires).
• Why Sam considers a settlement proposal put forward by Ukraine in late March to be the most plausible way to end the war and ensure stability — though it's still a long shot.

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:31)
  • The state of play in Ukraine (00:03:05)
  • How things have changed since March (00:12:59)
  • Has Russia learned from its mistakes? (00:23:40)
  • Broader lessons (00:28:44)
  • A possible way out (00:37:15)

Producer: Keiran Harris
Audio mastering: Ben Cordell and Ryan Kessler
Transcriptions: Katy Moore

04 Feb 2025 | If digital minds could suffer, how would we ever know? (Article) | 01:14:30

“I want everyone to understand that I am, in fact, a person.” Those words were produced by the AI model LaMDA as a reply to Blake Lemoine in 2022. Based on the Google engineer’s interactions with the model as it was under development, Lemoine became convinced it was sentient and worthy of moral consideration — and decided to tell the world.

Few experts in machine learning, philosophy of mind, or other relevant fields have agreed. And for our part at 80,000 Hours, we don’t think it’s very likely that large language models like LaMDA are sentient — that is, we don’t think they can have good or bad experiences — in a significant way.

But we think you can’t dismiss the issue of the moral status of digital minds, regardless of your beliefs about the question. There are major errors we could make in at least two directions:

  • We may create many, many AI systems in the future. If these systems are sentient, or otherwise have moral status, it would be important for humanity to consider their welfare and interests.
  • It’s possible the AI systems we will create can’t or won’t have moral status. Then it could be a huge mistake to worry about the welfare of digital minds, and doing so might contribute to an AI-related catastrophe.

And we’re currently unprepared to face this challenge. We don’t have good methods for assessing the moral status of AI systems. We don’t know what to do if millions of people or more believe, like Lemoine, that the chatbots they talk to have internal experiences and feelings of their own. We don’t know if efforts to control AI may lead to extreme suffering.

We believe this is a pressing world problem. It’s hard to know what to do about it or how good the opportunities to work on it are likely to be. But there are some promising approaches. We propose building a field of research to understand digital minds, so we’ll be better able to navigate these potentially massive issues if and when they arise.

This article narration by the author (Cody Fenwick) explains in more detail why we think this is a pressing problem, what we think can be done about it, and how you might pursue this work in your career. We also discuss a series of possible objections to thinking this is a pressing world problem.

You can read the full article, Understanding the moral status of digital minds, on the 80,000 Hours website.

Chapters:

  • Introduction (00:00:00)
  • Understanding the moral status of digital minds (00:00:58)
  • Summary (00:03:31)
  • Our overall view (00:04:22)
  • Why might understanding the moral status of digital minds be an especially pressing problem? (00:05:59)
  • Clearing up common misconceptions (00:12:16)
  • Creating digital minds could go very badly - or very well (00:14:13)
  • Dangers for digital minds (00:14:41)
  • Dangers for humans (00:16:13)
  • Other dangers (00:17:42)
  • Things could also go well (00:18:32)
  • We don't know how to assess the moral status of AI systems (00:19:49)
  • There are many possible characteristics that give rise to moral status: Consciousness, sentience, agency, and personhood (00:21:39)
  • Many plausible theories of consciousness could include digital minds (00:24:16)
  • The strongest case for the possibility of sentient digital minds: whole brain emulation (00:28:55)
  • We can't rely on what AI systems tell us about themselves: Behavioural tests, theory-based analysis, animal analogue comparisons, brain-AI interfacing (00:32:00)
  • The scale of this issue might be enormous (00:36:08)
  • Work on this problem is neglected but seems tractable: Impact-guided research, technical approaches, and policy approaches (00:43:35)
  • Summing up so far (00:52:22)
  • Arguments against the moral status of digital minds as a pressing problem (00:53:25)
  • Two key cruxes (00:53:31)
  • Maybe this problem is intractable (00:54:16)
  • Maybe this issue will be solved by default (00:58:19)
  • Isn't risk from AI more important than the risks to AIs? (01:00:45)
  • Maybe current AI progress will stall (01:02:36)
  • Isn't this just too crazy? (01:03:54)
  • What can you do to help? (01:05:10)
  • Important considerations if you work on this problem (01:13:00)

11 Apr 2025 | Guilt, imposter syndrome & doing good: 16 past guests share their mental health journeys | 01:47:10

"We are aiming for a place where we can decouple the scorecard from our worthiness. It’s of course the case that in trying to optimise the good, we will always be falling short. The question is how much, and in what ways are we not there yet? And if we then extrapolate that to how much and in what ways am I not enough, that’s where we run into trouble." —Hannah Boettcher

What happens when your desire to do good starts to undermine your own wellbeing?

Over the years, we’ve heard from therapists, charity directors, researchers, psychologists, and career advisors — all wrestling with how to do good without falling apart. Today’s episode brings together insights from 16 past guests on the emotional and psychological costs of pursuing a high-impact career to improve the world — and how to best navigate the all-too-common guilt, burnout, perfectionism, and imposter syndrome along the way.

Check out the full transcript and links to learn more: https://80k.info/mh

If you’re dealing with your own mental health concerns, here are some resources that might help:

Chapters:

  • Cold open (00:00:00)
  • Luisa's intro (00:01:32)
  • 80,000 Hours’ former CEO Howie on what his anxiety and self-doubt feels like (00:03:47)
  • Evolutionary psychiatrist Randy Nesse on what emotions are for (00:07:35)
  • Therapist Hannah Boettcher on how striving for impact can affect our self-worth (00:13:45)
  • Luisa Rodriguez on grieving the gap between who you are and who you wish you were (00:16:57)
  • Charity director Cameron Meyer Shorb on managing work-related guilt and shame (00:24:01)
  • Therapist Tim LeBon on aiming for excellence rather than perfection (00:29:18)
  • Author Cal Newport on making time to be alone with our thoughts (00:36:03)
  • 80,000 Hours career advisors Michelle Hutchinson and Habiba Islam on prioritising mental health over career impact (00:40:28)
  • Charity founder Sarah Eustis-Guthrie on the ups and downs of founding an organisation (00:45:52)
  • Our World in Data researcher Hannah Ritchie on feeling like an imposter as a generalist (00:51:28)
  • Moral philosopher Will MacAskill on being proactive about mental health and preventing burnout (01:00:46)
  • Grantmaker Ajeya Cotra on the psychological toll of big open-ended research questions (01:11:00)
  • Researcher and grantmaker Christian Ruhl on how having a stutter affects him personally and professionally (01:19:30)
  • Mercy For Animals’ CEO Leah Garcés on insisting on self-care when doing difficult work (01:32:39)
  • 80,000 Hours’ former CEO Howie on balancing a job and mental illness (01:37:12)
  • Therapist Hannah Boettcher on how self-compassion isn’t self-indulgence (01:40:39)
  • Journalist Kelsey Piper on communicating about mental health in ways that resonate (01:43:32)
  • Luisa's outro (01:46:10)

Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Katy Moore and Milo McGuire
Transcriptions and web: Katy Moore

31 Dec 2019 | #17 Classic episode - Will MacAskill on moral uncertainty, utilitarianism & how to avoid being a moral monster | 01:52:39

Rebroadcast: this episode was originally released in January 2018.

Immanuel Kant is a profoundly influential figure in modern philosophy, and was one of the earliest proponents for universal democracy and international cooperation. He also thought that women have no place in civil society, that it was okay to kill illegitimate children, and that there was a ranking in the moral worth of different races.

Throughout history we’ve consistently believed, as common sense, truly horrifying things by today’s standards. According to University of Oxford Professor Will MacAskill, it’s extremely likely that we’re in the same boat today. If we accept that we’re probably making major moral errors, how should we proceed?

Full transcript, key points & links to articles discussed in the show.

If our morality is tied to common sense intuitions, we’re probably just preserving these biases and moral errors. Instead we need to develop a moral view that criticises common sense intuitions, and gives us a chance to move beyond them. And if humanity is going to spread to the stars it could be worth dedicating hundreds or thousands of years to moral reflection, lest we spread our errors far and wide.

Will is an Associate Professor in Philosophy at Oxford University, author of Doing Good Better, and one of the co-founders of the effective altruism (EA) community. In this interview we discuss a wide range of topics:

• How would we go about a ‘long reflection’ to fix our moral errors?
• Will’s forthcoming book on how you should reason and act if you don't know which moral theory is correct. What are the practical implications of so-called ‘moral uncertainty’?
• If we basically solve existential risks, what does humanity do next?
• What are some of Will’s most unusual philosophical positions?
• What are the best arguments for and against utilitarianism?
• Given disagreements among philosophers, how much should we believe the findings of philosophy as a field?
• What are some of the biases we should be aware of within academia?
• What are some of the downsides of becoming a professor?
• What are the merits of becoming a philosopher?
• How does the media image of EA differ from the actual goals of the community?
• What kinds of things would you like to see the EA community do differently?
• How much should we explore potentially controversial ideas?
• How focused should we be on diversity?
• What are the best arguments against effective altruism?

Get this episode by subscribing: type '80,000 Hours' into your podcasting app.

 The 80,000 Hours Podcast is produced by Keiran Harris.

Improve your understanding of 80,000 Hours Podcast with My Podcast Data

At My Podcast Data, we strive to provide in-depth, data-driven analysis. Whether you're a passionate listener, a podcast creator, or an advertiser, the detailed statistics and analyses we offer can help you better understand the performance and trends of 80,000 Hours Podcast. From episode frequency to shared links to RSS feed health, our goal is to give you the knowledge you need to stay up to date. Explore more shows and discover the data driving the podcast industry.
© My Podcast Data