
Explore every episode of Data Skeptic

Dive into the complete list of Data Skeptic episodes. Each episode is cataloged with a detailed description, making it easy to search for and explore specific topics. Keep up with every episode of your favorite podcast and never miss any relevant content.


Date | Title | Duration

15 Jun 2019 | Under Resourced Languages | 00:16:47

Priyanka Biswas joins us in this episode to discuss natural language processing for languages that do not have as many resources as those that are more commonly studied, such as English. Successful NLP projects benefit from the availability of resources like large corpora, well-annotated corpora, software libraries, and pre-trained models. For languages that researchers have not paid as much attention to, these tools are not always available.

21 Sep 2020 | Crowdsourced Expertise | 00:27:50
19 Aug 2016 | Trusting Machine Learning Models with LIME | 00:35:16

Machine learning models are often criticized for being black boxes. If a human cannot determine why the model arrives at the decision it made, there's good cause for skepticism. Classic inspection approaches to model interpretability are only useful for simple models, which are likely to only cover simple problems.

The LIME project seeks to help us trust machine learning models. At a high level, it takes advantage of local fidelity. For a given example, a separate model trained on neighbors of that example is likely to surface the relevant features in the local input space, revealing details about why the model arrives at its conclusion.

In this episode, Marco Tulio Ribeiro joins us to discuss how LIME (Local Interpretable Model-agnostic Explanations) can help users trust machine learning models. The accompanying paper is titled "Why Should I Trust You?": Explaining the Predictions of Any Classifier.
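
For intuition, here is a minimal sketch of the local-surrogate idea in Python. It is illustrative only, not the actual lime package; the black-box model, perturbation scale, and data are toy stand-ins.

    # Sketch of the local-surrogate idea behind LIME (illustrative).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import Ridge

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    black_box = RandomForestClassifier(random_state=0).fit(X, y)

    x0 = X[0]  # the instance whose prediction we want to explain
    # 1. Sample the neighborhood of x0 by perturbing it.
    neighbors = x0 + np.random.normal(scale=0.3, size=(200, X.shape[1]))
    # 2. Ask the black box for predictions on those neighbors.
    probs = black_box.predict_proba(neighbors)[:, 1]
    # 3. Weight neighbors by proximity, so local behavior dominates.
    weights = np.exp(-np.linalg.norm(neighbors - x0, axis=1) ** 2)
    # 4. Fit a simple, interpretable model locally; its coefficients
    #    indicate which features drive the prediction near x0.
    surrogate = Ridge().fit(neighbors, probs, sample_weight=weights)
    print(surrogate.coef_.round(3))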

12 Dec 2023 | LLMs for Data Analysis | 00:29:00

In this episode, we are joined by Amir Netz, a Technical Fellow at Microsoft and the CTO of Microsoft Fabric. He discusses how companies can use Microsoft's latest tools for business intelligence.

Amir started by discussing how business intelligence has grown in relevance over the years and gave a brief introduction to Power BI and Fabric. He also discussed how Fabric distinguishes itself from other BI tools by providing an end-to-end tool for the data journey.

Amir spoke about the process of building and deploying machine learning models with Microsoft Fabric. He shared the difference between Software as a Service (SaaS) and Platform as a Service (PaaS).

Amir discussed the benefits of Fabric's auto-integration and auto-optimization abilities, the capabilities of Copilot in Fabric, and exciting future developments planned for the platform. He also shared techniques for limiting Copilot hallucination.

17 Oct 2021 | Fast and Frugal Time Series Forecasting | 00:37:30

Fotios Petropoulos, Professor of Management Science at the University of Bath in the UK, joins us today to talk about his work "Fast and Frugal Time Series Forecasting."

24 Aug 2020 | False Consensus | 00:33:06

Sami Yousif joins us to discuss the paper The Illusion of Consensus: A Failure to Distinguish Between True and False Consensus. This work empirically explores how individuals evaluate consensus under different experimental conditions while reviewing online news articles.

More from Sami at samiyousif.org

Link to survey mentioned by Daniel Kerrigan: https://forms.gle/TCdGem3WTUYEP31B8

15 Mar 2021 | Benchmarking Vision on Edge vs Cloud | 00:47:53

Karthick Shankar, Masters Student at Carnegie Mellon University, and Somali Chaterji, Assistant Professor at Purdue University, join us today to discuss the paper "JANUS: Benchmarking Commercial and Open-Source Cloud and Edge Platforms for Object and Anomaly Detection Workloads"

Works Mentioned:

https://ieeexplore.ieee.org/abstract/document/9284314
“JANUS: Benchmarking Commercial and Open-Source Cloud and Edge Platforms for Object and Anomaly Detection Workloads.”

by: Karthick Shankar, Pengcheng Wang, Ran Xu, Ashraf Mahgoub, Somali Chaterji

Social Media

Karthick Shankar
https://twitter.com/karthick_sh

Somali Chaterji
https://twitter.com/somalichaterji?lang=en
https://schaterji.io/

06 Mar 2023 | Bot Detection and Dyadic Surveys | 00:35:24

The use of social bots to fill out online surveys is becoming prevalent. Today, we speak with Sara Bybee, a postdoctoral research scholar at the University of Utah. Drawing on her research, Sara shares how she detected social bots, strategies to curb them, and how underrepresented groups can be better represented in surveys.

29 Sep 2017 | Generative AI for Content Creation | 00:34:33

Last year, the film development and production company End Cue produced a short film, called Sunspring, that was entirely written by an artificial intelligence using neural networks. More specifically, it was authored by a type of recurrent neural network (RNN) called a long short-term memory (LSTM) network. According to End Cue's Chief Technical Officer, Deb Ray, the company has come a long way in improving the generative AI aspect of the bot. In this episode, Deb Ray joins host Kyle Polich to discuss how generative AI models are being applied in creative processes, such as screenwriting. Their discussion also explores how data science can inform development decisions, such as financing and selecting scripts, as well as optimize the content production process.

20 Feb 2015 | [MINI] k-means clustering | 00:14:20

The k-means clustering algorithm computes a deterministic label for each point, given a number of clusters "k", from an n-dimensional dataset. This mini-episode explores how the biological processes of Yoshi, our lilac-crowned amazon, might be a useful way of measuring where she sits when there are no humans around. Listen to find out how!
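
For reference, here is a minimal sketch of the algorithm (Lloyd's iteration) in plain NumPy; the data and choice of k are toy examples.

    # Minimal k-means sketch: alternate assignment and center updates.
    import numpy as np

    def kmeans(X, k, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]  # init from data
        for _ in range(iters):
            # Assign each point to its nearest center.
            labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=2), axis=1)
            # Move each center to the mean of its assigned points.
            centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        return labels, centers

    X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
    labels, centers = kmeans(X, k=2)  # two well-separated blobs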

13 Feb 2015 | Shadow Profiles on Social Networks | 00:38:37

Emre Sarigol joins me this week to discuss his paper Online Privacy as a Collective Phenomenon. This paper studies data collected from social networks and how the sharing behaviors of individuals can unintentionally reveal private information about other people, including those that have not even joined the social network! For the specific test discussed, the researchers were able to accurately predict the sexual orientation of individuals, even when this information was withheld during the training of their algorithm.

The research produces a surprisingly accurate predictor of this private piece of information, and was constructed only with publicly available data from myspace.com found on archive.org. As Emre points out, this is a small shadow of the potential information available to modern social networks. For example, users that install the Facebook app on their mobile phones are (perhaps unknowingly) sharing all their phone contacts. Should a social network like Facebook choose to do so, this information could be aggregated to assemble "shadow profiles" containing rich data on users who may not even have an account.

22 Jan 2021 | Gerrymandering | 00:34:09

Brian Brubach, Assistant Professor in the Computer Science Department at Wellesley College, joins us today to discuss his work "Meddling Metrics: the Effects of Measuring and Constraining Partisan Gerrymandering on Voter Incentives".

WORKS MENTIONED:
Meddling Metrics: the Effects of Measuring and Constraining Partisan Gerrymandering on Voter Incentives
by Brian Brubach, Aravind Srinivasan, and Shawn Zhao

17 Mar 2025 | Criminal Networks | 00:43:35

In this episode we talk with Justin Wang Ngai Yeung, a PhD candidate at the Network Science Institute at Northeastern University in London, who explores how network science helps uncover criminal networks.

Justin is also a member of the organizing committee of the satellite conference dealing with criminal networks at the network science conference in The Netherlands in June 2025.

Listeners will learn how graph-based models assist law enforcement in analyzing missing data, identifying key figures in criminal organizations, and improving intervention strategies.

Key insights include the challenges of incomplete and inaccurate data in criminal network analysis, how law enforcement agencies use network dismantling techniques to disrupt organized crime, and the role of machine learning in predicting hidden connections within illicit networks.

 

-------------------------------

Want to listen ad-free? Try our Graphs Course? Join Data Skeptic+ for $5 / month or $50 / year

https://plus.dataskeptic.com

27 Jul 2018 | Spam Filtering with Naive Bayes | 00:19:45

Today's spam filters are advanced, data-driven tools. They rely on a variety of techniques to effectively and often seamlessly filter out junk email from good email.

Whitelists, blacklists, traffic analysis, network analysis, and a variety of other tools are probably employed by most major players in this area. Naturally content analysis can be an especially powerful tool for detecting spam.

Given the binary nature of the problem (spam or not spam), it's clear that this is a great problem to solve with machine learning. In order to apply machine learning, you first need a labeled training set. Thankfully, many standard corpora of labeled spam data are readily available. Further, if you're working for a company with a spam filtering problem, asking users to self-moderate or flag things as spam can often be an effective way to generate a large number of labels for "free".

With a labeled dataset in hand, a data scientist working on spam filtering must next do feature engineering. This should be done with consideration of the algorithm that will be used. The naive Bayesian classifier has been a popular choice for detecting spam because it tends to perform well on high-dimensional data, unlike many other ML algorithms. It is also very efficient to compute, making it possible to train a per-user classifier if one wished to. While we might do some basic NLP tricks, for the most part we can turn each word in a document (or perhaps each bigram or n-gram in a document) into a feature.

The "naive" part of the naive Bayesian classifier stems from the naive assumption that all features in one's analysis are considered to be independent. If x and y are known to be independent, then Pr(x ∩ y) = Pr(x) · Pr(y). In other words, you just multiply the probabilities together. Shh, don't tell anyone, but this assumption is actually wrong! Certainly, if a document contains the word "algorithm", it's more likely to contain the word "probability" than some randomly selected document. Thus, Pr(algorithm ∩ probability) > Pr(algorithm) · Pr(probability), violating the assumption. Despite this "flaw", the naive Bayesian classifier works remarkably well on many problems. If one employs the common approach of converting a document into bigrams (pairs of words instead of single words), then you can capture a good deal of this correlation indirectly.
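
A minimal sketch of such a classifier using scikit-learn, with a tiny toy corpus standing in for a real labeled spam dataset:

    # Naive Bayes spam filter sketch: word/bigram counts as features.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    docs = ["win money now", "cheap pills online", "meeting at noon",
            "lunch tomorrow?", "win a free trip", "project update attached"]
    labels = [1, 1, 0, 0, 1, 0]  # 1 = spam, 0 = ham

    # Turn each word and bigram into a feature, as discussed above.
    vec = CountVectorizer(ngram_range=(1, 2))
    X = vec.fit_transform(docs)

    clf = MultinomialNB().fit(X, labels)
    print(clf.predict(vec.transform(["win a free trip now"])))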

In the final leg of the discussion, we explore the question of whether or not a Naive Bayesian Classifier would be a good choice for detecting fake news.

 
11 Jun 2021 | Detecting Drift | 00:27:19

Sam Ackerman, Research Data Scientist at IBM Research Labs in Haifa, Israel, joins us today to talk about his work Detection of Data Drift and Outliers Affecting Machine Learning Model Performance Over Time.

Check out Sam's IBM statistics/ML blog at: http://www.research.ibm.com/haifa/dept/vst/ML-QA.shtml
 
07 Jul 2014 | The Right (big data) Tool for the Job with Jay Shankar | 00:49:59

In this week's episode, we discuss applied solutions to big data problems with big data engineer Jay Shankar. The episode explores approaches and design philosophies for solving real-world big data business problems, and the wide array of tools available.

 

22 May 2015 | Detecting Cheating in Chess | 00:44:35

With the advent of algorithms capable of beating highly ranked chess players, the temptation to cheat has emerged as a potential threat to the integrity of this ancient and complex game. Yet there are aspects of computer play that are measurably different from human play. Dr. Kenneth Regan has developed a methodology for looking at a long series of moves and measuring the likelihood that the moves may have been selected by an algorithm.

The full transcript of this episode is well annotated and has a wealth of excellent links to the things discussed.

If you're interested in learning more about Dr. Regan, his homepage (Kenneth Regan), his page on wikispaces, and the amazon page of books by Kenneth W. Regan are all great resources.

07 Sep 2018 | Quality Score | 00:18:55

Two weeks ago we discussed click-through rates (CTRs) and their usefulness and limits as a metric. Today, we discuss a related metric known as quality score.

While that phrase has probably been used to mean dozens of different things in different contexts, our discussion focuses on the idea of quality score encountered in Search Engine Marketing (SEM). SEM is the practice of purchasing keyword-targeted ads shown to customers using a search engine.

Most SEM is managed via an auction mechanism - the advertiser states the price they are willing to pay, and in real time, the search engine will serve users advertisements and charge the advertiser.

But how do search engines decide whom to show and what price to charge? This is a complicated question requiring a multi-part answer to address completely. In this episode, we focus on one part of that equation: the quality score the search engine assigns to the ad in context. This quality score is calculated from several factors, including crawling the destination page (also called the landing page) and predicting how applicable the content found there is to the ad itself.

03 Aug 2018 | Human Detection of Fake News | 00:28:27

With publications such as "Prior exposure increases perceived accuracy of fake news", "Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning", and "The science of fake news", Gordon Pennycook is asking and answering analytical questions about the nature of human intuition and fake news.

Gordon appeared on Data Skeptic in 2016 to discuss people's ability to recognize pseudo-profound bullshit.  This episode explores his work in fake news.

10 Sep 2024 | Bird Distribution Modeling with Satbird | 00:39:31

This episode features an interview with Mélisande Teng, a PhD candidate at Université de Montréal. Her research lies at the intersection of remote sensing and computer vision for biodiversity monitoring.

27 May 2024 | Evaluating AI Abilities | 00:49:40

In this episode, Kozzy discusses his endeavors to compare the cognitive abilities of humans, animals, and AI programs. Specifically, we discussed object permanence: the ability to understand that an object still exists in space even when you can't see it. Our conversation traverses both philosophical and practical questions surrounding AI evaluation. We also learned about Animal AI 3, a gaming environment developed in Unity where AI programs and humans can go head-to-head to solve different problems.

12 Dec 2014 | [MINI] The Battle of the Sexes | 00:18:04

Love and Data is the continued theme in this mini-episode as we discuss the game theory example of The Battle of the Sexes. In this textbook example, a couple must strategize about how to spend their Friday night. One partner prefers football games while the other partner prefers to attend the opera. Yet, each person would rather be at their non-preferred location so long as they are still with their spouse. So where should they decide to go?

23 Oct 2023 | arXiv Publication Patterns | 00:28:24

Today, we are joined by Rajiv Movva, a PhD student in Computer Science at Cornell Tech. His research interest lies at the intersection of responsible AI and computational social science. He joins us to discuss the findings of his work analyzing LLM publication patterns.

He shared the dataset he used for the survey and discussed the conditions for determining which papers to analyze. Rajiv shared some of the trends he observed from his analysis. For one, he observed that there has been an increase in LLM research. He also shared the proportions of papers published by universities, organizations, and industry leaders in LLMs such as OpenAI and Google. He mentioned that the majority of the papers are centered on the social impact of LLMs, and he also discussed other exciting applications of LLMs, such as in education.

26 Dec 2022 | Crowdfunded Board Games | 00:34:31

It may be intuitive to think crowdfunding a project drives its innovation and novelty, but there are no empirical studies that prove this. On the show, Johannes Wachs shares his research that sought to determine whether crowdfunding truly drives innovation. He used board games as a case study and shared the results he found.

03 Jan 2022 | Open Telemetry | 00:36:18

John Watson, Principal Software Engineer at Splunk, joins us today to talk about Splunk and OpenTelemetry.

 

14 Mar 2022 | Breathing K-Means | 00:42:55

In this episode, we speak with Bernd Fritzke, a financial expert and data science researcher, about his recent research: the breathing k-means algorithm. Bernd discussed the perks of the algorithm and what makes it stand out from other k-means variations. He extensively discussed its working principle and the subtle but impactful features that enable it to produce top-notch results with low computational resources. Listen to learn about this algorithm.

30 May 2022 | School Reopening Analysis | 00:33:17

Carly Lupton-Smith joins us today to speak about her research, which investigated the consistency between household and county measures of school reopening. Carly is a doctoral researcher in Biostatistics at the Johns Hopkins Bloomberg School of Public Health. Listen to learn about her findings.

Click here for additional show notes on our website!

Thanks to our sponsor!
ClearML is an open-source MLOps solution users love to customize, helping you easily Track, Orchestrate, and Automate ML workflows at scale.

Astera Centerprise is a no-code data integration platform that allows users to build ETL/ELT pipelines for modern data warehousing and analytics.

 

01 May 2015 | The Ghost in the MP3 | 00:35:22

Have you ever wondered what is lost when you compress a song into an MP3? This week's guest Ryan Maguire did more than that. He worked on software to isolate the sounds that are lost when you convert a lossless digital audio recording into a compressed MP3 file.

To complete his project, Ryan worked primarily in Python using the pyo library as well as the Bregman Toolkit.

Ryan mentioned humans having a dynamic range of hearing from 20 Hz to 20,000 Hz; if you'd like to hear those tones, check the previous link.

If you'd like to know more about our guest Ryan Maguire, you can find his website at the previous link. To follow The Ghost in the MP3 project, please check out their Facebook page, or the site theghostinthemp3.com.

A PDF of Ryan's publication quality write up can be found at this link: The Ghost in the MP3 and it is definitely worth the read if you'd like to know more of the technical details.

13 Feb 2023 | A Survey of Data Science Methodologies | 00:24:58

On the show, Iñigo Martinez, a Ph.D. student at the University of Navarra, shares his survey results, which investigated how data practitioners perform data science projects. He revealed the methodologies typically used by data practitioners and the success factors in data science projects.

14 Apr 2017 | [MINI] GPU CPU | 00:11:03

There's more than one type of computer processor. The central processing unit (CPU) is typically what one means when they say "processor". GPUs were introduced to be highly optimized for doing floating point computations in parallel. These types of operations were very useful for high end video games, but as it turns out, those same processors are extremely useful for machine learning. In this mini-episode we discuss why.

01 Dec 2019 | Ancient Text Restoration | 00:41:13

Thea Sommerschield joins us this week to discuss the development of Pythia - a machine learning model trained to assist in the reconstruction of ancient language text.

20 Oct 2017 | The Complexity of Learning Neural Networks | 00:38:51

Over the past several years, we have seen many success stories in machine learning brought about by deep learning techniques. While the practical success of deep learning has been phenomenal, the formal guarantees have been lacking. Our current theoretical understanding of the many techniques central to the ongoing big-data revolution is, at best, far from sufficient for rigorous analysis. In this episode of Data Skeptic, host Kyle Polich welcomes guest John Wilmes, a mathematics post-doctoral researcher at Georgia Tech, to discuss the efficiency of neural network learning through the lens of complexity theory.

24 Apr 2020 | Plastic Bag Bans | 00:34:51

Becca Taylor joins us to discuss her work studying the impact of plastic bag bans as published in Bag Leakage: The Effect of Disposable Carryout Bag Regulations on Unregulated Bags from the Journal of Environmental Economics and Management. How does one measure the impact of these bans? Are they achieving their intended goals? Join us and find out!

21 Jul 2017 | [MINI] Conditional Independence | 00:14:43

In statistics, two random variables might depend on one another (for example, interest rates and new home purchases). We call this conditional dependence. An important related concept exists called conditional independence. This phrase describes situations in which two variables are independent of one another given some other variable.

For example, the probability that a vendor will pay their bill on time could depend on many factors such as the company's market cap. Thus, a statistical analysis would reveal many relationships between observable details about the company and their propensity for paying on time. However, if you know that the company has filed for bankruptcy, then we might assume their chances of paying on time have dropped to near 0, and the result is now independent of all other factors in light of this new information.
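
A quick numerical sketch of the idea (the probabilities below are made up for illustration): X and Y are both driven by Z, so they look dependent until you condition on Z.

    # Conditional independence demo: X, Y dependent marginally,
    # independent once Z is known.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    z = rng.integers(0, 2, n)                       # e.g. "bankrupt or not"
    x = (rng.random(n) < np.where(z == 1, 0.9, 0.2)).astype(int)
    y = (rng.random(n) < np.where(z == 1, 0.8, 0.3)).astype(int)

    def p(a):  # empirical probability
        return a.mean()

    print(p(x & y), p(x) * p(y))                    # differ: dependent
    m = z == 1                                      # condition on Z = 1
    print(p(x[m] & y[m]), p(x[m]) * p(y[m]))        # match: independent given Z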

We discuss a few real world analogies to this idea in the context of some chance meetings on our recent trip to New York City.

08 Nov 2021 | Change Point Detection Algorithms | 00:30:49

Gerrit van den Burg, Postdoctoral Researcher at The Alan Turing Institute, joins us today to discuss his work "An Evaluation of Change Point Detection Algorithms."

10 Feb 2017 | [MINI] Primer on Deep Learning | 00:14:28

In this episode, we talk about a high-level description of deep learning. Kyle presents a simple game, which is more of a puzzle really, to try to give Linh Da the basic concept.


Thanks to our sponsor for this week, the Data Science Association. Please check out their upcoming Dallas conference at dallasdatascience.eventbrite.com

22 Dec 2017 | Holiday reading 2017 | 00:12:38

We break format from our regular programming today and bring you an excerpt from Max Tegmark's book "Life 3.0".  The first chapter is a short story titled "The Tale of the Omega Team".  Audio excerpted courtesy of Penguin Random House Audio from LIFE 3.0 by Max Tegmark, narrated by Rob Shapiro.  You can find "Life 3.0" at your favorite bookstore and the audio edition via penguinrandomhouseaudio.com.

Kyle will be giving a talk at the Monterey County SkeptiCamp 2018.

17 Nov 2017 | P vs NP | 00:38:48

In this week's episode, host Kyle Polich interviews author Lance Fortnow about whether P will ever be equal to NP and solve all of life’s problems. Fortnow begins the discussion with the example question: Are there 100 people on Facebook who are all friends with each other? Even if you were an employee of Facebook and had access to all its data, answering this question naively would require checking more possibilities than any computer, now or in the future, could possibly do. The P/NP question asks whether there exists a more clever and faster algorithm that can answer this problem and others like it.

21 Jun 2019 | Facebook Bargaining Bots Invented a Language | 00:23:08

In 2017, Facebook published a paper called Deal or No Deal? End-to-End Learning for Negotiation Dialogues. In this research, the reinforcement learning agents developed a mechanism of communication (which could be called a language) that made them able to optimize their scores in the negotiation game. Many media sources reported this as if it were a first step towards Skynet taking over. In this episode, Kyle discusses bargaining agents and the actual results of this research.

23 Jan 2023 | Conversational Surveys | 00:39:49

Traditional surveys pose rigid, straitjacketed questions, restricting the information that can be gathered. Today, Ziang Xiao, a Postdoc Researcher in the FATE group at Microsoft Research Montréal, talks about conversational surveys, a type of survey that asks questions based on preceding answers. He discussed the benefits of conversational surveys and some of the challenges they pose.

27 Nov 2015 | [MINI] The Accuracy Paradox | 00:17:04

Today's episode discusses the accuracy paradox. There are cases when one might prefer a less accurate model because it yields more predictive power or better captures the underlying causal factors describing the outcome variable you are interested in. This is especially relevant in machine learning when trying to predict rare events: for example, if only 1% of people own a bird, a model that always predicts "not a bird owner" is 99% accurate yet completely uninformative. We discuss how the accuracy paradox might apply if you were trying to predict the likelihood a person was a bird owner.

20 Feb 2023 | Reproducible ESP Testing | 00:47:10

Our guest today is Zoltán Kekecs, a Ph.D. holder in Behavioural Science. Zoltán highlights the problem of low replicability in journal papers and illustrates how researchers can better ensure complete replication of their research and findings. He used Bem’s experiment as an example, extensively talking about his methodology and results.

11 Jan 2021 | Consecutive Votes in Paxos | 00:30:11

Eli Goldweber, a graduate student at the University of Michigan, comes on today to share his work applying formal verification to systems, and a modification to the Paxos protocol discussed in the paper On the Significance of Consecutive Ballots in Paxos.

Works Mentioned:
Previous Episode on Paxos 
https://dataskeptic.com/blog/episodes/2020/distributed-consensus

Paper:
On the Significance of Consecutive Ballots in Paxos by: Eli Goldweber, Nuda Zhang, and Manos Kapritsos

Thanks to our sponsor:
NordVPN: 68% off a 2-year plan and one month free! With NordVPN, all the data you send and receive online travels through an encrypted tunnel. This way, no one can get their hands on your private information. NordVPN is quick and easy to use to protect the privacy and security of your data. Check them out at nordvpn.com/dataskeptic

29 Oct 2024 | Graphs for HPC and LLMs | 00:52:08

We are joined by Maciej Besta, a senior researcher of sparse graph computations and large language models at the Scalable Parallel Computing Lab (SPCL). In this episode, we explore the intersection of graph theory and high-performance computing (HPC), Graph Neural Networks (GNNs) and LLMs.

13 Nov 2020 | Sybil Attacks on Federated Learning | 00:31:32

Clement Fung, a Societal Computing PhD student at Carnegie Mellon University, discusses his research in security of machine learning systems and a defense against targeted sybil-based poisoning called FoolsGold.

Works Mentioned:
The Limitations of Federated Learning in Sybil Settings

Twitter:

@clemfung

Website:
https://clementfung.github.io/

Thanks to our sponsors:

Brilliant - Online learning platform. Check out Geometry Fundamentals! Visit Brilliant.org/dataskeptic for 20% off Brilliant Premium!


BetterHelp - Convenient, professional, and affordable online counseling. Take 10% off your first month at betterhelp.com/dataskeptic

12 Feb 2016 | Scientific Studies of People's Relationship to Music | 00:42:14

Samuel Mehr joins us this week to share his perspective on why people are musical, where music comes from, and why it works the way it does. We discuss a number of empirical studies related to music and musical cognition, and dispense a few myths about music along the way.

Some of Sam's work discussed in this episode includes Music in the Home: New Evidence for an Intergenerational Link, Two randomized trials provide no consistent evidence for nonmusical cognitive benefits of brief preschool music enrichment, and Miscommunication of science: music cognition research in the popular press. Additional topics we discussed are also covered in a Harvard Gazette article featuring Sam titled Muting the Mozart effect.

You can follow Sam on twitter via @samuelmehr.

16 Dec 2024 | Customizing a Graph Solution | 00:38:07

In this episode, Dave Bechberger, principal Graph Architect at AWS and author of "Graph Databases in Action", brings deep insights into the field of graph databases and their applications.


Together we delve into specific scenarios in which Graph Databases provide unique solutions, such as in the fraud industry, and learn how to optimize our DB for questions around connections, such as "How are these entities related?" or "What patterns of interaction indicate anomalies?"

This discussion sheds light on when organizations should consider adopting graph databases, particularly for cases that require scalable analysis of highly interconnected data and provides practical insights into leveraging graph databases for performance improvements in tasks that traditional relational databases struggle with.

27 Dec 2021 | Fashion Predictions | 00:34:42

Yusan Lin, a Research Scientist at Visa Research, comes on today to talk about her work "Predicting Next-Season Designs on High Fashion Runway."

24 Jan 2022 | Energy Forecasting Pipelines | 00:43:21

Erin Boyle, the Head of Data Science at Myst AI, joins us today to talk about her work with Myst AI, a time series forecasting platform and service with the objective of positively impacting sustainability.

https://docs.myst.ai/docs
Visit Weights and Biases at wandb.me/dataskeptic
Find Better Data Faster with Nomad Data. Visit nomad-data.com

22 Feb 2020 | Mathematical Models of Ecological Systems | 00:36:42
31 Dec 2019 | NLP in 2019 | 00:38:43

A recap of the year.

15 Aug 2022 | Adwords with Unknown Budgets | 00:34:09

Rajan Udwani, an Assistant Professor at the University of California, Berkeley, joins us to discuss his work on AdWords with unknown budgets. He discussed previous approaches to ad allocation, as well as his own approach, which introduced randomization for better results. Listen for more.

15 Jan 2016 | Detecting Pseudo-profound BS | 00:37:37

A recent paper in the journal Judgment and Decision Making titled On the reception and detection of pseudo-profound bullshit explores empirical questions around a reader's ability to detect statements which may sound profound but are actually a collection of buzzwords that fail to contain adequate meaning or truth. These statements are definitively different from lies and nonsense, as we discuss in the episode.

This paper proposes the Bullshit Receptivity scale (BSR) and empirically demonstrates that it correlates with existing metrics like the Cognitive Reflection Test, building confidence that this can be a useful, repeatable, empirical measure of a person's ability to detect pseudo-profound statements as being different from genuinely profound statements. Additionally, the correlative results provide some insight into possible root causes for why individuals might find great profundity in these statements based on other beliefs or cognitive measures.

The paper's lead author Gordon Pennycook joins me to discuss this study's results.

If you'd like some examples of pseudo-profound bullshit, you can randomly generate some based on Deepak Chopra's twitter feed.

To read other work from Gordon, check out his Google Scholar page and find him on twitter via @GordonPennycook.

And just for fun, if you think you've dreamed up a Data Skeptic related pseudo-profound bullshit statement, tweet it with hashtag #pseudoprofound. If I see an especially clever or humorous one, I might want to send you a free Data Skeptic sticker.

 
25 Jul 2014 | [MINI] Cross Validation

This mini-episode discusses the technique called cross validation: a process by which one randomly divides a dataset into numerous small partitions. Next, (typically) one is held out, and the rest are used to train a model. The held-out set can then be used to validate how well the model describes/predicts new data.
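
A minimal sketch of the procedure with scikit-learn (the dataset and model are arbitrary examples):

    # k-fold cross validation: each fold is held out once while the
    # remaining folds train the model, exactly as described above.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000)

    scores = cross_val_score(model, X, y, cv=5)  # 5 partitions
    print(scores, scores.mean())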

29 Jan 2025 | Auditing LLMs and Twitter | 00:40:26

Our guests, Erwan Le Merrer and Gilles Tredan, are long-time collaborators in graph theory and distributed systems. They share their expertise on applying graph-based approaches to understanding both large language model (LLM) hallucinations and shadow banning on social media platforms.

In this episode, listeners will learn how graph structures and metrics can reveal patterns in algorithmic behavior and platform moderation practices.

Key insights include the use of graph theory to evaluate LLM outputs, uncovering patterns in hallucinated graphs that might hint at the underlying structure and training data of the models, and applying epidemic models to analyze the uneven spread of shadow banning on Twitter.

-------------------------------

Want to listen ad-free? Try our Graphs Course? Join Data Skeptic+ for $5 / month or $50 / year

https://plus.dataskeptic.com

18 Dec 2015 | Wikipedia Revision Scoring as a Service | 00:42:56

In this interview with Aaron Halfaker of the Wikimedia Foundation, we discuss his research and career related to the study of Wikipedia. In his paper The Rise and Decline of an Open Collaboration Community, he highlights a trend of a declining rate of active editors on Wikipedia which began in 2007. I asked Aaron about a variety of possible hypotheses for the phenomenon, in particular how automated quality control tools that revert edits automatically could play a role. This led Aaron and his collaborators to develop Snuggle, an optimized interface to help Wikipedians better welcome newcomers to the community.

We discuss the details of these topics as well as ORES, which provides revision scoring as a service to any software developer that wants to consume the output of their machine learning based scoring.

You can find Aaron on Twitter as @halfak.

01 May 2022 | Does Remote Learning Work? | 00:48:10
We explore this complex question in two interviews today. First, Kasey Wagoner describes three approaches to remote lab sessions and an analysis of which was most instrumental for students. Second, Tahiya Chowdhury shares insights about the specific features of video-conferencing platforms that are lacking in comparison to in-person learning.

Click here for additional show notes on our website!

Thanks to our sponsor!
ClearML is an open-source MLOps solution users love to customize, helping you easily Track, Orchestrate, and Automate ML workflows at scale.

 

18 Jul 2023 | A Long Way Till AGI | 00:37:27

Our guest today is Maciej Świechowski. Maciej is affiliated with QED Software and QED Games. He has a Ph.D. in Systems Research from the Polish Academy of Sciences. Maciej joins us to discuss findings from his study, Deep Learning and Artificial General Intelligence: Still a Long Way to Go.

01 Mar 2019 | seq2seq | 00:21:41

A sequence-to-sequence (or seq2seq) model is a neural architecture used for translation (and other tasks) which consists of an encoder and a decoder.

The encoder/decoder architecture has obvious promise for machine translation, and has been successfully applied this way. Encoding an input to a small number of hidden nodes which can effectively be decoded to a matching string requires machine learning to learn an efficient representation of the essence of the strings.

In addition to translation, seq2seq models have been used in a number of other NLP tasks such as summarization and image captioning.
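
A bare-bones sketch of the encoder/decoder shape in PyTorch; the dimensions, vocabulary sizes, and data are illustrative placeholders, not the architecture of any particular paper.

    # Minimal seq2seq: the encoder compresses the source sequence into a
    # hidden state, which initializes the decoder that emits the target.
    import torch
    import torch.nn as nn

    class Seq2Seq(nn.Module):
        def __init__(self, src_vocab=100, tgt_vocab=100, hidden=64):
            super().__init__()
            self.src_emb = nn.Embedding(src_vocab, hidden)
            self.tgt_emb = nn.Embedding(tgt_vocab, hidden)
            self.encoder = nn.GRU(hidden, hidden, batch_first=True)
            self.decoder = nn.GRU(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, tgt_vocab)

        def forward(self, src, tgt):
            _, state = self.encoder(self.src_emb(src))      # encode
            dec_out, _ = self.decoder(self.tgt_emb(tgt), state)  # decode
            return self.out(dec_out)                        # token logits

    model = Seq2Seq()
    src = torch.randint(0, 100, (2, 7))   # batch of 2 source sequences
    tgt = torch.randint(0, 100, (2, 5))
    logits = model(src, tgt)              # shape (2, 5, tgt_vocab)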


02 Feb 2018 | Evolutionary Computation | 00:24:44

In this week's episode, Kyle is joined by Risto Miikkulainen, a professor of computer science and neuroscience at the University of Texas at Austin. They talk about evolutionary computation, its applications in deep learning, and how it's inspired by biology. They also discuss some of the things Sentient Technologies is working on in stocks and finance, retail, e-commerce and web design, as well as the technology behind it: evolutionary algorithms.

03 Oct 2014 | [MINI] Selection Bias | 00:14:31

A discussion about conducting US presidential election polls helps frame a conversation about selection bias.

19 Sep 2022 | The Harms of Targeted Weight Loss Ads | 00:34:57

Liza Gak, a Ph.D. student at UC Berkeley, joins us to discuss her research on harmful weight loss advertising. She discussed how weight loss ads are not fact-checked, and how they typically target the most vulnerable. She extensively discussed her interview process, data analysis, and results. Listen for more!

31 Jan 2022 | Explainable Climate Science | 00:34:50

Zack Labe, a Post-Doctoral Researcher at Colorado State University, joins us today to discuss his work “Detecting Climate Signals using Explainable AI with Single Forcing Large Ensembles.”
Works Mentioned
“Detecting Climate Signals using Explainable AI with Single Forcing Large Ensembles”
by Zachary M. Labe, Elizabeth A. Barnes

Sponsored by:
Astrato
and
BBEdit by Bare Bones Software

17 Jan 2023 | Do Results Generalize for Privacy and Security Surveys | 00:40:21

Today, Jenny Tang, a Ph.D. student of societal computing at Carnegie Mellon University discusses her work on the generalization of privacy and security surveys on platforms such as Amazon MTurk and Prolific. Jenny shared the drawbacks of using such online platforms, the discrepancies observed about the samples drawn, and key insights from her results.

26 Jun 2023 | AGI Can Be Safe | 00:45:57

We are joined by Koen Holtman, an independent AI researcher focusing on AI safety. Koen is the Founder of Holtman Systems Research, a research company based in the Netherlands.

Koen started the conversation with his take on an AI apocalypse in the coming years. He discussed the obedience problem with AI models and the safe form of obedience.

Koen explained the concept of Markov Decision Process (MDP) and how it is used to build machine learning models.

Koen spoke about the problem of AGIs not allowing changes to their utility function after the model is deployed, and shared an alternative approach to solving the problem. He shared how to safely engineer AGI systems now and in the future, and also spoke about how to implement safety layers on AI models.

Koen discussed the ultimate goal of a safe AI system and how to check that an AI system is indeed safe. He discussed the intersection between large language Models (LLMs) and MDPs. He shared the key ingredients to scale the current AI implementations.

09 Jan 2015 | [MINI] Data Provenance | 00:10:56

This episode introduces a high-level discussion on the topic of data provenance, with more MINI episodes to follow to get into specific topics. Thanks to listener Sara L who wrote in to point out that the Data Skeptic Podcast has focused a lot on using data to be skeptical, but not necessarily on being skeptical of data.

Data provenance is the concept of knowing the full origin of your dataset. Where did it come from? Who collected it? How was it collected? Does it combine independent sources or one singular source? What are the error bounds on the way it was measured? These are just some of the questions one should ask to understand their data. After all, if the antecedent of an argument is built on dubious grounds, the consequent of the argument is equally dubious.

For a more technical discussion than what we get into in this mini-episode, I recommend A Survey of Data Provenance Techniques by authors Simmhan, Plale, and Gannon.

28 Nov 2017 | Azure Databricks | 00:28:27

I sat down with Ali Ghodsi, CEO and founder of Databricks, and John Chirapurath, GM for Data Platform Marketing at Microsoft, about the recent announcement of Azure Databricks.

When I heard about the announcement, my first thoughts were two-fold. First, the possibility of optimized integrations with existing Azure services; this would be a big benefit to heavy Azure users who also want to use Spark. Second, the benefits of Active Directory for controlling Databricks access in large enterprises.

Hear Ali and JG's thoughts and comments on what makes Azure Databricks a novel offering.

 

12 Jul 2021 | N-Beats | 00:34:15

Boris Oreshkin (@boreshkin), a Senior Research Scientist at Unity Technologies, joins us today to talk about his work N-BEATS: Neural Basis Expansion Analysis for Interpretable Time Series Forecasting.

Works Mentioned:
N-BEATS: Neural Basis Expansion Analysis for Interpretable Time Series Forecasting
By Boris N. Oreshkin, Dmitri Carpov, Nicolas Chapados, Yoshua Bengio
https://arxiv.org/abs/1905.10437


22 Sep 2017 | [MINI] One Shot Learning | 00:17:39

One-shot learning is the class of machine learning procedures that focuses on learning from a small number of examples. This is in contrast to "traditional" machine learning, which typically requires a very large training set to build a reasonable model.

In this episode, Kyle presents a coded message to Linhda, who is able to recognize that many of the new symbols are likely to be the same symbol, despite having extremely few examples of each. Why can the human brain recognize a new symbol with relative ease while most machine learning algorithms require large amounts of training data? We discuss some of the reasons why, and approaches to one-shot learning.

09 May 2022 | Remote Productivity | 00:29:48
It is difficult to estimate the effect of remote working across the board. Darja Šmite, who speaks with us today, is a professor of Software Engineering at the Blekinge Institute of Technology. In her recently published paper, she analyzed data on several companies' activities before and after remote working became prevalent. She discussed the results, why they arose, and some subtle drawbacks of remote working. Check it out!

 

Click here for additional show notes on our website!

27 Nov 2019 | ML Ops | 00:36:31

Kyle met up with Damian Brady at MS Ignite 2019 to discuss machine learning operations.

18 Aug 2017 | [MINI] Recurrent Neural Networks | 00:17:06

RNNs are a class of deep learning models designed to capture sequential behavior.  An RNN trains a set of weights which depend not just on new input but also on the previous state of the neural network.  This directed cycle allows the training phase to find solutions which rely on the state at a previous time, thus giving the network a form of memory.  RNNs have been used effectively in language analysis, translation, speech recognition, and many other tasks.
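
A bare-bones recurrent step in NumPy, just to make the directed cycle concrete; the weights and sizes are arbitrary.

    # Each new state depends on both the current input and the previous
    # state, giving the network a form of memory.
    import numpy as np

    rng = np.random.default_rng(0)
    W_x = rng.normal(size=(8, 3))   # input-to-hidden weights (hidden=8, input=3)
    W_h = rng.normal(size=(8, 8))   # hidden-to-hidden weights (the cycle)
    b = np.zeros(8)

    h = np.zeros(8)                 # initial state
    for x_t in rng.normal(size=(5, 3)):        # a sequence of 5 inputs
        h = np.tanh(W_x @ x_t + W_h @ h + b)   # h_t = tanh(Wx x_t + Wh h_{t-1} + b)
    print(h)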

17 Sep 2014 | Game Science Dice with Louis Zocchi | 00:47:28

In this bonus episode, guest Louis Zocchi discusses his background in the gaming industry, specifically, how he became a manufacturer of dice designed to produce statistically uniform outcomes. 

During the show Louis mentioned a two part video listeners might enjoy: part 1 and part 2 can both be found on youtube. 

Kyle mentioned a robot capable of unnoticeably cheating at Rock Paper Scissors / Ro Sham Bo. More details can be found here.

Louis mentioned dice collector Kevin Cook whose website is DiceCollector.com 

While we're on the subject of table top role playing games, Kyle recommends these two related podcasts listeners might enjoy: 

The Conspiracy Skeptic podcast (on which host Kyle was recently a guest) had a great episode "Dungeons and Dragons - The Devil's Game?" which explores claims of D&D's alleged satanic ties.

Also, Kyle swears there's a great Monster Talk episode discussing claims of a satanic connection to Dungeons and Dragons, but despite mild efforts to locate it, he came up empty. Regardless, listeners of the Data Skeptic Podcast are encouraged to explore the back catalog to try and find the aforementioned episode of this great podcast. 

Last but not least, as mentioned in the outro, awesomedice.com did some great independent empirical testing that confirms Game Science dice are much closer to the desired uniform distribution over possible outcomes when compared to one leading manufacturer.

19 Dec 2014 | Economic Modeling and Prediction, Charitable Giving, and a Follow Up with Peter Backus | 00:23:43

Economist Peter Backus joins me in this episode to discuss a few interesting topics. You may recall Linhda and I previously discussed his paper "The Girlfriend Equation" on a recent mini-episode. We start by touching base on this fun paper and follow up on where Peter stands, years after writing it, with respect to a successful romantic union. Additionally, we delve into some fascinating economics topics.

We touch on the role models played, for better or for worse, in the ~2008 economic crash, statistics in economics and the difficulty of measurement, and some insightful discussion about the economics of charities. Peter encourages listeners to be open to giving money to charities that are good at fundraising, and his argument follows a (for me) surprisingly insightful logic. Lastly, we have a teaser of some of Peter's upcoming work using unconventional data sources.

For his benevolent recommendation, Peter recommended the book The Conquest of Happiness by Bertrand Russell, and for his self-serving recommendation, follow Peter on twitter at @Awesomnomics.

21 Mar 2023 | The Panel Study of Income Dynamics | 00:34:03

Noura Insolera, a Research Investigator with the Panel Study of Income Dynamics (PSID), joins us to share how PSID conducts longitudinal household surveys. She also shared some interesting findings from their data exploration, particularly on the observation and trends in food insecurity.

22 Jan 2025 | Fraud Detection with Graphs | 00:37:23

In this episode, Šimon Mandlík, a PhD candidate at the Czech Technical University, will talk with us about leveraging machine learning and graph-based techniques for cybersecurity applications.

We'll learn how graphs are used to detect malicious activity in networks, such as identifying harmful domains and executable files by analyzing their relationships within vast datasets.

This will include the use of hierarchical multi-instance learning (HML) to represent JSON-based network activity as graphs, and the advantages of analyzing connections between entities (like clients, domains, etc.).

Our guest shows that while other graph methods (such as GNNs or label propagation) lack scalability or have trouble with heterogeneous graphs, his method can handle them because of the "locality assumption": fraud will be a local phenomenon in the graph. By relying on this assumption, we can get faster and more accurate results.

-------------------------------

Want to listen ad-free? Try our Graphs Course? Join Data Skeptic+ for $5 / month or $50 / year

https://plus.dataskeptic.com

06 Jun 2023 | Evaluating Jokes with LLMs | 00:43:11

Fabricio Goes, a Lecturer in Creative Computing at the University of Leicester, joins us today. Fabricio discussed what creativity entails and how to evaluate jokes with LLMs. He specifically shared the process of evaluating jokes with GPT-3 and GPT-4. He concluded with his thoughts on the future of LLMs for creative tasks.

29 May 2023 | Why Machines Will Never Rule the World | 00:55:15

Barry Smith and Jobst Landgrebe, authors of the book "Why Machines Will Never Rule the World," join us today. They discussed the limitations of AI systems in today's world and shared elaborate reasons why AI will struggle to attain the level of human intelligence.

14 Jan 2020 | Algorithmic Fairness | 00:42:10

This episode includes an interview with Aaron Roth, author of The Ethical Algorithm.

13 Nov 2015 | [MINI] Bias Variance Tradeoff | 00:13:35

A discussion of the expected number of cars at a stoplight frames today's discussion of the bias-variance tradeoff. The central idea of this concept relates to model complexity. A very simple model will likely generalize similarly from training to testing data, but may have high bias, since its simplicity can prevent it from capturing the relationship between the covariates and the output. As a model grows more and more complex, it may capture more of the underlying data, but the risk that it overfits the training data and therefore does not generalize (has high variance) increases. The tradeoff between minimizing variance and minimizing bias is an ongoing challenge for data scientists, and an important discussion for skeptics around how much we should trust models.
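
One way to see the tradeoff numerically is to fit polynomials of increasing degree to noisy data; this is a toy setup, not an example from the episode.

    # Low degree: high bias (underfits both sets).
    # High degree: low training error but high test error (overfits).
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(-3, 3, 40))
    y = np.sin(x) + rng.normal(scale=0.3, size=40)
    x_tr, y_tr, x_te, y_te = x[::2], y[::2], x[1::2], y[1::2]

    for degree in (1, 3, 15):
        coefs = np.polyfit(x_tr, y_tr, degree)
        mse = lambda xs, ys: np.mean((np.polyval(coefs, xs) - ys) ** 2)
        print(degree, round(mse(x_tr, y_tr), 3), round(mse(x_te, y_te), 3))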

06 Nov 2015 | Big Data Doesn't Exist | 00:32:28

The recent opinion piece Big Data Doesn't Exist on Tech Crunch by Slater Victoroff is an interesting discussion about the usefulness of data both big and small. Slater joins me this episode to discuss and expand on this discussion.

Slater Victoroff is CEO of indico Data Solutions, a company whose services turn raw text and image data into human insight. He and his co-founders studied at Olin College of Engineering, where indico was born. indico was then accepted into the Techstars Accelerator Program in the Fall of 2014 and went on to raise $3M in seed funding. His recent essay "Big Data Doesn't Exist" received a lot of traction on TechCrunch, and I have invited Slater to join me today to discuss his perspective and touch on a few topics in the machine learning space as well.

01 Jul 2016 | [MINI] Leakage | 00:12:00

If you'd like to make a good prediction, your best bet is to invent a time machine, visit the future, observe the value, and return to the past. For those without access to time travel technology, we need to avoid including information about the future in our training data when building machine learning models. Similarly, any feature whose value would not actually be available in practice at the time you'd want the model to make a prediction can introduce leakage into your model.
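
A toy demonstration of the effect; the "leaky" feature here is deliberately contrived (the target plus noise) to show how leakage inflates validation scores.

    # A leaked feature looks great in validation but is useless in
    # production, because it would not be available at prediction time.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)

    leaky = y + rng.normal(scale=0.1, size=500)   # not knowable in advance
    X_leaky = np.column_stack([X, leaky])

    model = LogisticRegression()
    print(cross_val_score(model, X, y, cv=5).mean())        # honest score
    print(cross_val_score(model, X_leaky, y, cv=5).mean())  # inflated score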

30 Sep 2016 | Election Predictions | 00:21:44

Jo Hardin joins us this week to discuss the ASA's Election Prediction Contest, a competition aimed at forecasting the results of the upcoming US presidential election. More details are available in Jo's blog post found here.

You can find some useful R code for getting started automatically gathering data from 538 via Jo's github and official contest details are available here. During the interview we also mention Daily Kos and 538.

27 Oct 2017 | [MINI] Turing Machines | 00:13:54

TMs are a model of computation at the heart of algorithmic analysis. A Turing Machine has two components: an infinitely long piece of tape (memory) with re-writable squares, and a read/write head which is programmed to change its state as it processes the input. This exceptionally simple mechanical computer can compute anything that is intuitively computable, thus says the Church-Turing Thesis.

Attempts to make a "better" Turing Machine by adding things like additional tapes can make programs easier to describe, but they can't make the "better" machine more capable. It won't be able to solve any problem the basic Turing Machine cannot, even if it perhaps solves it faster.

An important concept we didn't get to in this episode is that of a Universal Turing Machine.  Without the prefix, a TM is a particular algorithm.  A Universal TM is a machine that takes, as input, a description of a TM and an input to that machine, and subsequently, simulates the inputted machine running on the given input.

Turing Machines are a central idea in computer science.  They are central to algorithmic analysis and the theory of computation.
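
For concreteness, here is a tiny simulator sketch in Python; the bit-flipping machine encoded in the rules table is an illustrative example, not one from the episode.

    # A minimal Turing Machine simulator: this particular rules table
    # flips every bit of the input and then halts on the first blank.
    def run(tape, rules, state="start", head=0):
        tape = dict(enumerate(tape))             # re-writable squares
        while state != "halt":
            symbol = tape.get(head, "_")         # "_" = blank square
            write, move, state = rules[(state, symbol)]
            tape[head] = write
            head += 1 if move == "R" else -1
        return "".join(tape[i] for i in sorted(tape))

    rules = {("start", "0"): ("1", "R", "start"),
             ("start", "1"): ("0", "R", "start"),
             ("start", "_"): ("_", "R", "halt")}
    print(run("0110", rules))                    # -> "1001_"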

05 Oct 2020 | Retraction Watch | 00:32:04

Ivan Oransky joins us to discuss his work documenting the scientific peer-review process at retractionwatch.com.

 

23 Dec 2016 | 2016 Holiday Special | 00:39:33

Today's episode is a reading of Isaac Asimov's Franchise.  As mentioned on the show, this is just a work of fiction to be enjoyed and not in any way some obfuscated political statement.  Enjoy, and happy holidays!

10 Feb 2025 | LLMs and Graphs Synergy | 00:34:47

In this episode, Garima Agrawal, a senior researcher and AI consultant, brings her years of experience in data science and artificial intelligence. Listeners will learn about the evolving role of knowledge graphs in augmenting large language models (LLMs) for domain-specific tasks and how these tools can mitigate issues like hallucination in AI systems.

Key insights include how LLMs can leverage knowledge graphs to improve accuracy by integrating domain expertise, reducing hallucinations, and enabling better reasoning.

Real-life applications discussed range from enhancing customer support systems with efficient FAQ retrieval to creating smarter AI-driven decision-making pipelines.

Garima’s work highlights how blending static knowledge representation with dynamic AI models can lead to cost-effective, scalable, and human-centered AI solutions.

-------------------------------

Want to listen ad-free? Try our Graphs Course? Join Data Skeptic+ for $5 / month or $50 / year

https://plus.dataskeptic.com

26 Jun 2020 | Interpretability Practitioners | 00:32:07
15 Mar 2019 | Simultaneous Translation at Baidu | 00:24:10

While at NeurIPS 2018, Kyle chatted with Liang Huang about his work with Baidu research on simultaneous translation, which was demoed at the conference.

03 Dec 2019 | Team Data Science Process | 00:41:24

Buck Woody joins Kyle to share experiences from the field and the application of the Team Data Science Process - a popular six-phase workflow for doing data science.

 

23 Dec 2023 | I LLM and You Can Too | 00:23:52

It took a massive financial investment for the first large language models (LLMs) to be created. Did their corporate backers lock these tools away for all but the richest? No. They provided commodity-priced API options for using them. Anyone can talk to ChatGPT or Bing. What if you want to go a step beyond that and do something programmatic? Kyle explores your options in this episode.
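
As one illustration of the programmatic route, here is a minimal sketch of calling an OpenAI-compatible chat-completions endpoint over HTTP; the model name, prompt, and key handling are placeholders, and this is not a transcript of Kyle's examples.

    # Minimal programmatic LLM call against an OpenAI-style API.
    import os
    import requests

    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "gpt-3.5-turbo",
              "messages": [{"role": "user",
                            "content": "Summarize k-means in one sentence."}]},
        timeout=30,
    )
    print(resp.json()["choices"][0]["message"]["content"])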

21 Jun 2022 | Algorithmic PPC Management | 00:43:56

Effectively managing a large budget of pay-per-click advertising demands software solutions. When spending multi-million-dollar budgets across hundreds of thousands of keywords, an effective algorithmic strategy is required to optimize marketing objectives.

In this episode, Nathan Janos joins us to share insights from his work in the ad tech industry.

Click for additional show notes

Thanks to our sponsor!
https://wandb.com/ The developer-first MLOps platform. Build better models faster with experiment tracking, dataset versioning, and model management.

18 Feb 2022 | Tracking Elephant Clusters | 00:26:27

In today's episode, Gregory Glatzer explained his machine learning project that involved predicting elephant movement and settlement, in a bid to limit the activities of poachers. He used two machine learning algorithms, DBSCAN and k-means clustering, at different stages of the project. Listen to learn why these two techniques were useful and what conclusions could be drawn.

Click here to see additional show notes on our website!

Thanks to our sponsor, Astrato

15 Jan 2021 | Even Cooperative Chess is Hard | 00:23:09

Aside from victory questions like “can black force a checkmate on white in 5 moves?” many novel questions can be asked about a game of chess. Some questions are trivial (e.g. “How many pieces does white have?") while more computationally challenging questions can contribute interesting results in computational complexity theory.

In this episode, Josh Brunner, Master's student in Theoretical Computer Science at MIT, joins us to discuss his recent paper Complexity of Retrograde and Helpmate Chess Problems: Even Cooperative Chess is Hard.

Works Mentioned
Complexity of Retrograde and Helpmate Chess Problems: Even Cooperative Chess is Hard
by Josh Brunner, Erik D. Demaine, Dylan Hendrickson, and Julian Wellman

1x1 Rush Hour With Fixed Blocks is PSPACE Complete
by Josh Brunner, Lily Chung, Erik D. Demaine, Dylan Hendrickson, Adam Hesterberg, Adam Suhl, Avi Zeff

18 Nov 2024 | Lessons from eGamer Networks | 00:37:52

Alex Bisberg, a PhD candidate at the University of Southern California, specializes in network science and game analytics, with a focus on understanding social and competitive success in multiplayer online games.

In this episode, listeners can expect to learn about players' interactions and patterns of behavior from a network perspective. Through his research on games, Alex sheds light on how network analysis and statistical tests might explain positive contagious behaviors, such as generosity, and explores the dynamics of collaboration and competition in gaming environments. These insights offer valuable lessons not only for game developers in enhancing player experience, engagement and retention, but also for anyone interested in understanding the ways that virtual interactions shape social networks and behavior.

21 Oct 2016 | [MINI] Calculating Feature Importance | 00:13:04

For machine learning models created with the random forest algorithm, there is no obvious diagnostic to inform you which features are more important to the output of the model. Some straightforward but useful techniques exist, revolving around removing a feature and measuring the decrease in accuracy or in Gini values in the leaves. We broadly discuss these techniques in this episode.
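
A sketch of the shuffle-one-feature variant of this idea (permutation importance) on synthetic data; the dataset and model settings are arbitrary.

    # Permute one column at a time and measure how much held-out
    # accuracy drops; a bigger drop suggests a more important feature.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=6,
                               n_informative=3, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    base = rf.score(X_te, y_te)

    rng = np.random.default_rng(0)
    for j in range(X.shape[1]):
        X_shuf = X_te.copy()
        rng.shuffle(X_shuf[:, j])      # destroy this feature's information
        print(j, round(base - rf.score(X_shuf, y_te), 3))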

20 Nov 2023 | A Survey Assessing Github Copilot | 00:26:25

In this episode, we are joined by Jenny Liang, a PhD student at Carnegie Mellon University, where she studies the usability of code generation tools. She discusses her recent survey on the usability of AI programming assistants.

Jenny discussed the method she used to gather people to complete her survey. She also shared some questions from her survey alongside vital takeaways. She shared the major reasons developers give for not wanting to use code-generation tools. She stressed that code-generation tools might access software developers' in-house code, which is intellectual property.

Learn more about Jenny Liang via https://jennyliang.me/

 

05 May 2017 | [MINI] Generative Adversarial Networks | 00:09:51

GANs are an unsupervised learning method involving two neural networks iteratively competing. The discriminator is a typical learning system. It attempts to develop the ability to recognize members of a certain class, such as all photos which have birds in them. The generator attempts to create false examples which the discriminator incorrectly classifies. In successive training rounds, the networks examine each other and play a minimax game of trying to harm the performance of the other.

In addition to being a useful way of training networks in the absence of a large body of labeled data, there are additional benefits. The discriminator may end up learning more about edge cases than it otherwise would be given typical examples. Also, the generator's false images can be novel and interesting on their own.

The concept was first introduced in the paper Generative Adversarial Networks.
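
A minimal training-loop sketch of that minimax game in PyTorch, using a 1-D Gaussian as the "real" class; the network sizes, learning rates, and step counts are arbitrary.

    # Discriminator learns real vs fake; generator learns to fool it.
    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 2 + 3        # samples from the real class
        fake = G(torch.randn(64, 4))             # generator's false examples
        # Discriminator step: label real as 1, fake as 0.
        d_loss = (bce(D(real), torch.ones(64, 1))
                  + bce(D(fake.detach()), torch.zeros(64, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator step: try to make the discriminator call fakes real.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    print(G(torch.randn(1000, 4)).mean().item())  # should drift toward 3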

22 Apr 2016 | [MINI] Auto-correlative functions and correlograms | 00:14:58
When working with time series data, there are a number of important diagnostics one should consider to help understand more about the data. The auto-correlative function, plotted as a correlogram, helps explain how a given observation relates to recent preceding observations. A very random process (like lottery numbers) would show very low values, while temperature (our topic in this episode) correlates highly with recent days.
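
A minimal sketch of computing an autocorrelation function in NumPy; the two toy series stand in for lottery numbers and temperature.

    # Sample ACF: high values at small lags for a persistent series,
    # near zero at every lag for random noise.
    import numpy as np

    def acf(x, max_lag=10):
        x = x - x.mean()
        var = (x * x).sum()
        return [np.dot(x[:-k], x[k:]) / var for k in range(1, max_lag + 1)]

    rng = np.random.default_rng(0)
    noise = rng.normal(size=365)             # "lottery numbers"
    temp = np.cumsum(rng.normal(size=365))   # a smooth, persistent series
    print(np.round(acf(noise, 5), 2))        # roughly 0 at every lag
    print(np.round(acf(temp, 5), 2))         # large at small lags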
 
See the show notes for details about the Chapel Hill, NC weather data.
 
 
05 Oct 2018 | Cultural Cognition of Scientific Consensus | 00:31:48

In this episode, our guest is Dan Kahan, who discusses his research into how people consume and interpret science news.

In an era of fake news, motivated reasoning, and alternative facts, important questions need to be asked about how people understand new information.

Dan is a member of the Cultural Cognition Project at Yale University, a group of scholars interested in studying how cultural values shape public risk perceptions and related policy beliefs.

In a paper titled Cultural cognition of scientific consensus, Dan and co-authors Hank Jenkins-Smith and Donald Braman discuss the "cultural cognition of risk" and establish experimentally that individuals tend to update their beliefs about scientific information through the context of their pre-existing cultural beliefs. In this way, topics such as climate change, nuclear power, and concealed-carry handgun permits often result in people reaching polarized conclusions.

The findings of this and other studies tell us that on topics such as these, even when people are given proper information about a scientific consensus, individuals still interpret those results through the lens of their pre-existing cultural beliefs.

The ‘cultural cognition of risk’ refers to the tendency of individuals to form risk perceptions that are congenial to their values. The study presents both correlational and experimental evidence confirming that cultural cognition shapes individuals’ beliefs about the existence of scientific consensus, and the process by which they form such beliefs, relating to climate change, the disposal of nuclear wastes, and the effect of permitting concealed possession of handguns. The implications of this dynamic for science communication and public policy‐making are discussed.
