Explore all episodes of Argmax

Browse the complete list of Argmax episodes. Each episode is catalogued with a detailed description, making it easy to search for and explore specific topics. Follow all the episodes of your favorite podcast and never miss relevant content.


Date · Title · Duration
21 Feb 2022 · 1: Reward is Enough · 00:54:36

This is the first episode of Argmax! We talk about our motivations for doing a podcast, and what we hope listeners will get out of it.

Today's paper: Reward is Enough

Summary of the paper
The authors present the Reward is Enough hypothesis: Intelligence, and its associated abilities, can be understood as subserving the maximisation of reward by an agent acting in its environment.

Highlights of discussion

  • High level overview of Reinforcement Learning
  • How evolution can be encoded as a reward maximization problem
  • What is the one reward signal we are trying to optimize?
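The reinforcement learning overview above can be made concrete with a toy example. A minimal tabular Q-learning sketch on a made-up 3-state chain, illustrating reward maximization by an agent acting in its environment; the environment, reward, and hyperparameters are illustrative, not from the paper:

```python
import random

N_STATES, ACTIONS = 3, (0, 1)   # action 0 = left, 1 = right
GAMMA, ALPHA, EPS = 0.9, 0.5, 0.1

def step(state, action):
    """Move along the chain; reaching the last state ends the episode with reward 1."""
    next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            a = rng.choice(ACTIONS) if rng.random() < EPS else max(ACTIONS, key=lambda a: q[s][a])
            s2, r, done = step(s, a)
            # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# The learned policy prefers moving right (toward the reward) in every non-terminal state.
assert q[0][1] > q[0][0] and q[1][1] > q[1][0]
```

The agent never sees the "right is good" rule directly; it emerges purely from maximizing the scalar reward signal, which is the hypothesis the episode debates.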
07 Mar 2022 · 2: data2vec · 00:53:23

Today's paper: data2vec (https://arxiv.org/abs/2202.03555)

Summary of the paper
A multimodal SSL algorithm that predicts latent representation of different types of input.

Highlights of discussion

  • What are the motivations behind SSL and multimodal learning?
  • How does student-teacher learning work?
  • What are the similarities and differences between ViT, BYOL, and reinforcement learning algorithms?
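The student-teacher mechanism discussed above can be sketched in a few lines: in self-distillation methods like data2vec, the teacher's parameters are an exponential moving average (EMA) of the student's. The parameter values and decay rate below are illustrative, not from the paper:

```python
def ema_update(teacher, student, tau=0.999):
    """teacher <- tau * teacher + (1 - tau) * student, elementwise."""
    return [tau * t + (1.0 - tau) * s for t, s in zip(teacher, student)]

teacher = [0.0, 0.0]
student = [1.0, -1.0]   # stand-in for trained student weights
for _ in range(1000):
    teacher = ema_update(teacher, student)

# After many steps the teacher drifts toward the (here fixed) student weights,
# giving stable latent targets for the student to predict.
assert all(abs(t - s) < 0.5 for t, s in zip(teacher, student))
```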
21 Mar 2022 · 3: VICReg · 00:44:46

Today's paper: VICReg (https://arxiv.org/abs/2105.04906)

Summary of the paper
VICReg prevents representation collapse by combining variance, invariance, and covariance terms in the loss. It does not require negative samples and achieves strong performance on downstream tasks.

Highlights of discussion

  • The VICReg architecture (Figure 1)
  • Sensitivity to hyperparameters (Table 7)
  • Top 5 metric usefulness
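The three loss terms named in the summary can be sketched on tiny 2-D embeddings. Coefficients and data below are illustrative; the paper applies these terms to large embedding batches from two augmented views:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def invariance(za, zb):
    """Mean squared distance between the two views' embeddings."""
    return mean([sum((a - b) ** 2 for a, b in zip(x, y)) for x, y in zip(za, zb)])

def variance(z, gamma=1.0, eps=1e-4):
    """Hinge on the std of each embedding dimension, to prevent collapse."""
    loss = 0.0
    dims = list(zip(*z))
    for d in dims:
        m = mean(d)
        std = math.sqrt(mean([(v - m) ** 2 for v in d]) + eps)
        loss += max(0.0, gamma - std)
    return loss / len(dims)

def covariance(z):
    """Penalize off-diagonal covariance so dimensions stay decorrelated."""
    n, dim = len(z), len(z[0])
    mu = [mean(d) for d in zip(*z)]
    loss = 0.0
    for i in range(dim):
        for j in range(dim):
            if i != j:
                c = sum((row[i] - mu[i]) * (row[j] - mu[j]) for row in z) / (n - 1)
                loss += c ** 2
    return loss / dim

# Collapsed embeddings (all identical) incur a large variance penalty;
# well-spread embeddings do not. This is how VICReg avoids negative samples.
collapsed = [[0.0, 0.0]] * 4
spread = [[1.0, -1.0], [-1.0, 1.0], [1.0, 1.0], [-1.0, -1.0]]
assert variance(collapsed) > variance(spread)
```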
06 Apr 2022 · 4: Can Neural Nets Learn the Same Model Twice? · 00:55:23

Today's paper: Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from the Decision Boundary Perspective (https://arxiv.org/pdf/2203.08124.pdf)

Summary:
A discussion of reproducibility and double descent through visualizations of decision boundaries.

Highlights of the discussion:

  • Relationship between model performance and reproducibility
  • Which models are robust and reproducible
  • How they calculate the various scores



26 Apr 2022 · 5: QMIX · 00:42:06

We talk about QMIX (https://arxiv.org/abs/1803.11485) as an example of deep multi-agent reinforcement learning.

06 Jun 2022 · 6: Deep Reinforcement Learning at the Edge of the Statistical Precipice · 01:01:08

We discuss this NeurIPS Outstanding Paper Award winner, covering important topics around evaluation metrics and reproducibility.
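One concrete recommendation from that paper is to aggregate scores across runs with robust statistics such as the interquartile mean (IQM) rather than the plain mean. A minimal sketch with made-up scores:

```python
def iqm(scores):
    """Interquartile mean: average the middle 50% of sorted scores."""
    s = sorted(scores)
    lo = len(s) // 4
    middle = s[lo:len(s) - lo]
    return sum(middle) / len(middle)

# Made-up benchmark scores: one failed run (0.1) and one outlier (5.0).
scores = [0.1, 0.80, 0.82, 0.85, 0.86, 0.88, 0.90, 5.0]
# The IQM discards both tails, so it is less sensitive to outliers than the mean.
assert iqm(scores) < sum(scores) / len(scores)
```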

14 Jun 2022 · 7: Deep Unsupervised Learning Using Nonequilibrium Thermodynamics (Diffusion Models) · 00:30:55

We start talking about diffusion models as a technique for generative deep learning.
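The forward (noising) half of a diffusion model can be sketched directly: data is gradually corrupted with Gaussian noise under a variance schedule, and a closed form lets you sample any step at once. The linear schedule and step count below are illustrative, not the paper's:

```python
import math
import random

T = 100
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]  # toy linear schedule
alpha_bars, prod = [], 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bars.append(prod)   # alpha_bar_t = prod_{s<=t} (1 - beta_s)

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = sqrt(ab_t) * x_0 + sqrt(1 - ab_t) * noise."""
    ab = alpha_bars[t]
    return math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * rng.gauss(0.0, 1.0)

rng = random.Random(0)
noisy = q_sample(1.0, T - 1, rng)  # heavily corrupted sample of x0 = 1.0
# Early steps keep most of the signal; by the last step most of it is destroyed.
assert alpha_bars[0] > 0.999 and alpha_bars[-1] < 0.5
```

Training a generative model then amounts to learning to reverse this corruption, which is the topic of the episode.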

29 Jul 2022 · 8: GATO (A Generalist Agent) · 00:44:51

Today we talk about GATO, a multi-modal, multi-task, multi-embodiment generalist agent.

29 Jul 2022 · 9: Heads-Up Limit Hold'em Poker Is Solved · 00:47:55

Today we talk about recent AI advances in Poker; specifically the use of counterfactual regret minimization to solve the game of 2-player Limit Texas Hold'em.
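The building block of counterfactual regret minimization is regret matching: each player accumulates regret for not having played each action and then plays in proportion to positive regret. A toy self-play sketch on rock-paper-scissors (an illustration of the principle, not the full CFR algorithm from the paper), where the average strategy approaches the uniform Nash equilibrium:

```python
import random

ACTIONS = 3  # rock, paper, scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # PAYOFF[a][b]: utility of a vs b

def strategy(regrets):
    """Play in proportion to positive cumulative regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iters=20000, seed=0):
    rng = random.Random(seed)
    regrets = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    strat_sum = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    for _ in range(iters):
        strats = [strategy(regrets[0]), strategy(regrets[1])]
        acts = [rng.choices(range(ACTIONS), weights=s)[0] for s in strats]
        for p in range(2):
            me, opp = acts[p], acts[1 - p]
            for a in range(ACTIONS):
                # regret = what I could have earned minus what I actually earned
                regrets[p][a] += PAYOFF[a][opp] - PAYOFF[me][opp]
                strat_sum[p][a] += strats[p][a]
    return [[s / iters for s in strat_sum[p]] for p in range(2)]

avg = train()
# The average strategy converges toward uniform, the Nash equilibrium of RPS.
assert all(abs(p - 1 / 3) < 0.1 for p in avg[0])
```

Poker solvers apply this same regret-minimization idea at every decision point of the game tree.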

23 Aug 2022 · 10: Outracing champion Gran Turismo drivers with deep reinforcement learning · 00:54:50

We discuss Sony AI's accomplishment of creating a novel AI agent that can beat professional racers in Gran Turismo. Some topics include:
- The crafting of rewards to make the agent behave nicely
- What is QR-SAC?
- How to deal with "rare" experiences in the replay buffer

Link to paper: https://www.nature.com/articles/s41586-021-04357-7
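The bullet on "rare" experiences touches a general idea: sampling from the replay buffer in proportion to a priority, so rare but important transitions are replayed more often. A minimal sketch of priority-proportional sampling (the general technique, not necessarily Sony AI's exact mechanism; names and weights are made up):

```python
import random

def sample(buffer, priorities, n, rng):
    """Draw n transitions with probability proportional to their priority."""
    return rng.choices(buffer, weights=priorities, k=n)

rng = random.Random(0)
buffer = ["common_lap_segment", "common_straight", "rare_collision"]
priorities = [1.0, 1.0, 8.0]  # upweight the rare event
draws = sample(buffer, priorities, 1000, rng)
# The rare transition dominates the replayed batch (8/10 in expectation).
assert draws.count("rare_collision") > 600
```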

30 Sep 2022 · 11: CVPR Workshop on Autonomous Driving Keynote by Ashok Elluswamy, a Tesla engineer · 00:48:51

In this episode we discuss how Tesla approaches collision detection with novel methods, based on this keynote video: https://youtu.be/jPCV4GKX9Dw

25 Oct 2022 · 12: SIRENs · 00:54:17

In this episode we talked about "Implicit Neural Representations with Periodic Activation Functions" and the strength of periodic non-linearities.
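A SIREN layer is just a linear map followed by a sine nonlinearity, sin(w0 * (Wx + b)). The sketch below follows the spirit of the paper's initialization scheme, but the sizes, inputs, and w0 value are illustrative:

```python
import math
import random

def siren_layer(x, weights, bias, w0=30.0):
    """Apply one sine-activated layer to an input vector x."""
    out = []
    for w_row, b in zip(weights, bias):
        pre = sum(w * xi for w, xi in zip(w_row, x)) + b
        out.append(math.sin(w0 * pre))
    return out

def init_layer(fan_in, fan_out, rng, first=False, w0=30.0):
    # First layer: U(-1/fan_in, 1/fan_in); later layers: U(+/- sqrt(6/fan_in)/w0)
    bound = 1.0 / fan_in if first else math.sqrt(6.0 / fan_in) / w0
    weights = [[rng.uniform(-bound, bound) for _ in range(fan_in)] for _ in range(fan_out)]
    bias = [rng.uniform(-bound, bound) for _ in range(fan_out)]
    return weights, bias

rng = random.Random(0)
w, b = init_layer(2, 4, rng, first=True)
y = siren_layer([0.5, -0.25], w, b)   # e.g. a 2-D coordinate input
# Sine keeps activations bounded in [-1, 1] at every depth, and its derivative
# is again a (shifted) sine, which is part of the strength discussed here.
assert len(y) == 4 and all(-1.0 <= v <= 1.0 for v in y)
```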

11 Mar 2023 · 13: AlphaTensor · 00:49:05

We talk about AlphaTensor, and how researchers were able to find a new algorithm for matrix multiplication.
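For context on what "a new algorithm for matrix multiplication" means: the classic example is Strassen's 1969 algorithm, which multiplies two 2x2 matrices with 7 scalar multiplications instead of 8. AlphaTensor searches for decompositions of this kind automatically; the code below is classic Strassen, not an algorithm found by AlphaTensor:

```python
def strassen_2x2(A, B):
    """Strassen's algorithm: 7 multiplications for a 2x2 matrix product."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

def naive_2x2(A, B):
    """Textbook product with 8 multiplications, for comparison."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
assert strassen_2x2(A, B) == naive_2x2(A, B)
```

Applied recursively to block matrices, saving even one multiplication per level compounds into an asymptotically faster algorithm, which is why these decompositions matter.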

17 Mar 2023 · 14: Whisper · 00:49:14

This week we talk about Whisper, a weakly supervised speech recognition model.



28 Mar 2023 · 15: InstructGPT · 00:57:27

In this episode we discuss the paper "Training language models to follow instructions with human feedback" by Ouyang et al (2022). We discuss the RLHF paradigm and how important RL is to tuning GPT.
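At the heart of the RLHF pipeline described in the paper is a reward model trained on pairwise human preferences: it should score the chosen response above the rejected one via -log(sigmoid(r_chosen - r_rejected)). The scalar scores below are illustrative stand-ins for model outputs:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def preference_loss(r_chosen, r_rejected):
    """Pairwise (Bradley-Terry style) loss for a preference reward model."""
    return -math.log(sigmoid(r_chosen - r_rejected))

# The loss shrinks as the margin between chosen and rejected grows...
assert preference_loss(2.0, 0.0) < preference_loss(0.5, 0.0)
# ...and equals ln(2) when the model cannot distinguish the two responses.
assert abs(preference_loss(1.0, 1.0) - math.log(2)) < 1e-9
```

The resulting reward model then provides the scalar signal that the RL step optimizes the language model against.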

02 Sep 2023 · LoRA · 01:02:56

We talk about LoRA (Low-Rank Adaptation) for fine-tuning Transformers. We are also on YouTube now! Check out the video here: https://youtu.be/lLzHr0VFi3Y
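The LoRA idea in miniature: instead of fine-tuning a full weight matrix W, learn a low-rank product B A (rank r much smaller than the layer dimensions) and add it to the frozen W. B starts at zero, so the adapted layer initially matches the pretrained one. Dimensions and scaling below are illustrative:

```python
import random

def matvec(m, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in m]

def lora_forward(W, A, B, x, alpha=1.0):
    """y = W x + alpha * B (A x): the LoRA-adapted forward pass."""
    base = matvec(W, x)          # frozen pretrained path
    delta = matvec(B, matvec(A, x))  # trainable low-rank update
    return [b + alpha * d for b, d in zip(base, delta)]

rng = random.Random(0)
d, k, r = 4, 4, 1                                        # out dim, in dim, rank
W = [[rng.gauss(0, 1) for _ in range(k)] for _ in range(d)]
A = [[rng.gauss(0, 1) for _ in range(k)] for _ in range(r)]  # r x k
B = [[0.0] * r for _ in range(d)]                            # d x r, zero-init

x = [1.0, 2.0, 3.0, 4.0]
# With B = 0 the adapter is a no-op, exactly matching the frozen layer.
assert lora_forward(W, A, B, x) == matvec(W, x)
```

Only A and B are trained, so the number of tunable parameters drops from d*k to r*(d+k).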

08 Oct 2024 · Mixture of Experts · 00:54:46

In this episode we talk about the paper "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, Jeff Dean.
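The sparsely-gated mixture-of-experts idea in miniature: a gating network scores every expert, only the top-k experts actually run, and their outputs are mixed with softmax-normalized gate weights. The experts here are toy scalar functions and the gate scores are given by hand; in the paper both are learned networks:

```python
import math

EXPERTS = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x, lambda x: -x]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(v - m) for v in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x, gate_scores, k=2):
    """Run only the k highest-scoring experts and mix their outputs."""
    top = sorted(range(len(gate_scores)), key=lambda i: gate_scores[i], reverse=True)[:k]
    weights = softmax([gate_scores[i] for i in top])
    return sum(w * EXPERTS[i](x) for w, i in zip(weights, top))

# Gate scores favor experts 0 and 1, so only those two contribute:
# 0.5 * (3 + 1) + 0.5 * (2 * 3) = 5.0
y = moe_forward(3.0, gate_scores=[5.0, 5.0, -10.0, -10.0], k=2)
assert abs(y - 5.0) < 1e-9
```

Because only k experts run per input, total parameter count can grow far faster than per-example compute, which is the "outrageously large" trick in the title.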

Improve your understanding of Argmax with My Podcast Data

At My Podcast Data, we strive to provide in-depth, data-driven analysis. Whether you are a devoted listener, a podcast creator, or an advertiser, the detailed statistics and analyses we offer can help you better understand the performance and trends of Argmax. From episode frequency to shared links and RSS feed health, our goal is to give you the knowledge you need to stay up to date. Explore more shows and discover the data that drives the podcast industry.
© My Podcast Data