Recent Posts
The dark side of AI: recommend and manipulate (Ep. 90)
In 2017 a research group at the University of Washington did a study on the Black Lives Matter movement on Twitter. They constructed what they call a “shared audience graph” to analyse the different groups of audiences participating in the debate, and fo...
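To make the idea concrete, here is a minimal sketch (not the researchers' actual pipeline) of one plausible way to build such a shared audience graph with networkx, linking accounts whose follower sets overlap; the account names and follower IDs below are made up for illustration.

# Hedged sketch: connect accounts that share followers; edge weight is the
# Jaccard similarity of their audiences. Data here is purely illustrative.
import networkx as nx

followers = {
    "account_a": {"u1", "u2", "u3", "u4"},
    "account_b": {"u3", "u4", "u5"},
    "account_c": {"u6"},
}

G = nx.Graph()
accounts = list(followers)
for i, a in enumerate(accounts):
    for b in accounts[i + 1:]:
        shared = followers[a] & followers[b]
        if shared:
            weight = len(shared) / len(followers[a] | followers[b])
            G.add_edge(a, b, weight=weight)

print(G.edges(data=True))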
The dark side of AI: social media and the optimization of addiction
Chamath Palihapitiya, former Vice President of User Growth at Facebook, was giving a talk at Stanford University, when he said this: “I feel tremendous guilt. The short-term, dopamine-driven feedback loops that we have created are destroying how society ...
3 best solutions to improve training stability of GANs (Ep. 88)
Generative Adversarial Networks, or GANs, are very powerful tools to generate data. However, training a GAN is not easy. More specifically, GANs suffer from three major issues: instability of the training procedure, mode collapse and vanishing gradien...
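As a taste of the kind of fix discussed, here is a minimal PyTorch sketch (not from the episode) that applies spectral normalization to a discriminator, a widely used trick to curb training instability by constraining the discriminator's Lipschitz constant.

# Hedged sketch: spectral normalization on a small discriminator (PyTorch).
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

discriminator = nn.Sequential(
    spectral_norm(nn.Conv2d(3, 64, 4, stride=2, padding=1)),
    nn.LeakyReLU(0.2),
    spectral_norm(nn.Conv2d(64, 128, 4, stride=2, padding=1)),
    nn.LeakyReLU(0.2),
    nn.Flatten(),
    spectral_norm(nn.Linear(128 * 8 * 8, 1)),  # assumes 32x32 input images
)

x = torch.randn(16, 3, 32, 32)   # a fake batch of images
print(discriminator(x).shape)    # torch.Size([16, 1])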
What if I train a neural network with random data? (with Stanisław Jastrzębski) (Ep. 87)
What happens to a neural network trained with random data? Are massive neural networks just lookup tables or do they truly learn something? Today’s episode will be about memorisation and generalisation in deep learning, with Stanisław Jastrzębski from ...
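For a feel of the experiment, here is a hedged PyTorch sketch (not the guest's code) that fits a small network on randomly labelled data: training accuracy can be driven towards 100% even though there is nothing real to learn, which is exactly the memorisation effect discussed in the episode.

# Hedged sketch: fit a small network on random labels to probe memorisation.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 20)           # random inputs
y = torch.randint(0, 2, (256,))    # random labels: nothing to generalise

model = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(1000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# The network memorises the noise, while accuracy on fresh random data
# would stay around chance.
acc = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"training accuracy: {acc:.2f}")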
Deep learning is easier when it is illustrated (with Jon Krohn) (Ep. 86)
In this episode I speak with Jon Krohn, author of Deep Learning Illustrated, a book that makes deep learning easier to grasp. We also talk about some important guidelines to take into account whenever you implement a deep learning model, how to deal with ...
How to generate very large images with GANs (Ep. 85)
Join the discussion on our Discord server. In this episode I explain how a research group from the University of Lübeck tackled the curse of dimensionality to generate large medical images with GANs. The problem is not as trivial as it seems. ...
More powerful deep learning with transformers (Ep. 84)
Some of the most powerful NLP models like BERT and GPT-2 have one thing in common: they all use the transformer architecture. Such architecture is built on top of another important concept already known to the community: self-attention. In this episode I ...
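For readers new to the concept, here is a minimal sketch of scaled dot-product self-attention in PyTorch (single head, no masking), the core operation that the transformer architecture stacks and extends.

# Hedged sketch: scaled dot-product self-attention, illustrative only.
import torch
import torch.nn.functional as F

def self_attention(x, Wq, Wk, Wv):
    # x: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projection matrices
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / (k.shape[-1] ** 0.5)  # pairwise token similarities
    weights = F.softmax(scores, dim=-1)      # attention distribution per token
    return weights @ v                       # weighted sum of value vectors

d_model, d_k, seq_len = 16, 8, 5
x = torch.randn(seq_len, d_model)
Wq, Wk, Wv = (torch.randn(d_model, d_k) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)   # torch.Size([5, 8])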
Top 4 reasons why reinforcement learning sucks
We have seen agents playing Atari games or AlphaGo, doing financial trading and modeling natural language. After watching reinforcement learning agents do so well in some domains, let me tell […]
Have you met Claude Shannon? (with Jimmy Soni and Rob Goodman)
Meet Claude Shannon, the father of the information age. A biography by Jimmy Soni and Rob Goodman.
[RB] Replicating GPT-2, the most dangerous NLP model (with Aaron Gokaslan) (Ep. 83)
Join the discussion on our Discord server. In this episode, I am with Aaron Gokaslan, computer vision researcher and AI Resident at Facebook AI Research. Aaron is the author of OpenGPT-2, a parallel NLP model to the most discussed version that OpenAI d...

Discord community chat
Join our Discord community to discuss the show, suggest new episodes and chat with other listeners!