
Knowledge Distillation with Helen Byrne
Explore every episode of Knowledge Distillation with Helen Byrne
Pub. Date | Title | Duration | Description
---|---|---|---
04 Dec 2023 | Introducing... Knowledge Distillation | 00:00:56 | First episode coming soon...
04 Dec 2023 | Danijela Horak - Head of AI Research, BBC R&D | 00:38:54 | Danijela Horak explains how the BBC is making use of AI and its plans for the future, including detecting deepfakes as well as using deepfake technology as part of its production process.
05 Dec 2023 | Miranda Mowbray - Honorary Lecturer in Computer Science, University of Bristol: AI and Ethics | 00:33:38 | Miranda Mowbray is one of Britain's leading thinkers on the ethics of Artificial Intelligence. After a long and distinguished career as a research scientist with HP, she is now an Honorary Lecturer in Computer Science at the University of Bristol, where she specialises in ethics for AI and data science for cybersecurity. In our wide-ranging conversation, Miranda breaks down the definition of AI ethics into its many constituent parts, including safety, transparency, non-discrimination and fairness. She tells us that there is probably too much focus on the dire predictions of AI 'doomers' and not enough on the more immediate, but less apocalyptic, outcomes. On a lighter note, Miranda reveals her personal mission to change the world, and shows off a sculpture that she had commissioned, based on the imaginings of generative AI. You can watch a video of our interview with Miranda here: https://youtu.be/tbnHxbM5ZR8
22 Dec 2023 | NeurIPS Special | 00:43:39 | NeurIPS is the world's largest AI conference, where leading AI practitioners come together to share the latest research and debate the way forward for artificial intelligence. In this special episode, Helen examines some of the big themes of NeurIPS 2023 and talks to a range of attendees about their work, the big issues of the day, and what they saw at NeurIPS that caught their attention. It's fair to say that LLMs loomed large over this year's conference, but there's plenty more to discuss, from AI's potential to combat climate change to new techniques for computational efficiency. Helen's guests are: Sofia Liguori, Research Engineer at Google DeepMind, specialising in the application of AI to sustainability and climate change; Priya Donti, Assistant Professor in Electrical Engineering and Computer Science at MIT and co-founder of Climate Change AI, who discusses the challenges of introducing leading-edge AI systems into highly complex real-world power generation and delivery systems; Irene Chen, Assistant Professor at UC Berkeley and UCSF's Computational Precision Health program, who talks about her goal of delivering more equitable healthcare at a time when AI is set to disrupt the field, and about the potential to use commercial LLMs in a way that protects sensitive user data; James Briggs, AI engineer at Graphcore, who with colleagues presented the paper 'Training and inference of large language models using 8-bit floating point' at this year's NeurIPS, and who explains their work and the importance of using smaller numerical representations to unlock computational efficiency in AI; and Abhinav (Abhi) Venigalla, a member of the technical staff at Databricks, a company providing a range of products to help organisations unlock the potential of enterprise-grade AI. Abhi talks about the increasing emphasis on inference tools and computational efficiency as AI moves out of the research lab and into commercial deployment.
29 Jan 2024 | The rise of synthetic data with Florian Hönicke from Jina AI | 00:40:27 | Data is the fuel that is powering the AI revolution, but what do we do when there's just not enough data to satisfy the insatiable appetite of new model training?
02 Feb 2024 | Papers of the Month with Charlie Blake, Research Engineer at Graphcore | 00:43:48 | Charlie Blake from Graphcore's research team discusses their AI Papers of the Month for January 2024.
09 Feb 2024 | Deepfakes deep dive with Nina Schick | 00:38:45 | Nina Schick is a leading commentator on Artificial Intelligence and its impact on business, geopolitics and humanity. Her book 'Deepfakes and the Infocalypse' charts the early use of generative AI to create deepfake pornography and the technology's subsequent use as a tool of political manipulation. With over two decades of geopolitical experience, Nina has long been focused on macro-trends for society. She has advised global leaders, including Joe Biden, the President of the United States, and Anders Fogh Rasmussen, the former Secretary General of NATO. She has also worked with some of the world's premier companies and organisations, including Microsoft, Adobe, DARPA, and the UN. A familiar face at technology conferences such as CES, TEDx, CogX and WebSummit, Nina is also a regular contributor to discussions about AI on the BBC, CNN, Sky News, Bloomberg and more. In her conversation with Helen, Nina outlines the continuing risks posed by deepfake technologies and the technological counter-measures that can be used to safeguard against them. You can watch the video of her interview on YouTube: https://youtu.be/f4zTbGWYan8
07 Mar 2024 | Inside OpenAI's trust and safety operation - with Rosie Campbell | 00:45:03 | No organisation in the AI world is under more intense scrutiny than OpenAI. The maker of DALL-E, GPT-4, ChatGPT and Sora is constantly pushing the boundaries of artificial intelligence and has supercharged the enthusiasm of the general public for AI technologies.
07 Apr 2024 | Stable Diffusion 3 with Stability AI's Kate Hodesdon | 00:32:49 | Stability AI's Stable Diffusion model is one of the best known and most widely used text-to-image systems.
15 Apr 2024 | Neuroscience and AI with Basis co-founder Emily Mackevicius | 00:35:05 | Emily Mackevicius is a co-founder and director of Basis, a nonprofit applied research organization focused on understanding and building intelligence while advancing society's ability to solve intractable problems.