
Deep Dive: AI
Explore every episode of Deep Dive: AI

23 Aug 2022 | Solving for AI’s black box problem

The mystery surrounding the possibilities and probabilities of AI is multilayered; depending on your perspective and involvement with new technology, your access to reliable information and a clear picture of current progress can be obscured in several ways. On the podcast today, we welcome Alek Tarkowski, Strategy Director of the Open Future Foundation, to talk about some of the ways we can tackle issues of security, safety, privacy, and basic human rights. Tarkowski is a sociologist, an activist, and a strategist, and his engagement and insight into the current landscape are extremely helpful for navigating these complex and murky waters. In our chat, we unpack what is currently going on in the space, the regulations that have been deployed recently, and how activists and the industry can find themselves at odds when debating policy. Tarkowski makes a clear plea for all parties to get involved in these debates and stay involved, since policy is a powerful avenue for shaping our future. To hear it all from Tarkowski on this central aspect of the future of AI, be sure to join us!

Credits: Special thanks to volunteer producer Nicole Martinelli. Music by Jason Shaw, Audionautix. This podcast is sponsored by GitHub, DataStax and Google. No sponsor had any right or opportunity to approve or disapprove the content of this podcast.

30 Aug 2022 | When hackers take on AI: Sci-fi – or the future?

Because we lack a fundamental understanding of the internal mechanisms of current AI models, today’s guest has a few theories about what these models might do when they encounter situations outside of their training data, with potentially catastrophic results. Tuning in, you’ll hear from Connor Leahy, one of the founders of EleutherAI, a grassroots collective of researchers working to open source AI research. He is also Founder and CEO of Conjecture, a startup doing fascinating research into the interpretability and safety of AI. We talk more about this in today’s episode, with Leahy elaborating on some of the technical problems that he and other researchers are running into and the creativity that will be required to solve them. We also look at some of the nefarious ways he sees AI evolving in the future and how he believes computer security hackers could help mitigate these risks without curbing technological progress. We close on an optimistic note, with Leahy encouraging early-career researchers to focus on the ‘massive orchard’ of low-hanging fruit in interpretability and AI safety and sharing his vision for this extremely valuable field of research. To learn more, make sure not to miss this fascinating conversation with EleutherAI founder Connor Leahy!

Credits: Special thanks to volunteer producer Nicole Martinelli. Music by Jason Shaw, Audionautix. This podcast is sponsored by GitHub, DataStax and Google. No sponsor had any right or opportunity to approve or disapprove the content of this podcast.

06 Sep 2022 | Building creative restrictions to curb AI abuse

Along with all the positive, revolutionary aspects of AI comes a more sinister side. Joining us today to discuss ethics in AI from the developer’s point of view is David Gray Widder. David is currently doing his Ph.D. at the School of Computer Science at Carnegie Mellon University, investigating AI from an ethical perspective and homing in specifically on the ethics-related challenges faced by AI software engineers. His research has been conducted at Intel Labs, Microsoft, and NASA’s Jet Propulsion Lab. In this episode, we discuss the harmful uses of deepfakes and their ethical ramifications in proprietary versus open source contexts. Widder breaks down the notions of technological inevitability and technological neutrality and explains the importance of challenging these ideas. He has also identified a continuum between implementation-based harms and use-based harms, and fills us in on how each plays out in the open source development space. Tune in to find out more about the importance of curbing AI abuse and the creativity required to do so, as well as the strengths and weaknesses of open source in terms of AI ethics.

Credits: Special thanks to volunteer producer Nicole Martinelli. Music by Jason Shaw, Audionautix. This podcast is sponsored by GitHub, DataStax and Google. No sponsor had any right or opportunity to approve or disapprove the content of this podcast.

13 Sep 2022 | Why Debian won’t distribute AI models any time soon

Welcome to a brand new episode of Deep Dive: AI! For today’s conversation, we are joined by Mo Zhou, a PhD student at Johns Hopkins University and an official Debian developer since 2018. Tune in as Mo speaks to the evolving role of artificial intelligence driven by big data and growing hardware capacity, and shares key insights into what sets AlphaGo apart from previous algorithms, what makes applications integral, and why training data should be released along with any free software. You’ll also learn about validation data and the difference powerful hardware makes, as well as why Debian is so strict in its practice of offering only free software. Finally, Mo shares his predictions for the free software community (and what he would like to see happen in an ideal world) before outlining his own plans for the future, which include a strong element of research. If you’re looking to learn about the uphill climb for open source artificial intelligence, plus so much more, you won’t want to miss this episode!

Credits: Special thanks to volunteer producer Nicole Martinelli. Music by Jason Shaw, Audionautix. This podcast is sponsored by GitHub, DataStax and Google. No sponsor had any right or opportunity to approve or disapprove the content of this podcast.

09 Feb 2023 | How to secure AI systems

With so many artificial systems claiming “intelligence” available to the public, making sure they do what they’re designed to do is of the utmost importance. Dr. Bruce Draper, Program Manager of the Information Innovation Office at DARPA, joins us on this bonus episode of Deep Dive: AI to unpack his work in the field and his current role. We have a fascinating chat with Draper about the risks and opportunities involved in this exciting field, and why growing bigger and more involved Open Source communities is better for everyone. Draper introduces us to the Guaranteeing AI Robustness Against Deception (GARD) program, its main short-term goals, and how these aim to mitigate exposure to danger while we explore the possibilities that machine learning offers. We also spend time discussing the agency’s Open Source philosophy and foundation, the AI boom of recent years, why policymaking is so critical, the split between academic and corporate contributions, and much more. For Draper, community involvement is critical for spotting potential issues and threats. Tune in to hear it all from this exceptional guest!

Credits: Special thanks to volunteer producer Nicole Martinelli. Music by Jason Shaw, Audionautix. This podcast is sponsored by GitHub, DataStax and Google. No sponsor had any right or opportunity to approve or disapprove the content of this podcast.

16 Aug 2022 | Copyright, selfie monkeys, the hand of God

What are the copyright implications of AI? Can artwork created by a machine be registered for copyright? These are some of the questions we answer in this episode of Deep Dive: AI, an Open Source Initiative event exploring how Artificial Intelligence impacts the world around us. Here to help us unravel the complexities of today’s topic is Pamela Chestek, an Open Source lawyer, Chair of the OSI License Committee, and OSI Board member. She is an accomplished business attorney with vast experience in free and open source software, trademark law, and copyright law, as well as in advertising, marketing, licensing, and commercial contracting. Pamela is also the author of various scholarly articles and writes a blog analyzing existing intellectual property case law. She is a respected authority on the subject and has given talks on Open Source software, copyright, and trademark matters. In today’s conversation, we learn the basics of copyright law and delve into its complexities regarding open source material. We also talk about the line between human and machine creations, whether machine learning software can be registered for copyright, how companies monetize Open Source software, the concern of copyright infringement in machine learning datasets, and why understanding copyright is essential for businesses. We also learn about some amazing AI technology causing a stir in the design world and hear real-world examples of copyright law at work in the technology space. Tune in today to get insider knowledge from expert Pamela Chestek!

Credits: Special thanks to volunteer producer Nicole Martinelli.

26 Jul 2022 | Welcome to Deep Dive: AI

Welcome to Deep Dive: AI, an online event from the Open Source Initiative. We’ll be exploring how Artificial Intelligence impacts open source software, from developers to businesses to the rest of us.

Episode notes: An introduction to Deep Dive: AI, an event in three parts organized by the Open Source Initiative. With AI systems being so complex, concepts like “program” or “source code” in the Open Source Definition are challenged in new and surprising ways. The topic of AI is huge; for the Open Source Initiative’s Deep Dive, we’ll be looking at how AI could affect the future of Open Source. This trailer episode is produced by the Open Source Initiative with the help of Nicole Martinelli. Music by Jason Shaw on Audionautix.com, Creative Commons BY 4.0 International license. Deep Dive: AI is made possible by the generous support of OSI individual members and sponsors. Donate or become a member of the OSI today.