
Computer Says Maybe (Alix Dunn)
Explore all episodes of Computer Says Maybe
Date | Title | Duration | |
---|---|---|---|
23 May 2024 | Protesting Project Nimbus: employee organising to end Google’s contract with Israel w/ Dr. Kate Sim | 00:51:42 | |
In this episode, we speak with Dr. Kate Sim, one of the core organisers of the Google Worker Sit-In Against Project Nimbus. Dr. Sim was recently fired from Google, alongside almost 50 other employees, after helping organize a sit-in protesting Project Nimbus, a joint contract under which Google and Amazon provide technology to the Israeli government and military. Alix and Dr. Sim discuss technology-enabled violence, Dr. Sim's work in trust and safety, and Google's cancelled Project Maven. They also talk about Dr. Sim's journey into protesting Project Nimbus, the many other voices fighting against the contract, and how Big Tech often obfuscates its responsibility in perpetuating violence. In the end, we arrive at a common lesson: solidarity is our main hope for change. This episode is hosted by Alix Dunn and our guest is Dr. Kate Sim. Further Reading
18 Jul 2024 | New mini-series: Exhibit X | 00:03:57 | |
In the Exhibit X series, Alix and Prathm sink their fingernails into the tangled universe of litigation and Big Tech; how have the courts held Big Tech firms accountable for their various harms over the years? Is whistleblowing an effective mechanism for informing new regulations? What about a social media platform’s First Amendment rights? So much to cover, so many episodes coming your way!
21 Jun 2024 | What the FAccT?: Reformers and Radicals | 00:54:10 | |
In part 1 of our FAccT conference deep dive, Alix Dunn sits down with co-host Andrew Strait from the Ada Lovelace Institute to talk about the history of FAccT and some of the papers being presented at this year’s event. The Fairness, Accountability and Transparency Conference, or FAccT, is an interdisciplinary conference dedicated to bringing together a diverse community of scholars and exploring how socio-technical systems could be built in a way that is compatible with a fair society. The seventh annual FAccT conference was held in Rio de Janeiro, Brazil, from Monday, June 3rd through Thursday, June 6th 2024, with over five hundred people in attendance. This episode is hosted by Alix Dunn and our co-host is Andrew Strait. Further Reading:
16 Aug 2024 | Exhibit X: The Courts | 00:44:19 | |
Imagine: something horrible has happened and the only evidence you have is a video posted online. Can it be admitted into evidence in court? Well, it’s complicated. In part 4 of our Exhibit X series, Alix sat down with Dr. Alexa Koenig to discuss her work with the International Criminal Court. Dr. Koenig and her colleagues are helping the court grapple with online evidence and tackle the challenges courts face as they adapt to our digital world. We answer questions like:
Alexa Koenig, PhD, JD, is Co-Faculty Director of the Human Rights Center, Director of HRC’s Investigations Program, and an adjunct professor at UC Berkeley School of Law, where she teaches classes that focus on the intersection of emerging technologies and human rights. She also co-teaches a class on open source investigative reporting at Berkeley Journalism. Alexa co-founded the Human Rights Center Investigations Lab, which trains students and professionals to use social media and other digital open source content to strengthen human rights research, reporting, and accountability.
13 Sep 2024 | Bridging The Divide w/ Issie Lapowsky | 00:38:23 | |
There are oceans of research papers digging into the various harms of online platforms. Researchers are asking urgent questions, such as how hate speech and misinformation affect our information environment and our democracy. But how does this research find its way to the media, policymakers, advocacy groups, or even tech companies themselves? To help us answer this, Alix is joined this week by Issie Lapowsky, who recently authored Bridging The Divide: Translating Research on Digital Media into Policy and Practice — a report about how research reaches these four groups, and what they do with it. This episode also features John Sands from Knight Foundation, who commissioned the report. Further reading: Issie Lapowsky is a journalist covering the intersection between tech, politics and national affairs. She has been published in WIRED, Protocol, The New York Times, and Fast Company. John Sands is Senior Director of Media and Democracy at Knight Foundation. Since joining Knight Foundation in 2019, he has led more than $100 million in grant making to support independent scholarship and policy research on information and technology in the context of our democracy.
02 Aug 2024 | Exhibit X: The Whistleblower | 00:31:40 | |
In part 2 of Exhibit X, Alix interviewed Frances Haugen, who in 2021 blew the whistle on Meta; the company was sitting on the knowledge that its products were harmful to kids, and yet — shocker — it continued to make design decisions that would keep kids engaged. Mark Zuckerberg worked hard on his image (it’s a hydrofoil, not a surfboard!), while Instagram was being used for human trafficking — the lack of care and accountability here absolutely melts the mind. What conversations did Frances’s whistleblowing start? Was whistleblowing an effective mechanism for accountability in this case? Do we have to add age verification to social media sites or break end-to-end encryption to keep children safe online? *Frances Haugen is a data scientist & engineer. In 2021 she disclosed 22,000 internal documents to The Wall Street Journal and the Securities and Exchange Commission which demonstrated Meta’s knowledge of its products’ harms.* Your hosts this week are Alix Dunn and Prathm Juneja
28 Jun 2024 | What the FAccT?: Abandoning Algorithms | 00:29:59 | |
In part two of our FAccT deep dive, recorded at this year’s conference in Rio de Janeiro, Alix joins Nari Johnson and Sanika Moharana to discuss their paper “The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment”. Nari Johnson is a third-year PhD student in Carnegie Mellon University's Machine Learning Department, where she is advised by Hoda Heidari. She graduated from Harvard in 2021 with a BA and MS in Computer Science, where she previously worked with Finale Doshi-Velez. Sanika Moharana is a second-year PhD student in Human Computer Interaction at Carnegie Mellon University. As an advocate for human-centered design and research, Sanika practices iterative ideation and prototyping for multimodal interactions and interfaces across intelligent systems, connected smart devices, IoT, AI experiences, and emerging technologies. Further Reading
30 Aug 2024 | Exhibit X: What did we learn? | 00:35:50 | |
That’s the END of Exhibit X folks; if you’ve been following along, congratulations on choosing to become smarter. If not that’s okay, consider this episode a delicious teaser for the series. In this episode Alix and Prathm engage their large wet brains and pull out the meatiest insights and learnings from the last five episodes. This series has been a delightful intellectual expedition into big tech litigation, knowledge creation, and online speech — if you’re a nerd for any of those things, it would be irresponsible for you to ignore this. Thank you for listening; we hope to do more deep explorations like this in the future!
27 Sep 2024 | Will Newsom Veto the AI Safety Bill? w/ Teri Olle | 00:24:42 | |
What if we could have a public library for compute? But is… more compute really what we want right now? This week Alix interviewed Teri Olle from the Economic Security Project, a co-sponsor of the California AI safety bill (SB 1047). The bill has been making the rounds in the news because it would force AI companies to do safety checks on their models before releasing them to the public — which is seen as uh, ‘controversial’, to those in the innovation space. But Teri had a hand in a lesser-known part of the bill: the construction of CalCompute, a state-owned public cloud cluster for resource-intensive AI development. This would mean public access to the compute power needed to train state-of-the-art AI models — finally giving researchers and plucky start-ups access to something otherwise locked inside a corporate walled garden. Teri Olle is the California Campaign Director for Economic Security Project Action. Beginning her career as an attorney, Teri soon moved into policy and issue advocacy, working on state and local efforts to ban toxic chemicals and pesticides, decrease food insecurity and hunger, and increase gender representation in politics. She is a founding member of a political action committee dedicated to inserting parent voices into local politics and served as the president of the board of Emerge California. She lives in San Francisco with her husband and two daughters.
04 Oct 2024 | Chasing Away Sidewalk Labs w/ Bianca Wylie | 00:49:39 | |
In 2017 Google’s urban planning arm Sidewalk Labs came into Toronto and said “we’re going to turn this into a smart city”. Our guest Bianca Wylie was one of the people who stood up and said “okay but… who asked for this?” This is a story about how a large tech firm came into a community with big promises, and then left with its tail between its legs. In the episode Alix and Bianca discuss the complexities of government procurement of tech, and how attractive corporate solutions look when you’re so riddled with austerity. Bianca Wylie is a writer with a dual background in technology and public engagement. She is a partner at Digital Public and a co-founder of Tech Reset Canada. She worked for several years in the tech sector in operations, infrastructure, corporate training, and product management. Then, as a professional facilitator, she spent several years co-designing, delivering and supporting public consultation processes for various governments and government agencies. She founded the Open Data Institute Toronto in 2014 and co-founded Civic Tech Toronto in 2015. Further Reading: A Counterpublic Analysis of Sidewalk Toronto; In Toronto, Google’s Attempt to Privatize Government Fails—For Now
12 Jul 2024 | What the FAccT? Evidence of bias. Now what? | 00:25:09 | |
In part four of our FAccT deep dive, Alix joins Marta Ziosi and Dasha Pruss to discuss their paper “Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool”. In their paper they discuss how an erosion of public trust can lead to ‘any idea will do’ decisions, which often lean on technology, such as predictive policing systems. One such tool is ShotSpotter, a piece of audio surveillance tech designed to detect gunfire — a contentious system which has been sold both as a tool for police to surveil civilians, and as a tool for civilians to keep tabs on police. Can it really be both? Marta Ziosi is a Postdoctoral Researcher at the Oxford Martin AI Governance Initiative, where her research focuses on standards for frontier AI. She has worked for institutions such as DG CNECT at the European Commission, the Berkman Klein Centre for Internet & Society at Harvard University, The Montreal International Center of Expertise in Artificial Intelligence (CEIMIA) and The Future Society. Previously, Marta was a Ph.D. student and researcher on Algorithmic Bias and AI Policy at the Oxford Internet Institute. She is also the founder of AI for People, a non-profit organisation whose mission is to put technology at the service of people. Marta holds a BSc in Mathematics and Philosophy from University College Maastricht. She also holds an MSc in Philosophy and Public Policy and an executive degree in Chinese Language and Culture for Business from the London School of Economics. Dasha Pruss is a 2023-2024 fellow at the Berkman Klein Center for Internet & Society and an Embedded EthiCS postdoctoral fellow at Harvard University. In fall 2024 she will be an assistant professor of philosophy and computer science at George Mason University. She received her PhD in History & Philosophy of Science from the University of Pittsburgh in May 2023, and holds a BSc in Computer Science from the University of Utah.
She has also co-organized with Against Carceral Tech, an activist group working to ban facial recognition and predictive policing in the city of Pittsburgh. This episode is hosted by Alix Dunn. Our guests are Marta Ziosi and Dasha Pruss. Further Reading
09 Aug 2024 | Exhibit X: The Litigators | 00:31:47 | |
Often it feels as though the cases and lawsuits brought against big tech firms are continuously piling up, but there never seems to be any resulting justice or resolution. There are many good reasons for this, two of which are Section 230 and the First Amendment. Big Tech companies will routinely invoke Section 230 and the First Amendment to get cases against them thrown out before they can go to trial. In part 3 of Exhibit X, Meetali Jain explains how litigators have been playing 4D chess to get the courts to hold these companies accountable. In this episode we ask…
Meetali Jain is a human rights lawyer, who founded the Tech Justice Law Project in 2023. The Project works with a collective of legal experts, policy advocates, digital rights organizations, and technologists to ensure that legal and policy frameworks are fit for the digital age, and that online spaces are safer and more accountable. This episode was hosted by Alix Dunn and Prathm Juneja
05 Jul 2024 | What the FAccT? First law, bad law | 00:23:55 | |
In part three of our FAccT deep dive, recorded at the seventh annual conference in Rio de Janeiro, Alix joins Lara Groves and Jacob Metcalf to discuss their paper “Auditing Work: Exploring the New York City algorithmic bias audit regime”. Lara Groves is a Senior Researcher at the Ada Lovelace Institute. Her most recent project explored the role of third-party auditing regimes in AI governance. Lara has previously led research on the role of public participation in commercial AI labs, and on algorithmic impact assessments. Her research interests include practical and participatory approaches to algorithmic accountability and innovative policy solutions to challenges of governance. Before joining Ada, Lara worked as a tech and internet policy consultant, and has experience in research, public affairs and campaigns for think-tanks, political parties and advocacy groups. Lara has an MSc in Democracy from UCL. Jacob Metcalf, PhD, is a researcher at Data & Society, where he leads the AI on the Ground Initiative, and works on an NSF-funded multisite project, Pervasive Data Ethics for Computational Research (PERVADE). For this project, he studies how data ethics practices are emerging in environments that have not previously grappled with research ethics, such as industry, IRBs, and civil society organizations. His recent work has focused on the new organizational roles that have developed around AI ethics in tech companies. Jake’s consulting firm, Ethical Resolve, provides a range of ethics services, helping clients to make well-informed, consistent, actionable, and timely business decisions that reflect their values. He also serves as the Ethics Subgroup Chair for the IEEE P7000 Standard. This episode is hosted by Alix Dunn. Our guests are Lara Groves and Jacob Metcalf. Further Reading
23 Aug 2024 | Exhibit X: The Community | 00:47:02 | |
What makes an expert witness? How does a socio-technical researcher become one? Now that we’re at the end of this miniseries, we might finally be ready to answer these questions… In the fifth instalment of Exhibit X, civic tech acrobat Elizabeth Eagen shares her pithy insights on how researchers of emerging technologies are starting to interface with litigators and regulators. The questions we explore this week:
Elizabeth Eagen is Deputy Director of the Citizens and Technology Lab at Cornell University, which works with communities to study the effects of technology on society and test ideas for changing digital spaces to better serve the public interest. She was a 2022-23 Practitioner Fellow at the Digital Civil Society Lab at Stanford University, and serves as a board member at a number of nonprofit technology organizations.
20 Sep 2024 | The stories we tell ourselves about AI | 00:37:33 | |
Applications for our second cohort of Media Mastery for New AI Protagonists are now open! Join this 5-week program to level up your media impact alongside a dynamic community of emerging experts in AI politics and power—at no cost to you. In this episode, we chat with Daniel Stone, a participant from our first cohort, about his work. Apply by Sunday, September 29th!
But how reliable are our narrators? And how can we use story as strategy? The good news is that experts are working to unravel the narratives around AI, so that folks with the public interest in mind can change the game. This week Alix sat down with three researchers looking at three AI narrative questions. She spoke to Hanna Barakat about how the New York Times reports on AI; Jonathan Tanner, who scraped and analysed huge amounts of YouTube videos to find narrative patterns; and Daniel Stone, who studied and deconstructed the metaphors that power collective understanding of AI. In this ep we ask:
Hanna Barakat is a research analyst for Computer Says Maybe, working at the intersection of emerging technologies and complex systems design. She graduated from Brown University in 2022 with honors in International Development Studies and a focus in Digital Media Studies. Jonathan Tanner founded Rootcause after more than fifteen years working in senior communications roles for high-profile politicians, CEOs, philanthropists and public thinkers across the world. In this time he has worked across more than a dozen countries running diverse teams whilst writing keynote speeches, securing front page headlines, delivering world-first social media moments and helping to secure meaningful changes to public policy. Daniel Stone is currently undertaking research with Cambridge University’s Centre for Future Intelligence and is the Executive Director of Diffusion.Au. He is a Policy Fellow with the Chifley Research Centre and a Policy Associate at the Centre for Responsible Technology Australia.
26 Jul 2024 | Exhibit X: Tech and Tobacco | 00:27:04 | |
Here is something you’re probably tired of hearing: Big Tech is responsible for a bottomless brunch of societal harms, and it is not being held accountable. Right now it feels as though we hear constantly about laws, regulation, and courts, but none of it seems effective in litigating against Big Tech. In our latest podcast series, Exhibit X, we’re looking at how the tides might finally be turning. Legal accountability could be around the corner, but only if a few things happen first. To start, we look back to 1964, when Big Tobacco was winning the ‘try your best to profit from harm’ race. Research showed cigarettes were addictive and also caused cancer — and yet the industry evaded accountability for decades. In this episode we ask questions like:
Prathm Juneja was Alix’s co-host for this episode. He is a PhD Candidate in Social Data Science at the Oxford Internet Institute, working at the intersection of academia, industry, and government on technology, innovation, and policy. Further reading
06 Sep 2024 | Why was the CEO of Telegram just arrested? w/ Mallory Knodel | 00:44:35 | |
Last week, the CEO of Telegram, Pavel Durov, landed in France and was immediately detained. The details of his arrest are still emerging; he is charged with complicity in illegal activities happening on the platform, including the spread of CSAM. Durov’s lawyer has referred to these charges as “absurd”, arguing that the head of a social media company cannot be held responsible for criminal activity on the platform. That might be true in the US, but does it hold up in France? This week Alix is joined by Mallory Knodel to talk us through what happened:
Mallory Knodel is The Center for Democracy & Technology’s Chief Technology Officer. She is also a co-chair of the Human Rights and Protocol Considerations research group of the Internet Research Task Force and a chairing advisor on cybersecurity and AI to the Freedom Online Coalition.
11 Oct 2024 | Net 0++: Microsoft’s greenwashing w/ Holly Alpine | 00:42:07 | |
This week we’re kicking off a series about AI & the environment. We’re starting with Holly Alpine, who recently left Microsoft after a decade spent starting and growing an internal sustainability programme. Holly’s goal was pretty simple: she wanted Microsoft to honour the sustainability commitments it had set for itself. The internal support she had fostered for sustainability initiatives did not match up with Microsoft’s actions — the company continued to work with fossil fuel companies even though doing so was at odds with its plans to achieve net zero. Listen to learn about what it’s like approaching this kind of huge systemic challenge in good faith, and trying to make change happen from the inside. Holly Alpine is a dedicated leader in sustainability and environmental advocacy, having spent over a decade at Microsoft pioneering and leading multiple global initiatives. As the founder and head of Microsoft's Community Environmental Sustainability program, Holly directed substantial investments into community-based, nature-driven solutions, impacting over 45 communities in Microsoft’s global datacenter footprint, with measurable improvements to ecosystem health, social equity, and human well-being. Currently, Holly continues her environmental leadership as a board member of both American Forests and Zero Waste Washington, while staying active in outdoor sports as a plant-based athlete who enjoys rock climbing, mountain biking, ski mountaineering, and running mountain ultramarathons. Further Reading:
13 Feb 2024 | 2024 Elections: Is AI going to wreak havoc? | 00:35:25 | |
In this episode, we walk through how misinformation and disinformation have been used in past elections to impact outcomes, where we think AI might make a material difference in how elections play out this year, and where we think responsibility lies for the situation we’re in. This episode is hosted by Alix Dunn and Prathm Juneja, and guests include Sam Gregory, Josh Lawson, and Claire Wardle. If you have feedback about the episode or a pet subject that you might want to join forces to develop into an episode, please reach out. You can email team@saysmaybe.com or share an audio note here: speakpipe.com/saysmaybe -- Further Reading Academic Articles
News Articles
Other Links
19 Apr 2024 | The Human in the Loop: What's it like to work in the AI supply chain? | 00:39:52 | |
In this episode, we talk about the kinds of jobs that are being created as AI systems grow, how those jobs are evolving, what the labour conditions of those jobs are like, and who is benefitting from these systems. This episode is hosted by Alix Dunn and guests include James (Mojez) Oyange, Yoel Roth, Catherine Bracy and Cori Crider. If you have feedback about the episode or a pet subject that you might want to join forces to develop into an episode, please reach out. You can email team@saysmaybe.com or share an audio note here: speakpipe.com/saysmaybe. Further Reading News Articles
Other Links