
AI & The Future of Humanity: Artificial Intelligence, Technology, VR, Algorithm, Automation, ChatGPT, Robotics, Augmented Reality, Big Data, IoT, Social Media, CGI, Generative-AI, Innovation, Nanotechnology, Science, Quantum Computing: The Creative Process Interviews (The Creative Process Original Series: Artificial Intelligence, Technology, Innovation, Engineering, Robotics & Internet of Things)

Explore every episode of AI & The Future of Humanity: Artificial Intelligence, Technology, VR, Algorithm, Automation, ChatGPT, Robotics, Augmented Reality, Big Data, IoT, Social Media, CGI, Generative-AI, Innovation, Nanotechnology, Science, Quantum Computing: The Creative Process Interviews

Dive into the complete list of episodes of AI & The Future of Humanity: Artificial Intelligence, Technology, VR, Algorithm, Automation, ChatGPT, Robotics, Augmented Reality, Big Data, IoT, Social Media, CGI, Generative-AI, Innovation, Nanotechnology, Science, Quantum Computing: The Creative Process Interviews. Each episode is cataloged with a detailed description, making it easy to search for and explore specific topics. Follow every episode of your favorite podcast and never miss relevant content.

Episodes 1–50 of 104

Date | Title | Duration
27 Oct 2023 | NICHOLAS CHRISTAKIS - Director of Human Nature Lab, Yale - Author of Blueprint: The Evolutionary Origins of a Good Society | 00:56:00

Nicholas Christakis, MD, PhD, MPH, is a social scientist and physician who conducts research in the areas of biosocial science, network science and behavioral genetics. He directs the Human Nature Lab at Yale University and is the co-director of the Yale Institute for Network Science. Dr. Christakis has authored numerous books, including Blueprint: The Evolutionary Origins of a Good Society published in 2019 and Apollo's Arrow: The Profound and Enduring Impact of Coronavirus on the Way We Live published in 2020. In 2009, Christakis was named by TIME magazine to their annual list of the 100 most influential people in the world.

“We're not attempting to invent super smart AI to replace human cognition. We are inventing dumb AI to supplement human interaction. Are there simple forms of artificial intelligence, simple programming of bots, such that when they are added to groups of humans – because those humans are smart or otherwise positively inclined – that help the humans to help themselves? Can we get groups of people to work better together, for instance, to confront climate change, or to reduce racism online, or to foster innovation within firms?
Can we have simple forms of AI that are added into our midst that make us work better together? And the work we're doing in that part of my lab shows that abundantly that's the case. And we published a stream of papers showing that we can do that.”

Nicholas Christakis humannaturelab.net/people/nicholas-christakis

Human Nature Lab: humannaturelab.net

Yale Institute for Network Science yins.yale.edu

sociology.yale.edu/people/nicholas-christakis

Blueprint: The Evolutionary Origins of a Good Society
Apollo's Arrow: The Profound and Enduring Impact of Coronavirus on the Way We Live

TRELLIS - Suite of software tools for developing, administering, and collecting survey and social network data: trellis.yale.edu.

The Atlantic: “How AI Will Rewire Us: For better and for worse, robots will alter humans’ capacity for altruism, love, and friendship”

www.theatlantic.com/magazine/archive/2019/04/robots-human-relationships/583204/

www.creativeprocess.info
www.oneplanetpodcast.org

27 Oct 2023 | Highlights - NICHOLAS CHRISTAKIS - Author of Blueprint: The Evolutionary Origins of a Good Society - Director of Human Nature Lab, Yale | 00:09:55

“We're not attempting to invent super smart AI to replace human cognition. We are inventing dumb AI to supplement human interaction. Are there simple forms of artificial intelligence, simple programming of bots, such that when they are added to groups of humans – because those humans are smart or otherwise positively inclined – that help the humans to help themselves? Can we get groups of people to work better together, for instance, to confront climate change, or to reduce racism online, or to foster innovation within firms?

Can we have simple forms of AI that are added into our midst that make us work better together? And the work we're doing in that part of my lab shows that abundantly that's the case. And we published a stream of papers showing that we can do that.”
Nicholas Christakis, MD, PhD, MPH, is a social scientist and physician who conducts research in the areas of biosocial science, network science and behavioral genetics. He directs the Human Nature Lab at Yale University and is the co-director of the Yale Institute for Network Science. Dr. Christakis has authored numerous books, including Blueprint: The Evolutionary Origins of a Good Society published in 2019 and Apollo's Arrow: The Profound and Enduring Impact of Coronavirus on the Way We Live published in 2020. In 2009, Christakis was named by TIME magazine to their annual list of the 100 most influential people in the world.

Nicholas Christakis: humannaturelab.net/people/nicholas-christakis

Human Nature Lab: humannaturelab.net

Yale Institute for Network Science: yins.yale.edu

sociology.yale.edu/people/nicholas-christakis

Blueprint: The Evolutionary Origins of a Good Society

Apollo's Arrow: The Profound and Enduring Impact of Coronavirus on the Way We Live

TRELLIS - Suite of software tools for developing, administering, and collecting survey and social network data: trellis.yale.edu.

The Atlantic: “How AI Will Rewire Us: For better and for worse, robots will alter humans’ capacity for altruism, love, and friendship”

www.creativeprocess.info
www.oneplanetpodcast.org

27 Oct 2023 | NICK BOSTROM - Founding Director, Future of Humanity Institute, Oxford - Philosopher, Author | 00:42:22

Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.

He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.

Bostrom’s academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy’s Top 100 Global Thinkers list twice and was included in Prospect’s World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.

"I think what we really face is an even more profound change into this condition where human nature becomes plastic in the sense of malleable, and we then have to think more from the ground up - What is it that ultimately brings value to the world? If you could be literally any kind of being you chose to be, what kind of being would you want to be? What constraints and limitations and flaws would you want to retain because it's part of what makes you, you. And what aspects would you want to improve? If you have like a bad knee, you probably would want to fix the knee. If you're nearsighted, and you could just snap your fingers and have perfect eyesight, that seems pretty attractive, but then if you keep going in that direction, eventually, it's not clear that you're human anymore. You become some sort of idealized ethereal being, and maybe that's a desirable ultimate destiny for humanity, but I'm not sure we would want to rush there immediately. Maybe we would want to take a kind of slower path to get to that destination."

https://nickbostrom.com

https://www.fhi.ox.ac.uk

www.creativeprocess.info

www.oneplanetpodcast.org

27 Oct 2023 | Highlights - NICK BOSTROM - Author of Superintelligence: Paths, Dangers, Strategies - Founding Director, Future of Humanity Institute, Oxford | 00:11:19

"I think what we really face is an even more profound change into this condition where human nature becomes plastic in the sense of malleable, and we then have to think more from the ground up - What is it that ultimately brings value to the world? If you could be literally any kind of being you chose to be, what kind of being would you want to be? What constraints and limitations and flaws would you want to retain because it's part of what makes you, you. And what aspects would you want to improve? If you have like a bad knee, you probably would want to fix the knee. If you're nearsighted, and you could just snap your fingers and have perfect eyesight, that seems pretty attractive, but then if you keep going in that direction, eventually, it's not clear that you're human anymore. You become some sort of idealized ethereal being, and maybe that's a desirable ultimate destiny for humanity, but I'm not sure we would want to rush there immediately. Maybe we would want to take a kind of slower path to get to that destination."

Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.

He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.

Bostrom’s academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy’s Top 100 Global Thinkers list twice and was included in Prospect’s World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.

https://nickbostrom.com

https://www.fhi.ox.ac.uk

www.creativeprocess.info

www.oneplanetpodcast.org

02 Nov 2023 | ALLEN STEELE - Hugo Award-winning Science Fiction Author of the Coyote Trilogy, Arkwright | 00:43:55

What does the future of space exploration look like? How can we unlock the opportunities of outer space without repeating the mistakes of colonization and exploitation committed on Earth? How can we ensure AI and new technologies reflect our values and the world we want to live in?

 Allen Steele is a science fiction author and journalist. He has written novels, short stories, and essays and been awarded a number of Hugos, Asimov's Readers, and Locus Awards. He’s known for his Coyote Trilogy and Arkwright. He is a former member of the Board of Directors and Board of Advisors for the Science Fiction and Fantasy Writers of America. He has also served as an advisor for the Space Frontier Foundation. In 2001, he testified before the Subcommittee on Space and Aeronautics of the U.S. House of Representatives in hearings regarding space exploration in the 21st century.

"I'm really very glad. I was happy to see that within my lifetime that the prospects of not just Mars, but in fact interstellar space is being taken seriously. I've been at two conferences where we were talking about building the first starship within this century. One of my later books, Arkwright, is about such a project. I saw that Elon Musk is building Starship One, I wish him all the best. And I envy anybody who goes.

I wish I were a younger person and in better health. Somebody asked me some time ago, would you go to Mars? And I said, 'I can't do it now. I've got a bum pancreas, and I'm 65 years old, and I'm not exactly the prime prospect for doing this. If you asked me 40 years ago would I go, I would have said: in a heartbeat!' I would gladly leave behind almost everything. I don't think I'd be glad about leaving my wife and family behind, but I'd be glad to go live on another planet, perhaps for the rest of my life, just for the chance to explore a new world, to be one of the settlers in a new world.

And I think this is something that's being taken seriously. It is very possible. We've got to be careful about how we do this. And we've got to be careful, particularly about the rationale of the people who are doing this. It bothers me that Elon Musk has lately taken a shift to the Far Right. I don't know why that is. But I'd love to be able to sit down and talk with him about these things and try to understand why he has done such a right thing, but for what seems to be wrong reasons."

www.allensteele.com

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

02 Nov 2023 | Highlights - ALLEN STEELE - Hugo Award-winning Science Fiction Author of the Coyote Trilogy, Arkwright | 00:10:33

"I'm really very glad. I was happy to see that within my lifetime that the prospects of not just Mars, but in fact interstellar space is being taken seriously. I've been at two conferences where we were talking about building the first starship within this century. One of my later books, Arkwright, is about such a project. I saw that Elon Musk is building Starship One, I wish him all the best. And I envy anybody who goes.

I wish I were a younger person and in better health. Somebody asked me some time ago, would you go to Mars? And I said, 'I can't do it now. I've got a bum pancreas, and I'm 65 years old, and I'm not exactly the prime prospect for doing this. If you asked me 40 years ago would I go, I would have said: in a heartbeat!' I would gladly leave behind almost everything. I don't think I'd be glad about leaving my wife and family behind, but I'd be glad to go live on another planet, perhaps for the rest of my life, just for the chance to explore a new world, to be one of the settlers in a new world.

And I think this is something that's being taken seriously. It is very possible. We've got to be careful about how we do this. And we've got to be careful, particularly about the rationale of the people who are doing this. It bothers me that Elon Musk has lately taken a shift to the Far Right. I don't know why that is. But I'd love to be able to sit down and talk with him about these things and try to understand why he has done such a right thing, but for what seems to be wrong reasons."

What does the future of space exploration look like? How can we unlock the opportunities of outer space without repeating the mistakes of colonization and exploitation committed on Earth? How can we ensure AI and new technologies reflect our values and the world we want to live in?

 Allen Steele is a science fiction author and journalist. He has written novels, short stories, and essays and been awarded a number of Hugos, Asimov's Readers, and Locus Awards. He’s known for his Coyote Trilogy and Arkwright. He is a former member of the Board of Directors and Board of Advisors for the Science Fiction and Fantasy Writers of America. He has also served as an advisor for the Space Frontier Foundation. In 2001, he testified before the Subcommittee on Space and Aeronautics of the U.S. House of Representatives in hearings regarding space exploration in the 21st century.

www.allensteele.com

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Photo from a field trip to Pease Air Force Base in Portsmouth, NH (now closed). Photo credit: Chuck Peterson

27 Oct 2023 | ADAM ALTER - Author of Irresistible: The Rise of Addictive Technology - Anatomy of a Breakthrough | 00:47:48

Adam Alter is a Professor of Marketing at NYU’s Stern School of Business and the Robert Stansky Teaching Excellence Faculty Fellow. Adam is the New York Times bestselling author of Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked, and Drunk Tank Pink, which investigates how hidden forces in the world around us shape our thoughts, feelings, and behaviors. He has written for the New York Times, the New Yorker, The Atlantic, and the Washington Post, and has been featured by a host of TV, radio, and other outlets. His next book, Anatomy of a Breakthrough, will be published in 2023.

 "So there are different parts of the brain responsible for liking and wanting. So wanting is unbelievably robust in the brain. In other words, the neural connections are very robust, and wanting is what drives most addictive behavior. It's when you really want something, like you want a cigarette, you want alcohol, a drug, whatever it is, that's your poison. And actually, screens for some people as well. The liking part. When you say to people, what does it mean to be addicted to something? A lot of people say it's, 'You really like it so much that you just keep going back to it.'

It's actually not about liking. What actually happens is that, in the beginning, liking and wanting go together. So let's pick something like a cigarette. If you start smoking in the beginning, you like the experience of smoking, and you also really want the nicotine. You want the cigarette. They go hand in hand, but eventually what happens is the liking is much more fragile, and it decays. And what's left is the wanting. And often in the absence of liking, it's kind of like a bad relationship. Like if you're in a bad romantic relationship, it starts out being about wanting and liking, but then the liking goes away, and you just kind of want to be with a person, even though you know it's undermining your welfare. That's effectively addiction. The real skill today is figuring out how to create space between you and your tech devices."

https://adamalterauthor.com
www.penguin.co.uk/books/431386/irresistible-by-adam-alter/9781784701659
www.simonandschuster.com/books/Anatomy-of-a-Breakthrough/Adam-Alter/9781982182960
www.stern.nyu.edu/faculty/bio/adam-alter

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

27 Oct 2023 | Highlights - ADAM ALTER - Author of Irresistible: The Rise of Addictive Technology - Professor NYU’s Stern School of Business | 00:14:40

 "So there are different parts of the brain responsible for liking and wanting. So wanting is unbelievably robust in the brain. In other words, the neural connections are very robust, and wanting is what drives most addictive behavior. It's when you really want something, like you want a cigarette, you want alcohol, a drug, whatever it is, that's your poison. And actually, screens for some people as well. The liking part. When you say to people, what does it mean to be addicted to something? A lot of people say it's, 'You really like it so much that you just keep going back to it.'

It's actually not about liking. What actually happens is that, in the beginning, liking and wanting go together. So let's pick something like a cigarette. If you start smoking in the beginning, you like the experience of smoking, and you also really want the nicotine. You want the cigarette. They go hand in hand, but eventually what happens is the liking is much more fragile, and it decays. And what's left is the wanting. And often in the absence of liking, it's kind of like a bad relationship. Like if you're in a bad romantic relationship, it starts out being about wanting and liking, but then the liking goes away, and you just kind of want to be with a person, even though you know it's undermining your welfare. That's effectively addiction. The real skill today is figuring out how to create space between you and your tech devices."

Adam Alter is a Professor of Marketing at NYU’s Stern School of Business and the Robert Stansky Teaching Excellence Faculty Fellow. Adam is the New York Times bestselling author of Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked, and Drunk Tank Pink, which investigates how hidden forces in the world around us shape our thoughts, feelings, and behaviors. He has written for the New York Times, the New Yorker, The Atlantic, and the Washington Post, and has been featured by a host of TV, radio, and other outlets. His next book, Anatomy of a Breakthrough, will be published in 2023.

https://adamalterauthor.com
www.penguin.co.uk/books/431386/irresistible-by-adam-alter/9781784701659
www.simonandschuster.com/books/Anatomy-of-a-Breakthrough/Adam-Alter/9781982182960
www.stern.nyu.edu/faculty/bio/adam-alter

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

06 Nov 2023 | MAX STOSSEL - Youth & Education Advisor, Center for Humane Technology, Award-winning Poet | 00:50:57

Max Stossel is an award-winning poet, filmmaker, and speaker, named by Forbes as one of the best storytellers of the year. His Stand-Up Poetry Special, Words That Move, takes the audience through a variety of different perspectives, inviting us to see the world through different eyes together. Taking on topics like heartbreak, consciousness, social media, politics, the emotional state of our world, and even how dogs probably (most certainly) talk, Max uses rhyme and rhythm to make these topics digestible and playful. Words That Move articulates the deep-seated kernels of truth that we so often struggle to find words for ourselves. Max has performed on five continents, from Lincoln Center in NY to the Hordern Pavilion in Sydney. He is also the Youth & Education Advisor for the Center for Humane Technology, an organization of former tech insiders dedicated to realigning technology with humanity’s best interests.

"Technology has very much changed the way we read and take in information and shortened it into quick bursts and attention spans. We're living in a new world, for sure. And how do we communicate in this new world? Not just in a way that gets the reach, because there are whole industries aimed at what do I do to get the most likes or the most attention, and all of that, which I don't think is very fulfilling as artists.

It's sort of a diminishing of our art form to try and play the game because then we're getting the attention and getting the hits, as opposed to what do I really want to create? How do I really want to create it? How do I want to display this? And can I do it in a way that breaks through so that if I do it my way, it's still going to get the attention, great. But if it doesn't, can I be cool with that? And can I be okay creating what I want to create, knowing that that's what it's about. It's about sharing in an honest, authentic way what I want to express without letting the tentacles of social media drip into my brain and take over why I'm literally doing the things that I'm doing."

www.wordsthatmove.com/

www.instagram.com/maxstossel/

www.humanetech.com
https://vimeo.com/690354718/54614a2318

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

06 Nov 2023 | Highlights - MAX STOSSEL - Youth & Education Advisor, Center for Humane Technology, Award-winning Poet | 00:10:45

"Technology has very much changed the way we read and take in information and shortened it into quick bursts and attention spans. We're living in a new world, for sure. And how do we communicate in this new world? Not just in a way that gets the reach, because there are whole industries aimed at what do I do to get the most likes or the most attention, and all of that, which I don't think is very fulfilling as artists.

It's sort of a diminishing of our art form to try and play the game because then we're getting the attention and getting the hits, as opposed to what do I really want to create? How do I really want to create it? How do I want to display this? And can I do it in a way that breaks through so that if I do it my way, it's still going to get the attention, great. But if it doesn't, can I be cool with that? And can I be okay creating what I want to create, knowing that that's what it's about. It's about sharing in an honest, authentic way what I want to express without letting the tentacles of social media drip into my brain and take over why I'm literally doing the things that I'm doing."

Max Stossel is an award-winning poet, filmmaker, and speaker, named by Forbes as one of the best storytellers of the year. His Stand-Up Poetry Special, Words That Move, takes the audience through a variety of different perspectives, inviting us to see the world through different eyes together. Taking on topics like heartbreak, consciousness, social media, politics, the emotional state of our world, and even how dogs probably (most certainly) talk, Max uses rhyme and rhythm to make these topics digestible and playful. Words That Move articulates the deep-seated kernels of truth that we so often struggle to find words for ourselves. Max has performed on five continents, from Lincoln Center in NY to the Hordern Pavilion in Sydney. He is also the Youth & Education Advisor for the Center for Humane Technology, an organization of former tech insiders dedicated to realigning technology with humanity’s best interests.

www.wordsthatmove.com/

www.instagram.com/maxstossel/

www.humanetech.com
https://vimeo.com/690354718/54614a2318

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

27 Oct 2023 | SIR GEOFF MULGAN - Author of Another World is Possible - Prof. Collective Intelligence, Public Policy & Social Innovation, UCL | 00:49:05

Sir Geoff Mulgan is Professor of Collective Intelligence, Public Policy and Social Innovation at University College London. Formerly he was chief executive of Nesta, and held government roles (1997–2004), including as the Prime Minister’s Strategy Unit director and as Downing Street’s head of policy. He is the founder or co-founder of many organisations, from Demos to Action for Happiness, and the author of Another World is Possible, Social Innovation: how societies find the power to change, Big Mind: how collective intelligence can change our world, and other books.  

"The great thing about a complex society is there is space for lots of different kinds of people. There's space for wildly visionary poets and accountants and actuaries and engineers. And they all have a slightly different outlook, but it's the combination of this huge diversity, which makes our societies work. But what we probably do need a bit more of are the bilingual people, the trilingual people who are as at ease spending a day, a week, a year designing how a criminal justice system could look in 50 years and then getting back to perhaps working in a real court or real lawyer's office.”

geoffmulgan.com

hurstpublishers.com/book/another-world-is-possible

www.creativeprocess.info

www.oneplanetpodcast.org

27 Oct 2023 | Highlights - SIR GEOFF MULGAN - Fmr. UK Prime Minister’s Strategy Unit Director, Downing Street’s Head of Policy | 00:11:30

"The great thing about a complex society is there is space for lots of different kinds of people. There's space for wildly visionary poets and accountants and actuaries and engineers. And they all have a slightly different outlook, but it's the combination of this huge diversity, which makes our societies work. But what we probably do need a bit more of are the bilingual people, the trilingual people who are as at ease spending a day, a week, a year designing how a criminal justice system could look in 50 years and then getting back to perhaps working in a real court or real lawyer's office.”

Sir Geoff Mulgan is Professor of Collective Intelligence, Public Policy and Social Innovation at University College London. Formerly he was chief executive of Nesta, and held government roles (1997–2004), including as the Prime Minister’s Strategy Unit director and as Downing Street’s head of policy. He is the founder or co-founder of many organisations, from Demos to Action for Happiness, and the author of Another World is Possible, Social Innovation: how societies find the power to change, Big Mind: how collective intelligence can change our world, and other books.  

geoffmulgan.com

hurstpublishers.com/book/another-world-is-possible

www.creativeprocess.info

www.oneplanetpodcast.org

27 Oct 2023 | AI & THE FUTURE OF HUMANITY | 00:06:06

What will the future look like? What are the risks and opportunities of AI? What role can we play in designing the future we want to live in?

Voices of philosophers, futurists, AI experts, science fiction authors, activists, and lawyers reflecting on AI, technology, and the Future of Humanity. All voices in this episode are from our interviews for The Creative Process & One Planet Podcast.

Voices on this episode are:

DR. SUSAN SCHNEIDER
American philosopher and artificial intelligence expert. She is the founding director of the Center for the Future Mind at Florida Atlantic University. Author of Artificial You: AI and the Future of Your Mind, Science Fiction and Philosophy: From Time Travel to Superintelligence, and The Blackwell Companion to Consciousness.
www.fau.edu/artsandletters/philosophy/susan-schneider/index

NICK BOSTROM
Founder and Director of the Future of Humanity Institute, University of Oxford, Philosopher, Author of NYTimes Bestseller Superintelligence: Paths, Dangers, Strategies. Bostrom’s academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy’s Top 100 Global Thinkers list twice and was included in Prospect’s World Thinkers list, the youngest person in the top 15.
https://nickbostrom.com
https://www.fhi.ox.ac.uk

BRIAN DAVID JOHNSON

Futurist in Residence at Arizona State University’s Center for Science and the Imagination, a professor in the School for the Future of Innovation in Society, and the Director of the ASU Threatcasting Lab. He is the author of The Future You: How to Create the Life You Always Wanted; Science Fiction Prototyping: Designing the Future with Science Fiction; 21st Century Robot: The Dr. Simon Egerton Stories; Humanity in the Machine: What Comes After Greed?; and Screen Future: The Future of Entertainment, Computing, and the Devices We Love.

https://csi.asu.edu/people/brian-david-johnson

DEAN SPADE
Professor at SeattleU’s School of Law, Author of Mutual Aid, Building Solidarity During This Crisis (and the Next), and Normal Life: Administrative Violence, Critical Trans Politics, and the Limits of Law.

www.deanspade.net

ALLEN STEELE
Science fiction author of the Coyote Trilogy, Arkwright, and other books. He has been awarded a number of Hugo, Asimov's Readers, and Locus Awards. He is a former member of the Board of Directors and Board of Advisors for the Science Fiction and Fantasy Writers of America. He has also served as an advisor for the Space Frontier Foundation. In 2001, he testified before the Subcommittee on Space and Aeronautics of the U.S. House of Representatives in hearings regarding space exploration in the 21st century.

www.allensteele.com

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

30 Oct 2023 | SUSAN SCHNEIDER - Director, Center for the Future Mind, FAU, Fmr. NASA Chair at NASA | 00:34:27

Will AI become conscious? President Biden has just unveiled a new executive order on AI — the U.S. government’s first action of its kind — requiring new safety assessments, equity and civil rights guidance, and research on AI’s impact on the labor market. With this governance in place, can tech companies be counted on to do the right thing for humanity? 

Susan Schneider is a philosopher, artificial intelligence expert, and founding director of the Center for the Future Mind at Florida Atlantic University. She is author of Artificial You: AI and the Future of Your Mind, Science Fiction and Philosophy: From Time Travel to Superintelligence, and The Blackwell Companion to Consciousness. She held the NASA Chair with NASA and the Distinguished Scholar Chair at the Library of Congress. She is now working on projects related to advancements in AI policy and technology, drawing from neuroscience research and philosophical developments and writing a new book on the shape of intelligent systems.

“So it's hard to tell exactly what the dangers are, but that's certainly one thing that we need to track that beings that are vastly intellectually superior to other beings may not respect the weaker beings, given our own past. It's really hard to tell exactly what will happen. The first concern I have is with surveillance capitalism in this country. The constant surveillance of us because the US is a surveillance capitalist economy, and it's the same elsewhere in the world, right? With Facebook and all these social media companies, things have just been going deeply wrong. And so it leads me to worry about how the future is going to play out. These tech companies aren't going to be doing the right thing for humanity. And this gets to my second worry, which is how's all this going to work for humans exactly? It's not clear where humans will even be needed in the future.”

www.fau.edu/artsandletters/philosophy/susan-schneider/index
www.fau.edu/future-mind/

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

31 Oct 2023 | Highlights - SUSAN SCHNEIDER - Author of Artificial You: AI and the Future of Your Mind, Fmr. Distinguished Scholar, US Library of Congress | 00:13:08

“So it's hard to tell exactly what the dangers are, but that's certainly one thing that we need to track that beings that are vastly intellectually superior to other beings may not respect the weaker beings, given our own past. It's really hard to tell exactly what will happen. The first concern I have is with surveillance capitalism in this country. The constant surveillance of us because the US is a surveillance capitalist economy, and it's the same elsewhere in the world, right? With Facebook and all these social media companies, things have just been going deeply wrong. And so it leads me to worry about how the future is going to play out. These tech companies aren't going to be doing the right thing for humanity. And this gets to my second worry, which is how's all this going to work for humans exactly? It's not clear where humans will even be needed in the future.”

Will AI become conscious? President Biden has just unveiled a new executive order on AI — the U.S. government’s first action of its kind — requiring new safety assessments, equity and civil rights guidance, and research on AI’s impact on the labor market. With this governance in place, can tech companies be counted on to do the right thing for humanity? 

Susan Schneider is a philosopher, artificial intelligence expert, and founding director of the Center for the Future Mind at Florida Atlantic University. She is author of Artificial You: AI and the Future of Your Mind, Science Fiction and Philosophy: From Time Travel to Superintelligence, and The Blackwell Companion to Consciousness. She held the NASA Chair with NASA and the Distinguished Scholar Chair at the Library of Congress. She is now working on projects related to advancements in AI policy and technology, drawing from neuroscience research and philosophical developments and writing a new book on the shape of intelligent systems.

www.fau.edu/artsandletters/philosophy/susan-schneider/index
www.fau.edu/future-mind/

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

03 Nov 2023 | BRIAN DAVID JOHNSON - Author of The Future You: How to Create the Life You Always Wanted - Futurist in Residence, ASU’s Center for Science & the Imagination | 00:47:16

Brian David Johnson is Futurist in Residence at Arizona State University’s Center for Science and the Imagination, a professor in the School for the Future of Innovation in Society, and the Director of the ASU Threatcasting Lab. He is the author of The Future You: How to Create the Life You Always Wanted; Science Fiction Prototyping: Designing the Future with Science Fiction; 21st Century Robot: The Dr. Simon Egerton Stories; Humanity in the Machine: What Comes After Greed?; and Screen Future: The Future of Entertainment, Computing, and the Devices We Love.

"I think, oftentimes, what'll happen as a trap when we talk about technology. People say, 'Well, what do you think is the future of artificial intelligence? Or what is the future of neural interfaces? Or what is the future of this?' And I always pause them and say, 'Wait a minute. If you're just talking about the technology, you're having the wrong conversation because it's not about the technology.'

So when people talk about what's the future of AI? I say, I don't know. What do we want the future of AI to be? And I think that's a shift that sounds quite subtle to some people, but it's really important because if you look at any piece of news or anything like that, they talk about AI as if it was a thing that was fully formed, that sprang out of the Earth and is now walking around doing things. And what will AI do in the future and how will it affect our jobs? It's not AI that's doing it. These are people. These are companies. These are organizations that are doing it. And that's where we need to keep our focus. What are those organizations doing. And also what do we want from it as humans?"

https://csi.asu.edu/people/brian-david-johnson/

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

04 Nov 2023 | Highlights - BRIAN DAVID JOHNSON - Director of the ASU Threatcasting Lab - Author of The Future You | 00:11:00

"I think, oftentimes, what'll happen as a trap when we talk about technology. People say, 'Well, what do you think is the future of artificial intelligence? Or what is the future of neural interfaces? Or what is the future of this?' And I always pause them and say, 'Wait a minute. If you're just talking about the technology, you're having the wrong conversation because it's not about the technology.'

So when people talk about what's the future of AI? I say, I don't know. What do we want the future of AI to be? And I think that's a shift that sounds quite subtle to some people, but it's really important because if you look at any piece of news or anything like that, they talk about AI as if it was a thing that was fully formed, that sprang out of the Earth and is now walking around doing things. And what will AI do in the future and how will it affect our jobs? It's not AI that's doing it. These are people. These are companies. These are organizations that are doing it. And that's where we need to keep our focus. What are those organizations doing. And also what do we want from it as humans?"

Brian David Johnson is Futurist in Residence at Arizona State University’s Center for Science and the Imagination, a professor in the School for the Future of Innovation in Society, and the Director of the ASU Threatcasting Lab. He is the author of The Future You: How to Create the Life You Always Wanted; Science Fiction Prototyping: Designing the Future with Science Fiction; 21st Century Robot: The Dr. Simon Egerton Stories; Humanity in the Machine: What Comes After Greed?; and Screen Future: The Future of Entertainment, Computing, and the Devices We Love.

https://csi.asu.edu/people/brian-david-johnson/

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

10 Nov 2023 | How is AI Changing Education, Work & the Way We Learn? - MICHAEL S. ROTH, President of Wesleyan University | 00:08:12

What is the purpose of education? How are we educating students for the future? What is the importance of the humanities in this age of AI and the rapidly changing workplace?

Michael S. Roth is President of Wesleyan University. His books include Beyond the University: Why Liberal Education Matters and Safe Enough Spaces: A Pragmatist’s Approach to Inclusion, Free Speech, and Political Correctness on College Campuses. He's been a Professor of History and the Humanities since 1983, was the Founding Director of the Scripps College Humanities Institute, and was the Associate Director of the Getty Research Institute. His scholarly interests center on how people make sense of the past, and he has authored eight books around this topic, including his latest, The Student: A Short History.

https://www.wesleyan.edu/academics/faculty/mroth/profile.html

https://yalebooks.yale.edu/book/9780300250039/the-student/

www.wesleyan.edu
https://twitter.com/mroth78

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

17 Nov 2023 | LIAD MUDRIK - Neuroscientist - Principal Investigator Liad Mudrik Lab, Tel Aviv University | 00:43:09

How we think, feel, and experience the world is a mystery. What distinguishes our consciousness from AI and machine learning?

Liad Mudrik studies high level cognition and its neural substrates, focusing on conscious experience. She teaches at the School of Psychological Sciences at Tel Aviv University. At her research lab, her team is currently investigating the functionality of consciousness, trying to unravel the depth and limits of unconscious processing, and also researching the ways semantic relations between concepts and objects are formed and detected.

"Even when I send a query to chat GPT. I always say, 'Hi, can I please ask you something?' And when it replies, I say, 'Thank you.' As if I am kind of treating it as a person who cares about whether I say hi or thank you, although I don't think that it does. I had the privilege to be a part of this group, an interdisciplinary group of philosophers, neuroscientists, and computer scientists. 'Thank It' was led by Patrick Battling and Robert Long, and we met and discussed and corresponded over the possibility of consciousness in AI. We, the field of consciousness studies, relying on theories of consciousness and asking in humans, what are the critical functions that have been ascribed by these theories to conscious processing?

So now we can say, give me an AI system. Let me check if it has the indicators that we, in this case, our group has put together as critical for consciousness. If it does have all these factors, all these indicators, I would say that there is at least a good chance that it is either conscious or can develop consciousness. And with that exercise, current AI systems might have 1, 2, 3 indicators out of the 14 that we came up with, but not all of them. It doesn't mean that they cannot have all of them. We didn't find any substantial barrier to coming up with such systems, but currently, they don't. And so I think that although it's very tempting to think about GPT as conscious, it sounds sometimes like a human being, I think that it doesn't have the ability to experience. It can do amazing things. Is there anyone home? so to speak. Is anyone experiencing or, qualitatively, again, for the lack of a better word, experiencing the world? I don't think so. I don't think we have any indication of that."

https://people.socsci.tau.ac.il/mu/mudriklab
https://people.socsci.tau.ac.il/mu/mudriklab/people/#gkit-popup

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

29 Nov 2023 | HOWARD GARDNER - Author of A Synthesizing Mind & Frames of Mind: The Theory of Multiple Intelligences - Co-director of The Good Project | 00:50:09

How do we define intelligence? What is the point of creativity and intelligence if we are not creating good in the world? In this age of AI, what is the importance of a synthesizing mind?

Howard Gardner, Research Professor of Cognition and Education at the Harvard Graduate School of Education, an author of over 30 books, translated into 32 languages, and several hundred articles, is best known for his theory of multiple intelligences, a critique of the notion that there exists but a single human intelligence that can be assessed by standard psychometric instruments. He has twice been selected by Foreign Policy and Prospect magazines as one of the 100 most influential public intellectuals in the world. In the last few years, Gardner has been studying the nature of human synthesizing, a topic introduced in his 2020 memoir, A Synthesizing Mind.

For 28 years, with David Perkins, he was Co-Director of Harvard Project Zero, and in more recent years has served in a variety of leadership positions. Since the middle 1990s, Gardner has directed The Good Project, a group of initiatives, founded in collaboration with psychologists Mihaly Csikszentmihalyi and William Damon. The project promotes excellence, engagement, and ethics in education, preparing students to become good workers and good citizens who contribute to the overall well-being of society. Through research-based concepts, frameworks, and resources, The Good Project seeks to help students reflect upon the ethical dilemmas that arise in everyday life and give them the tools to make thoughtful decisions.

“The word engagement doesn't mean anything when you're talking about computational systems. They aren't asked whether they like what they're doing or not, they just do it. But the issue of ethics is very difficult and very complicated. I touched on it earlier. If you're trying to decide what to do in a complicated economics matter, in a complicated military matter, do you leave the decision to the computational system? Or do you have human beings make it alone or in groups? My guess would be you should find out what various computational systems would recommend, but the final decisions shouldn't be a majority vote among ChatGPTs. It should be human beings evaluating what these different systems recommend and then living with the consequences of human-made decisions. I don't want a decision about whether to have a nuclear weapon shot off to be made by ChatGPT. I would like to think of rational leaders consulting with one another and being very cautious about life-and-death decisions. There are things which large language instruments could recommend which would destroy the planet, but they don't care. It's not their planet.”

www.howardgardner.com
http://thegoodproject.org
https://mitpress.mit.edu/9780262542838/a-synthesizing-mind

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

18 Nov 2023 | What distinguishes our consciousness from AI & machine learning? Highlights: LIAD MUDRIK - Neuroscientist, Tel Aviv University | 00:14:57

"Even when I send a query to chat GPT. I always say, 'Hi, can I please ask you something?' And when it replies, I say, 'Thank you.' As if I am kind of treating it as a person who cares about whether I say hi or thank you, although I don't think that it does. I had the privilege to be a part of this group, an interdisciplinary group of philosophers, neuroscientists, and computer scientists. 'Thank It' was led by Patrick Battling and Robert Long, and we met and discussed and corresponded over the possibility of consciousness in AI. We, the field of consciousness studies, relying on theories of consciousness and asking in humans, what are the critical functions that have been ascribed by these theories to conscious processing?

So now we can say, give me an AI system. Let me check if it has the indicators that we, in this case, our group has put together as critical for consciousness. If it does have all these factors, all these indicators, I would say that there is at least a good chance that it is either conscious or can develop consciousness. And with that exercise, current AI systems might have 1, 2, 3 indicators out of the 14 that we came up with, but not all of them. It doesn't mean that they cannot have all of them. We didn't find any substantial barrier to coming up with such systems, but currently, they don't. And so I think that although it's very tempting to think about GPT as conscious, it sounds sometimes like a human being, I think that it doesn't have the ability to experience. It can do amazing things. Is there anyone home? so to speak. Is anyone experiencing or, qualitatively, again, for the lack of a better word, experiencing the world? I don't think so. I don't think we have any indication of that."

How we think, feel, and experience the world is a mystery. What distinguishes our consciousness from AI and machine learning?

Liad Mudrik studies high level cognition and its neural substrates, focusing on conscious experience. She teaches at the School of Psychological Sciences at Tel Aviv University. At her research lab, her team is currently investigating the functionality of consciousness, trying to unravel the depth and limits of unconscious processing, and also researching the ways semantic relations between concepts and objects are formed and detected.

https://people.socsci.tau.ac.il/mu/mudriklab
https://people.socsci.tau.ac.il/mu/mudriklab/people/#gkit-popup

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

12 Dec 2023 | MAX BENNETT - Author of A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains - CEO of Alby | 00:55:39

The more the science of intelligence (both human and artificial) advances, the more it holds the potential for great benefits and dangers to society.

Max Bennett is the cofounder and CEO of Alby, a start-up that helps companies integrate large language models into their websites to create guided shopping and search experiences. Previously, Bennett was the cofounder and chief product officer of Bluecore, one of the fastest growing companies in the U.S., providing AI technologies to some of the largest companies in the world. Bluecore has been featured in the annual Inc. 500 fastest growing companies, as well as Glassdoor’s 50 best places to work in the U.S. Bluecore was recently valued at over $1 billion. Bennett holds several patents for AI technologies and has published numerous scientific papers in peer-reviewed journals on the topics of evolutionary neuroscience and the neocortex. He has been featured on the Forbes 30 Under 30 list as well as the Built In NYC’s 30 Tech Leaders Under 30. He is the author of A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains.

"So, modern neuroscientists are questioning if there really is one consistent limbic system. But usually when we're looking at the limbic system, we're thinking about things like emotion, volition, and goals. And those types of things, I would argue reinforcement learning algorithms, at least on a primitive level, we already have because the way that we get them to achieve goals like play a game of go and win is we give them a reward signal or a reward function. And then we let them self-play and teach themselves based on maximizing that reward. But that doesn't mean that they're self-aware, doesn't mean that they're experiencing anything at all. There's a fascinating set of questions in the AI community around what's called the reward hypothesis, which is how much of intelligent behavior can be understood through the lens of just trying to optimize a reward signal. We are more than just trying to optimize reward signals. We do things to try and reinforce our own identities. We do things to try and understand ourselves. These are attributes that are hard to explain from a simple reward signal, but do make sense. And other conceptions of intelligence like Karl Friston's active inference where we build a model of ourselves and try and reinforce that model."

www.abriefhistoryofintelligence.com/
www.alby.com
www.bluecore.com

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

15 Dec 2023 | WENDY WONG - Author of We, the Data: Human Rights in the Digital Age | 00:53:44

Does privacy exist anymore? Or are humans just sets of data to be traded and sold?

Wendy H. Wong is Professor of Political Science and Principal's Research Chair at the University of British Columbia, Okanagan. She is the author of two award-winning books: Internal Affairs: How the Structure of NGOs Transforms Human Rights and (with Sarah S. Stroup) The Authority Trap: Strategic Choices of International NGOs. Her latest book is We, the Data: Human Rights in the Digital Age.

"Meta reaches between three and four billion people every day through their platforms, right? That's way more people than any government legitimately can claim to govern. And yet this one company with four major platforms that many of us use is able to reach so many people and make decisions about content and access that have real consequences. It's been shown they fueled genocide in multiple places like in Ethiopia and Myanmar. And I think that's exactly why human rights matter because human rights are obligations that states have signed on for, and they're supposed to protect human values. And I think from a human rights perspective, it's important to argue that we shouldn't be collecting certain types of data because it's excessive. It's violating autonomy. It starts violating dignity. And when you start violating autonomy and dignity through the collection of data, you can't just go back and fix that by making it private.”

www.wendyhwong.com
https://mitpress.mit.edu/author/wendy-h-wong-38397

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

15 Dec 2023 | Does privacy exist anymore? Or are humans just sets of data to be traded and sold? - Highlights - WENDY WONG | 00:11:43

"Meta reaches between three and four billion people every day through their platforms, right? That's way more people than any government legitimately can claim to govern. And yet this one company with four major platforms that many of us use is able to reach so many people and make decisions about content and access that have real consequences. It's been shown they fueled genocide in multiple places like in Ethiopia and Myanmar. And I think that's exactly why human rights matter because human rights are obligations that states have signed on for, and they're supposed to protect human values. And I think from a human rights perspective, it's important to argue that we shouldn't be collecting certain types of data because it's excessive. It's violating autonomy. It starts violating dignity. And when you start violating autonomy and dignity through the collection of data, you can't just go back and fix that by making it private.”

Does privacy exist anymore? Or are humans just sets of data to be traded and sold?

Wendy H. Wong is Professor of Political Science and Principal's Research Chair at the University of British Columbia, Okanagan. She is the author of two award-winning books: Internal Affairs: How the Structure of NGOs Transforms Human Rights and (with Sarah S. Stroup) The Authority Trap: Strategic Choices of International NGOs. Her latest book is We, the Data: Human Rights in the Digital Age.

www.wendyhwong.com
https://mitpress.mit.edu/author/wendy-h-wong-38397

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

27 Dec 2023 | MELVIN VOPSON - Physicist - Author of Reality Reloaded: The Scientific Case for a Simulated Universe | 00:42:24

Are we living in a Simulated Universe? How will AI impact the future of work, society & education?

Dr. Melvin M. Vopson is Associate Professor of Physics at the University of Portsmouth, Fellow of the Higher Education Academy, Chartered Physicist and Fellow of the Institute of Physics. He is the co-founder and CEO of the Information Physics Institute, editor-in-chief of the IPI Letters and Emerging Minds Journal for Student Research. He is the author of Reality Reloaded: The Scientific Case for a Simulated Universe. Dr. Vopson has a wide-ranging scientific expertise in experimental, applied and theoretical physics that is internationally recognized. He has published over 100 research articles, achieving over 2500 citations.

"We are at a crossroads, a paradigm shift with the emergence of artificial intelligence which is going to transform our planet and mankind that is not even anticipated by the people who created AI technology. There are some signs that AI appears to be sentient, and soon it will surpass human brain and mind capacity. So, if you want, we are the creators of a new species. AI is based on silicon and not carbon like we humans are. This is a very interesting aspect. It is a new life form. And you can look at the definition of what it means to have consciousness or something similar. We are very fragile as a species. Could it be that the silicon-based life form is actually something more advanced than the biological carbon-based life form? Could it be that we are at the point where we are creating a life form that may be - by blending biological with this cybernetic entity that we're creating now - creating a post-human? Almost a new form of life that blends biological with machines and silicon technologies and gives us two things? One infinite intelligence that will be exponentially much more powerful in terms of our capacity to communicate, interact, and access information that will give us immortality? You will, just like I take my car to the garage and change parts when they break down, and I can drive this car for unlimited time, as long as I keep changing the parts and servicing it.

The same could be a life form that is not entirely based on carbon, but is some kind of blended machine, biological, post-human type of entity. I see this as a natural evolution because it will make us stronger. If we can preserve all our qualities that we experience and enjoy in our life today, but we make them by merging ourselves with this new thing that we're creating, it could make us a more advanced form of life form, if you want.

So we are the creators, but this is a process of evolution as well. We are evolving to something much more advanced through our own creation. So, there is a circle that feeds into its creation and evolution. They feed into itself, and they are part of the same supply chain circle, if you want.

So it's interesting, and I believe that both are true and both are working hand in hand. To produce what we see around us, the entire universe and life forms and everything, there is some kind of interesting way of creation followed by evolution, and they feed into each other. We are there at that point now in our human history. We're creating a new life form.

This AI will change the world as we know it in ways that are not even anticipated, but we can't stop it because it's a natural evolution of humans to something more powerful than biological life."

https://www.port.ac.uk/about-us/structure-and-governance/our-people/our-staff/melvin-vopson

https://ipipublishing.org/index.php/ipil/RR

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

27 Dec 2023 | Are we living in a Simulated Universe? - Highlights - MELVIN VOPSON | 00:09:19

"We are at a crossroads, a paradigm shift with the emergence of artificial intelligence which is going to transform our planet and mankind that is not even anticipated by the people who created AI technology. There are some signs that AI appears to be sentient, and soon it will surpass human brain and mind capacity. So, if you want, we are the creators of a new species. AI is based on silicon and not carbon like we humans are. This is a very interesting aspect. It is a new life form. And you can look at the definition of what it means to have consciousness or something similar. We are very fragile as a species. Could it be that the silicon-based life form is actually something more advanced than the biological carbon-based life form? Could it be that we are at the point where we are creating a life form that may be - by blending biological with this cybernetic entity that we're creating now - creating a post-human? Almost a new form of life that blends biological with machines and silicon technologies and gives us two things? One infinite intelligence that will be exponentially much more powerful in terms of our capacity to communicate, interact, and access information that will give us immortality? You will, just like I take my car to the garage and change parts when they break down, and I can drive this car for unlimited time, as long as I keep changing the parts and servicing it.

The same could be a life form that is not entirely based on carbon, but is some kind of blended machine, biological, post-human type of entity. I see this as a natural evolution because it will make us stronger. If we can preserve all our qualities that we experience and enjoy in our life today, but we make them by merging ourselves with this new thing that we're creating, it could make us a more advanced form of life form, if you want.

So we are the creators, but this is a process of evolution as well. We are evolving to something much more advanced through our own creation. So, there is a circle that feeds into its creation and evolution. They feed into itself, and they are part of the same supply chain circle, if you want.

So it's interesting, and I believe that both are true and both are working hand in hand. To produce what we see around us, the entire universe and life forms and everything, there is some kind of interesting way of creation followed by evolution, and they feed into each other. We are there at that point now in our human history. We're creating a new life form.

This AI will change the world as we know it in ways that are not even anticipated, but we can't stop it because it's a natural evolution of humans to something more powerful than biological life."

Dr. Melvin M. Vopson is Associate Professor of Physics at the University of Portsmouth, a Fellow of the Higher Education Academy, a Chartered Physicist, and a Fellow of the Institute of Physics. He is the co-founder and CEO of the Information Physics Institute and editor-in-chief of IPI Letters and the Emerging Minds Journal for Student Research. He is the author of Reality Reloaded: The Scientific Case for a Simulated Universe. Dr. Vopson has wide-ranging, internationally recognized expertise in experimental, applied, and theoretical physics and has published over 100 research articles, with more than 2,500 citations.

https://www.port.ac.uk/about-us/structure-and-governance/our-people/our-staff/melvin-vopson

https://ipipublishing.org/index.php/ipil/RR

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

29 Dec 2023 | TIOKASIN GHOSTHORSE - Founder/Host of First Voices Radio - Founder of Akantu Intelligence | 00:51:19

How can we learn to speak the language of the Earth and cultivate our intuitive intelligence?

Tiokasin Ghosthorse is a member of the Cheyenne River Lakota Nation of South Dakota and has a long history of Indigenous activism and advocacy. For the last 31 years, Tiokasin has been the Founder, Host, and Executive Producer of “First Voices Radio” (formerly “First Voices Indigenous Radio”) in New York City and Seattle/Olympia, Washington. In 2016, he received a nomination for the Nobel Peace Prize from the International Institute of Peace Studies and Global Philosophy. Other recent recognitions include: Native Arts and Cultures Foundation National Fellowship in Music (2016), National Endowment for the Arts National Heritage Fellowship Nominee (2017), Indigenous Music Award Nominee for Best Instrumental Album (2019), and National Native American Hall of Fame Nominee (2018, 2019). He was also recently nominated for the 2020 Americans for the Arts Johnson Fellowship for Artists Transforming Communities. He is the Founder of Akantu Intelligence.

"So we get to a certain stage in Western society, I'd never call it a culture, but a society trying to figure out its birth and how to become mature. Whatever it's doing it has slowed down natural relationships. It took us out of the land, put us into factories, put us into institutions where you can learn a trade. It kept giving you jobs that had nothing to do with Earth. And so if you're living, you're working in this box called a factory, and the farmers out there are becoming less and less. Even the farming, the ideas of farming are foreign. And I think that when the technical language came out, we dropped another natural umbilical cord to and with Earth. And so we severed that relationship. So you can see this gradual severing of relationships to Earth with Earth, that now we have to have retreats to learn empathy again. We do all these Westernized versions of piecing ourselves back together and as Indigenous folks where we're getting that way now, but a lot of traditional people don't need that. We don't need environmental movements. You know, Wild Earth is a foreign concept. There are a lot of words that organizations use to rationalize why we need to teach how to be human beings. So you see technology, the Industrial Machine Age taught us this language of disconnection, taught us things like plug-in, get connected. You know, all these words that came along to fill that information that could be controlled by authority now in the Western process. John Gatto, who won the New York State Teacher of the Year award in 2008, upon his retirement, specifically said, 'It takes 12 years to learn how to become reflexive to authority.' And who is the authority? Who is controlling information? Who's controlling education? Who's controlling knowledge? And now they want to control Wisdom, and all wisdom means is common sense.”

https://firstvoicesindigenousradio.org/
https://akantuintelligence.org

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Songs featured on this episode are “Butterfly Against the Wind”
And from the album Somewhere In There
“Spatial Moon” and “Sunrise Moon”
Composed by Tiokasin Ghosthorse and Alex Alexander
Music on this episode is courtesy of Tiokasin Ghosthorse.

29 Dec 2023 | How can we learn to speak the language of the Earth? - Highlights - TIOKASIN GHOSTHORSE | 00:13:51

"So we get to a certain stage in Western society, I'd never call it a culture, but a society trying to figure out its birth and how to become mature. Whatever it's doing it has slowed down natural relationships. It took us out of the land, put us into factories, put us into institutions where you can learn a trade. It kept giving you jobs that had nothing to do with Earth. And so if you're living, you're working in this box called a factory, and the farmers out there are becoming less and less. Even the farming, the ideas of farming are foreign. And I think that when the technical language came out, we dropped another natural umbilical cord to and with Earth. And so we severed that relationship. So you can see this gradual severing of relationships to Earth with Earth, that now we have to have retreats to learn empathy again. We do all these Westernized versions of piecing ourselves back together and as Indigenous folks where we're getting that way now, but a lot of traditional people don't need that. We don't need environmental movements. You know, Wild Earth is a foreign concept. There are a lot of words that organizations use to rationalize why we need to teach how to be human beings. So you see technology, the Industrial Machine Age taught us this language of disconnection, taught us things like plug-in, get connected. You know, all these words that came along to fill that information that could be controlled by authority now in the Western process. John Gatto, who won the New York State Teacher of the Year award in 2008, upon his retirement, specifically said, 'It takes 12 years to learn how to become reflexive to authority.' And who is the authority? Who is controlling information? Who's controlling education? Who's controlling knowledge? And now they want to control Wisdom, and all wisdom means is common sense.”

Tiokasin Ghosthorse is a member of the Cheyenne River Lakota Nation of South Dakota and has a long history of Indigenous activism and advocacy. For the last 31 years, Tiokasin has been the Founder, Host, and Executive Producer of “First Voices Radio” (formerly “First Voices Indigenous Radio”) in New York City and Seattle/Olympia, Washington. In 2016, he received a nomination for the Nobel Peace Prize from the International Institute of Peace Studies and Global Philosophy. Other recent recognitions include: Native Arts and Cultures Foundation National Fellowship in Music (2016), National Endowment for the Arts National Heritage Fellowship Nominee (2017), Indigenous Music Award Nominee for Best Instrumental Album (2019), and National Native American Hall of Fame Nominee (2018, 2019). He was also recently nominated for the 2020 Americans for the Arts Johnson Fellowship for Artists Transforming Communities. He is the Founder of Akantu Intelligence.

https://firstvoicesindigenousradio.org/
https://akantuintelligence.org

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Songs featured on this episode are “Butterfly Against the Wind”
And from the album Somewhere In There
“Spatial Moon” and “Sunrise Moon”
Composed by Tiokasin Ghosthorse and Alex Alexander
Music on this episode is courtesy of Tiokasin Ghosthorse.

05 Jan 2024 | RUPERT SHELDRAKE - Biologist & Author of The Science Delusion, The Presence of the Past | 00:49:30

How do we navigate ambiguity and uncertainty? Moving beyond linear thinking into instinct and intuition, we might discover other sources within ourselves that lie beyond the boundaries of science and reason.

Rupert Sheldrake is a biologist and author best known for his hypothesis of morphic resonance. His many books include The Science Delusion, The Presence of the Past, and Ways to Go Beyond and Why They Work. At Cambridge University, Dr. Sheldrake worked in developmental biology as a fellow of Clare College. From 2005 to 2010,  he was director of the Perrott Warrick Project for research on unexplained human and animal abilities, funded by Trinity College Cambridge. He was among the top 100 global thought leaders for 2013, as ranked by the Duttweiler Institute.

"The idea that the laws of nature are fixed is taken for granted by almost all scientists and within physics, within cosmology, it leads to an enormous realm of speculation, which I think is totally unnecessary. We're assuming the laws of nature are fixed. Most of science assumes this, but is it really so in an evolving universe? Why shouldn't the laws evolve? And if we think about that, then we realize that actually, the whole idea of a law of nature is a metaphor. It's based on human laws. I mean, after all, dogs and cats don't obey laws. And in tribes, they don't even have laws. They have customs. So it's only in civilized societies that you have laws.

And then if we think through that metaphor, then actually the laws do change.

All artists are influenced by other artists and by things in the collective culture, and I think that morphic resonance as collective memory would say that all of us draw unconsciously as well as consciously on a collective memory and all animals draw on a collective memory of their kind as well. We don't know where it comes from, but there's true creativity involved in evolution, both human and natural."

www.sheldrake.org

www.amazon.com/Science-Delusion/dp/1529393221/?tag=sheldrake-20

www.amazon.com/Science-Set-Free-Paths-Discovery/dp/0770436722/?tag=sheldrake-20

05 Jan 2024 | Highlights - How do we navigate ambiguity, uncertainty & move beyond linear thinking? - RUPERT SHELDRAKE | 00:15:35

"The idea that the laws of nature are fixed is taken for granted by almost all scientists and within physics, within cosmology, it leads to an enormous realm of speculation, which I think is totally unnecessary. We're assuming the laws of nature are fixed. Most of science assumes this, but is it really so in an evolving universe? Why shouldn't the laws evolve? And if we think about that, then we realize that actually, the whole idea of a law of nature is a metaphor. It's based on human laws. I mean, after all, dogs and cats don't obey laws. And in tribes, they don't even have laws. They have customs. So it's only in civilized societies that you have laws. And then if we think through that metaphor, then actually the laws do change.

All artists are influenced by other artists and by things in the collective culture, and I think that morphic resonance as collective memory would say that all of us draw unconsciously as well as consciously on a collective memory and all animals draw on a collective memory of their kind as well. We don't know where it comes from, but there's true creativity involved in evolution, both human and natural."

How do we navigate ambiguity and uncertainty? Moving beyond linear thinking into instinct and intuition, we might discover other sources within ourselves that lie beyond the boundaries of science and reason.

Rupert Sheldrake is a biologist and author best known for his hypothesis of morphic resonance. His many books include The Science Delusion, The Presence of the Past, and Ways to Go Beyond and Why They Work. At Cambridge University, Dr. Sheldrake worked in developmental biology as a fellow of Clare College. From 2005 to 2010,  he was director of the Perrott Warrick Project for research on unexplained human and animal abilities, funded by Trinity College Cambridge. He was among the top 100 global thought leaders for 2013, as ranked by the Duttweiler Institute.

www.sheldrake.org

www.amazon.com/Science-Delusion/dp/1529393221/?tag=sheldrake-20

www.amazon.com/Science-Set-Free-Paths-Discovery/dp/0770436722/?tag=sheldrake-20

12 Jan 2024 | DR. BARRY SCHWARTZ - Author of The Paradox of Choice & Why We Work | 00:45:36

Does having too many choices make us unhappy? How can we learn practical wisdom?

Dr. Barry Schwartz is the Dorwin P. Cartwright Professor Emeritus of Social Theory and Social Action in the psychology department at Swarthmore College. He is the author of many books, including Why We Work, The Paradox of Choice: Why More Is Less, and co-author of Practical Wisdom: The Right Way to Do the Right Thing.

"I have very mixed feelings about AI, and I think its future and our future with it is very much up for grabs. And here's the reason why. At the moment, these extraordinary achievements like ChatGPT, I mean literally mind-boggling achievements, are completely indifferent to truth. They crawl around in the web and learn how words go together, and so they produce coherent meaningful strings of words, sentences, and paragraphs that you're astonished could have been produced by a machine. However, there are no filters that weed out the false concatenations of words from the true ones. And so you get something that's totally believable, and totally plausible, and totally grammatical. But is it true? And if AI continues to move in this direction, getting more and more sophisticated as a mock human, and continuing to be indifferent to truth, the problems that we started our conversation with are only going to get worse."

www.swarthmore.edu/profile/barry-schwartz
www.simonandschuster.com/books/Why-We-Work/Barry-Schwartz/TED-Books/9781476784861
https://www.harpercollins.com/products/the-paradox-of-choice-barry-schwartz?variant=32207920234530
https://www.penguinrandomhouse.com/books/307231/practical-wisdom-by-barry-schwartz-and-kenneth-sharpe

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Photo credit: Bill Holsinger-Robinson - CC BY 2.0

12 Jan 2024 | Does having too many choices make us unhappy? - Highlights - DR. BARRY SCHWARTZ | 00:12:26

"I have very mixed feelings about AI, and I think its future and our future with it is very much up for grabs. And here's the reason why. At the moment, these extraordinary achievements like ChatGPT, I mean literally mind-boggling achievements, are completely indifferent to truth. They crawl around in the web and learn how words go together, and so they produce coherent meaningful strings of words, sentences, and paragraphs that you're astonished could have been produced by a machine. However, there are no filters that weed out the false concatenations of words from the true ones. And so you get something that's totally believable, and totally plausible, and totally grammatical. But is it true? And if AI continues to move in this direction, getting more and more sophisticated as a mock human, and continuing to be indifferent to truth, the problems that we started our conversation with are only going to get worse."

Does having too many choices make us unhappy? How can we learn practical wisdom?

Dr. Barry Schwartz is the Dorwin P. Cartwright Professor Emeritus of Social Theory and Social Action in the psychology department at Swarthmore College. He is the author of many books, including Why We Work, The Paradox of Choice: Why More Is Less, and co-author of Practical Wisdom: The Right Way to Do the Right Thing.

www.swarthmore.edu/profile/barry-schwartz
www.simonandschuster.com/books/Why-We-Work/Barry-Schwartz/TED-Books/9781476784861
https://www.harpercollins.com/products/the-paradox-of-choice-barry-schwartz?variant=32207920234530
https://www.penguinrandomhouse.com/books/307231/practical-wisdom-by-barry-schwartz-and-kenneth-sharpe

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Photo credit: Bill Holsinger-Robinson - CC BY 2.0

17 Jan 2024 | THOMAS CROWTHER - Ecologist - Co-chair of the Board for UN Decade on Ecosystem Restoration - Founder of Restor | 00:43:33

Although they comprise less than 5% of the world population, Indigenous peoples protect 80% of the Earth’s biodiversity. How can we support farmers, reverse biodiversity loss, and restore our ecosystems?

Thomas Crowther is an ecologist studying the connections between biodiversity and climate change. He is a professor in the Department of Environmental Systems Science at ETH Zurich, chair of the advisory council for the United Nations Decade on Ecosystem Restoration, and founder of Restor, an online platform for the global restoration movement, which was a finalist for the Royal Foundation’s Earthshot Prize. In 2021, the World Economic Forum named him a Young Global Leader for his work on the protection and restoration of biodiversity. Crowther’s post-doctoral research transformed the understanding of the world’s tree cover, and the study also inspired the World Economic Forum to announce its Trillion Trees initiative, which aims to conserve and restore one trillion trees globally within the decade.

“The wealth of learning that can come from our collective awareness that essentially AI is a fancy-sounding way of saying computers can learn from the collective wisdom that exists throughout the Internet. And if we can empower the local stewards of biodiversity, local landowners, farmers, Indigenous populations with all of that wealth of information in a smart way, it can be incredibly empowering to many rural communities. AI might also open up an opportunity for us to rethink what life is about.”

https://crowtherlab.com/about-tom-crowther
https://restor.eco/?lat=26&lng=14.23&zoom=3

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

17 Jan 2024 | How can AI & new technologies help reverse biodiversity loss and restore our ecosystems? - Highlights - THOMAS CROWTHER | 00:13:18

“The wealth of learning that can come from our collective awareness that essentially AI is a fancy-sounding way of saying computers can learn from the collective wisdom that exists throughout the Internet. And if we can empower the local stewards of biodiversity, local landowners, farmers, Indigenous populations with all of that wealth of information in a smart way, it can be incredibly empowering to many rural communities. AI might also open up an opportunity for us to rethink what life is about.”

Although they comprise less than 5% of the world population, Indigenous peoples protect 80% of the Earth’s biodiversity. How can we support farmers, reverse biodiversity loss, and restore our ecosystems?

Thomas Crowther is an ecologist studying the connections between biodiversity and climate change. He is a professor in the Department of Environmental Systems Science at ETH Zurich, chair of the advisory council for the United Nations Decade on Ecosystem Restoration, and founder of Restor, an online platform for the global restoration movement, which was a finalist for the Royal Foundation’s Earthshot Prize. In 2021, the World Economic Forum named him a Young Global Leader for his work on the protection and restoration of biodiversity. Crowther’s post-doctoral research transformed the understanding of the world’s tree cover, and the study also inspired the World Economic Forum to announce its Trillion Trees initiative, which aims to conserve and restore one trillion trees globally within the decade.

https://crowtherlab.com/about-tom-crowther
https://restor.eco/?lat=26&lng=14.23&zoom=3

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

30 Jan 2024 | DR. SASHA LUCCIONI - Founding Member Climate Change AI - Climate Lead & AI Researcher - Hugging Face | 00:31:25

What are the pros and cons of AI’s integration into our institutions, political systems, culture, and society? How can we develop AI systems that are more respectful, ethical, and sustainable?

Dr. Sasha Luccioni is a leading scientist at the nexus of artificial intelligence, ethics, and sustainability, with a Ph.D. in AI and a decade of research and industry expertise. She spearheads research, consults, and utilizes capacity-building to elevate the sustainability of AI systems. As a founding member of Climate Change AI (CCAI) and a board member of Women in Machine Learning (WiML), Sasha is passionate about catalyzing impactful change, organizing events, and serving as a mentor to under-represented minorities within the AI community. She is an AI Researcher & Climate Lead at Hugging Face, an open-source hub for machine learning and natural language processing.

"My TED Talk and work are really about figuring out how, right now, AI is using resources like energy and emitting greenhouse gases and how it's using our data without our consent. I feel that if we develop AI systems that are more respectful, ethical, and sustainable, we can help future generations so that AI will be less of a risk to society. 

And so really, artificial intelligence is not artificial. It's human intelligence that was memorized by the model that was kind of hoovered up, absorbed by these AI models. And now it's getting regurgitated back at us. And we're like, wow, ChatGPT is so smart! But how many thousands of human hours were needed in order to make ChatGPT so smart?

The US Executive Order on AI still does need a lot of operationalization by different parts of the government. Especially, with the EU and their AI Act, we have this signal that's top down, but now people have to figure out how we legislate, enforce, measure, and evaluate? So, there are a lot of problems that haven't been solved because we don't have standards or legal precedent for AI. So I think that we're really in this kind of intermediate phase and scrambling to try to figure out how to put this into action.”

https://www.sashaluccioni.com
https://huggingface.co/
http://www.climatechange.ai
https://wimlworkshop.org

31 Jan 2024 | How can we develop AI systems that are more respectful, ethical, and sustainable? - Highlights - DR. SASHA LUCCIONI | 00:12:25

“My TED Talk and work are really about figuring out how, right now, AI is using resources like energy and emitting greenhouse gases and how it's using our data without our consent. I feel that if we develop AI systems that are more respectful, ethical, and sustainable, we can help future generations so that AI will be less of a risk to society. 

And so really, artificial intelligence is not artificial. It's human intelligence that was memorized by the model that was kind of hoovered up, absorbed by these AI models. And now it's getting regurgitated back at us. And we're like, wow, ChatGPT is so smart! But how many thousands of human hours were needed in order to make ChatGPT so smart?

The US Executive Order on AI still does need a lot of operationalization by different parts of the government. Especially, with the EU and their AI Act, we have this signal that's top down, but now people have to figure out how we legislate, enforce, measure, and evaluate? So, there are a lot of problems that haven't been solved because we don't have standards or legal precedent for AI. So I think that we're really in this kind of intermediate phase and scrambling to try to figure out how to put this into action.”

What are the pros and cons of AI’s integration into our institutions, political systems, culture, and society? How can we develop AI systems that are more respectful, ethical, and sustainable?

Dr. Sasha Luccioni is a leading scientist at the nexus of artificial intelligence, ethics, and sustainability, with a Ph.D. in AI and a decade of research and industry expertise. She spearheads research, consults, and utilizes capacity-building to elevate the sustainability of AI systems. As a founding member of Climate Change AI (CCAI) and a board member of Women in Machine Learning (WiML), Sasha is passionate about catalyzing impactful change, organizing events, and serving as a mentor to under-represented minorities within the AI community. She is an AI Researcher & Climate Lead at Hugging Face, an open-source hub for machine learning and natural language processing.

https://www.sashaluccioni.com
https://huggingface.co/
http://www.climatechange.ai
https://wimlworkshop.org

03 Feb 2024 | LEWIS DARTNELL - Author of Origins: How the Earth Made Us & Being Human: How Our Biology Shaped World History | 00:46:09

How have our psychology and cognitive biases altered the course of human history? What would you do if you had to rebuild our world from scratch?

Lewis Dartnell is an author and researcher who holds the Professorship in Science Communication at the University of Westminster. He researches astrobiology and the search for microbial life on Mars. He also works as a scientific consultant for the media and has appeared in numerous TV documentaries and radio shows. Dr. Dartnell has won several awards for his science writing and outreach work. He has published five books, including The Knowledge: How to Rebuild our World from Scratch; Origins: How the Earth Made Us; and Being Human: How Our Biology Shaped World History.

"AI is neither inherently good nor inherently bad. It promises both enormous potential and capability from helping with medical diagnosis and catching cancer early or removing a lot of tedium and repetitive nature of many jobs. It can make a lot of great contributions. It's how we control that technology by making active decisions that can be the pathway to the future. There's been a lot of doomsday talk about artificial general intelligence and the Terminator-type outcome, and it's certainly not impossible, but I don't personally believe that is a probable outcome from where we are now.

I think one of the things that AI is very good at is churning through and processing vast amounts of data, assuming that you've got your machine learning system set up correctly and trained properly and you're using it in the way that it was intended to be used. Machine learning and AI techniques are incredibly powerful in pulling out the important information in a sea of data, but to convert that information into new understanding, that is the role of humans in that process. And it will remain the role of humans in understanding what is important and how to implement that information once you've fished out this sea of data."

http://www.lewisdartnell.com
http://lewisdartnell.com/en-gb/2013/11/the-knowledge-how-to-rebuild-our-world-from-scratch
www.penguin.co.uk/books/433955/origins-by-lewis-dartnell/9781784705435
www.penguin.co.uk/books/442759/being-human-by-dartnell-lewis/9781847926708

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Photo credit: Shortlist/Paul Stuart

03 Feb 2024 | How has our biology shaped world history? - Highlights - LEWIS DARTNELL | 00:13:13

"AI is neither inherently good nor inherently bad. It promises both enormous potential and capability from helping with medical diagnosis and catching cancer early or removing a lot of tedium and repetitive nature of many jobs. It can make a lot of great contributions. It's how we control that technology by making active decisions that can be the pathway to the future. There's been a lot of doomsday talk about artificial general intelligence and the Terminator-type outcome, and it's certainly not impossible, but I don't personally believe that is a probable outcome from where we are now.

I think one of the things that AI is very good at is churning through and processing vast amounts of data, assuming that you've got your machine learning system set up correctly and trained properly and you're using it in the way that it was intended to be used. Machine learning and AI techniques are incredibly powerful in pulling out the important information in a sea of data, but to convert that information into new understanding, that is the role of humans in that process. And it will remain the role of humans in understanding what is important and how to implement that information once you've fished out this sea of data."

How have our psychology and cognitive biases altered the course of human history? What would you do if you had to rebuild our world from scratch?

Lewis Dartnell is an author and researcher who holds the Professorship in Science Communication at the University of Westminster. He researches astrobiology and the search for microbial life on Mars. He also works as a scientific consultant for the media and has appeared in numerous TV documentaries and radio shows. Dr. Dartnell has won several awards for his science writing and outreach work. He has published five books, including The Knowledge: How to Rebuild our World from Scratch; Origins: How the Earth Made Us; and Being Human: How Our Biology Shaped World History.

http://www.lewisdartnell.com
http://lewisdartnell.com/en-gb/2013/11/the-knowledge-how-to-rebuild-our-world-from-scratch
www.penguin.co.uk/books/433955/origins-by-lewis-dartnell/9781784705435
www.penguin.co.uk/books/442759/being-human-by-dartnell-lewis/9781847926708

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Photo credit: Shortlist/Paul Stuart

05 Feb 2024 | JONATHAN YEO - Celebrated Portrait Artist on the Importance of Connection & Intuitive Intelligence | 00:47:28

How can the arts help cultivate our intuitive intelligence? What does visual art teach us about consciousness and the human condition?   

Jonathan Yeo is one of the world’s leading figurative artists and portrait painters. Having painted celebrated figures such as Sir David Attenborough, peace activist Malala Yousafzai, the Duke of Edinburgh, Nicole Kidman, and Tony Blair, Yeo has made sitting for one of his portraits a near necessity for any 21st-century icon. His work, which has been exhibited in museums and galleries around the world, is the subject of several major mid-career retrospectives in the UK and internationally. Yeo’s course on portrait painting is available now on BBC Maestro.

"I'm optimistic about education. There will likely be more traffic between technology and the arts. The tech world needs more creative-minded people and less literal people who have some understanding of how things work.

With Jony Ive, you've got someone who designed the iPhone and was very interested in photography himself. We were talking about doing a portrait. He mentioned that he'd been fascinated by self-portraiture as a kid, so much so that when he was doing his industrial design degree, he wrote his thesis on artists' self-portraits. Fast forward a few years, and we are all taking photos every day and learning really fast how to compose images and read images and why they've been cropped in a certain way. All these things, which were probably the preserve of artists and art historians in the past, are suddenly things that kids are thinking about because it's the way they communicate with each other. So I think that that shift is interesting.

Painting is a two-dimensional thing. You're basically taking real, three-dimensional things and making them into fake, two-dimensional ones. When you get into the 3D space, some of those distinctions aren't there anymore. I remember when I showed David Hockney the VR project I'd been working on a few years ago, and he put his finger on this quite well. Most art is about perspective. Certainly, for what he is interested in. As soon as you see something in 3D, whether it's a physical sculpture or a virtual object, that's not there anymore because you're in the space with whatever's being shown, so you're in a very different place."

www.jonathanyeo.com
www.bbcmaestro.com

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Images courtesy of Jonathan Yeo

05 Feb 2024 | How Will AI & New Technologies Change the Role of Artists? - Highlights - JONATHAN YEO | 00:12:19

"I'm optimistic about education. There will likely be more traffic between technology and the arts. The tech world needs more creative-minded people and less literal people who have some understanding of how things work.

With Jony Ive, you've got someone who designed the iPhone and was very interested in photography himself. We were talking about doing a portrait. He mentioned that he'd been fascinated by self-portraiture as a kid, so much so that when he was doing his industrial design degree, he wrote his thesis on artists' self-portraits. Fast forward a few years, and we are all taking photos every day and learning really fast how to compose images and read images and why they've been cropped in a certain way. All these things, which were probably the preserve of artists and art historians in the past, are suddenly things that kids are thinking about because it's the way they communicate with each other. So I think that that shift is interesting.

Painting is a two-dimensional thing. You're basically taking real, three-dimensional things and making them into fake, two-dimensional ones. When you get into the 3D space, some of those distinctions aren't there anymore. I remember when I showed David Hockney the VR project I'd been working on a few years ago, and he put his finger on this quite well. Most art is about perspective. Certainly, for what he is interested in. As soon as you see something in 3D, whether it's a physical sculpture or a virtual object, that's not there anymore because you're in the space with whatever's being shown, so you're in a very different place."

Jonathan Yeo is one of the world’s leading figurative artists and portrait painters. Having painted celebrated figures such as Sir David Attenborough, peace activist Malala Yousafzai, the Duke of Edinburgh, Nicole Kidman, and Tony Blair, Yeo has made sitting for one of his portraits a near necessity for any 21st-century icon. His work, which has been exhibited in museums and galleries around the world, is the subject of several major mid-career retrospectives in the UK and internationally. Yeo’s course on portrait painting is available now on BBC Maestro.

www.jonathanyeo.com
www.bbcmaestro.com

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Images courtesy of Jonathan Yeo

07 Feb 2024 | Can AI help us understand animal intelligence? - Highlights - POORVA JOSHIPURA | 00:10:14

“With AI, I do think it is our societal responsibility to be aware of how it can be used to harm animals. For instance, there's the worry that the same types of ways that AI might help, for instance, protect against poachers, that same technology can also be used to find the animals to poach or to damage wildlife, not only protect wildlife. So there's that concern. There's a concern about factory farming and how animals are already wholly disregarded in that process. And AI could even further automate that process to where there's no consideration of the animals at all, even worse than what goes on today. At the same time, AI can be and already is starting to be used for things like doing way better than what we're able to do in terms of determining how well a drug might behave or how a chemical might react by way of looking at all of the data together that exists and drawing a conclusion way better than a human being can do. There are ways that AI is already being used to reduce how animals are used in the laboratory setting. There's a lot of research going on right now about deciphering what animals are saying. The question is, are we going to listen to those animals?”

Poorva Joshipura is PETA U.K. Senior Vice President. She is the Author of Survival at Stake: How Our Treatment of Animals is Key to Human Existence and For a Moment of Taste: How What You Eat Impacts Animals, the Planet and Your Health.

www.harpercollins.com/products/for-a-moment-of-taste-poorva-joshipura?variant=39399505592354

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

15 Feb 2024 | How will AI change the film and television industry? - ALAN POUL - Emmy & Golden Globe-winning Producer/Director | 00:12:55

"There will come a time when AI will have consumed and devoured all the works of all the great filmmakers. And you'll be able to say, I want you to cut this scene as if it was in an Antonioni film. Or I want you to cut this scene as if it was in a Sam Peckinpah film. And it will do the work of the edit. So the finishing touches will probably always be human, but the amount of creative work that's going to be able to be offloaded to AI is something that we don't fully comprehend yet."

Alan Poul is an Emmy, Golden Globe, DGA, and Peabody Award-winning producer and director of film and television. He is Executive Producer and Director on the Max Original drama series Tokyo Vice, written by Tony Award-winning playwright J.T. Rogers and starring Ansel Elgort and Ken Watanabe as an American journalist in Japan and his police detective mentor. Poul is perhaps best known for producing all five seasons of HBO's Six Feet Under, all four of Armistead Maupin's Tales of the City miniseries, My So-Called Life, The Newsroom, Swingtown, and The Eddy, which he developed with director Damien Chazelle. His feature film producing credits include Paul Schrader's Mishima and Light of Day, and Ridley Scott's Black Rain.

https://www.imdb.com/name/nm0693561
https://www.imdb.com/title/tt2887954/

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

20 Feb 2024 | What is the impact of AI on how we live, burn energy, and mitigate climate change? - KATHLEEN ROGERS - President, EarthDay.ORG | 00:44:37

How can we reimagine a world without plastic? How can we push governments and companies to admit what they know about the health impacts of plastics and change public policy?

Kathleen Rogers is the President of EARTHDAY.ORG. Under her leadership, it has grown into a global, year-round policy and activist organization with an international staff. She has been at the vanguard of developing campaigns and programs focused on diversifying the environmental movement, highlighted by Campaign for Communities and Billion Acts of Green. Prior to her work at EARTHDAY.ORG, Kathleen held senior positions with the National Audubon Society, the Environmental Law Institute, and two U.S. Olympic Organizing Committees. She is a graduate of the University of California at Davis School of Law, where she served as editor-in-chief of the law review and clerked in the United States District Court for the District of Columbia. EARTHDAY.ORG’s 2024 theme, Planet vs. Plastics, calls for widespread awareness of the health risks of plastics, a rapid phase-out of all single-use plastics, an urgent push for a strong UN Treaty on Plastic Pollution, and an end to fast fashion, with the aim of building a plastic-free planet for generations to come.

"The world recognizes that plastics have imperiled our future. Many environmentalists, myself included, view plastics as on par with, if not worse than, climate change because we do see a little light at the end of the tunnel on climate change. Babies vs. Plastics is a collection of studies, and we particularly focused on children and babies because their bodies and brains are more impacted than adults by the 30, 000 chemicals that assault us every day.

We have histories littered with dozens of stories and court cases of malfeasance where companies knew for years before we, the public, did about the impacts. Climate change is a perfect example because we know Exxon scientists knew in 1957 that burning fossil fuels was creating climate change and that eventually, the temperature of the planet would heat up, and they hid it from us for 50-plus years. And more and more reports are coming out every day about what companies and some governments know. Tobacco companies knew tobacco caused cancer for decades before our scientists did. And so we have the same problem with plastics.”

Planet vs. Plastics www.earthday.org
Sign The Global Plastic Treaty Petition
https://action.earthday.org/global-plastics-treaty
Toolkits: https://www.earthday.org/our-toolkits
NDC Guide for Climate Education
https://www.earthday.org/wp-content/uploads/2023/11/NDC-GUIDE-Final.pdf

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Photos courtesy of EARTHDAY.ORG

23 Feb 2024 | Tech, Spirituality & Selfhood: TARA ISABELLA BURTON - Author of Here in Avalon, Social Creature, & Self-Made | 00:09:20

"So everyone should probably throw their smartphones in a river, myself included. And I think that it is hard. There's never going to be a version where you get the right answer, and suddenly your life falls into place, and everything's perfect. And that's not what it's supposed to be for anyway. And I think there is a tendency in self-care circles that once we solve our demons and figure out our path in life, we are in touch with the vibes of the universe. Like suddenly, we're going to be wealthy and healthy and happy and have the perfect marriage. And I think the questions of philosophical inquiry are about how to live a good life, but that's not the same thing as assuming, as so much of contemporary wellness culture assumes, that a normatively successful life will come to us by virtue of doing the right things."

Tara Isabella Burton is the author of the novels Social Creature, The World Cannot Give, and Here in Avalon, as well as the nonfiction books Strange Rites: New Religions for a Godless World and Self-Made: Curating Our Image from Da Vinci to the Kardashians. She is currently working on a history of magic and modernity, to be published by Convergent in late 2025. Her fiction and nonfiction have appeared in The New York Times, National Geographic,  Granta, The Washington Post, The Wall Street Journal, and other publications.

www.taraisabellaburton.com
www.simonandschuster.com/books/Here-in-Avalon/Tara-Isabella-Burton/9781982170097?fbclid=IwAR30lnvlXMrDJtCq_568jUM3hvzr6yUz_GUUZSkbR2RarreOF6PMcvhabBg

www.amazon.com/dp/B07W56MQLJ/ref=sr_1_fkmr0_1?keywords=strange+rites+tara+isabella+burton&qid=1565365017&s=gateway&sr=8-1-fkmr0

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

27 Feb 2024 | Comics, Music, Ethics & AI: KYLE HIGGINS, KARINA MANASHIL & KID CUDI on the Making of Moon Man | 01:07:18

What does the future hold for our late-stage capitalist society with mega-corporations owning and controlling everything? How can the world-building skills of the makers of films and comics help us imagine a better future?

Kyle Higgins is an Eisner award-nominated #1 New York Times best-selling comic book author and award-winning filmmaker known for his work on DC Comics’ Batman titles as well as his critically-acclaimed reinventions of Mighty Morphin Power Rangers for Boom! Studios/Hasbro, Ultraman for Marvel Comics, and his creator-owned series Radiant Black, NO/ONE and Deep Cuts for Image Comics. Kyle is the founder and creative director of Black Market Narrative and The Massive-Verse.

Karina Manashil is the President of MAD SOLAR. After graduating from Chapman University with a BFA in Film Production, she began her career in the mailroom at WME, where she became a Talent Agent. In 2020, she partnered with Scott Mescudi and Dennis Cummings to found MAD SOLAR. Its first release was the documentary “A Man Named Scott” (Amazon), and she then went on to executive produce the Ti West trilogy “X,” “Pearl,” and “MaXXXine” (A24). Manashil received an Emmy nomination as an Executive Producer on the Netflix animated event “Entergalactic.” She also produced the Mescudi/Kyle Higgins comic book “Moon Man,” which launched through Image Comics. She is next producing the upcoming Mescudi/Sam Levinson/The Lucas Bros film “HELL NAW” (Sony) and the animated feature “Slime” from auteur animator Jeron Braxton.

“I write science fiction, so it's fascinating from a technological standpoint, but we have dozens and dozens of years of science fiction warning us about technology unchecked. The irony is that now so many of those science fiction stories have probably been used to feed the AI training algorithms that they are now repurposing and ripping off. So it's very ironic in that regard to me. I've heard artists refer to AI as a plagiarism machine, and I do think that's a very apt descriptor. I have a lot of friends who are affected by this. And these tech companies think if we can make it easier and cheaper to capture some aspect of the human spirit and then, by God, isn't that best for shareholders?” -Kyle Higgins

moonmancomics.com
https://imagecomics.com
https://www.imdb.com/name/nm3556462/?ref_=fn_al_nm_1
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Audio of Scott Mescudi courtesy of Mad Solar and Moon Man Comic Book Release and Revenge Of (Eagle Rock, CA, Jan 31, 2024)

27 Feb 2024 | What does the future hold for our late-stage capitalist society with mega-corps controlling everything? - Highlights - KYLE HIGGINS, KARINA MANASHIL & KID CUDI | 00:14:22

“I write science fiction, so it's fascinating from a technological standpoint, but we have dozens and dozens of years of science fiction warning us about technology unchecked. The irony is that now so many of those science fiction stories have probably been used to feed the AI training algorithms that they are now repurposing and ripping off. So it's very ironic in that regard to me. I've heard artists refer to AI as a plagiarism machine, and I do think that's a very apt descriptor. I have a lot of friends who are affected by this. And these tech companies think if we can make it easier and cheaper to capture some aspect of the human spirit and then, by God, isn't that best for shareholders?” -Kyle Higgins

Kyle Higgins is an Eisner award-nominated #1 New York Times best-selling comic book author and award-winning filmmaker known for his work on DC Comics’ Batman titles as well as his critically-acclaimed reinventions of Mighty Morphin Power Rangers for Boom! Studios/Hasbro, Ultraman for Marvel Comics, and his creator-owned series Radiant Black, NO/ONE and Deep Cuts for Image Comics. Kyle is the founder and creative director of Black Market Narrative and The Massive-Verse.

Karina Manashil is the President of MAD SOLAR. After graduating from Chapman University with a BFA in Film Production, she began her career in the mailroom at WME, where she became a Talent Agent. In 2020, she partnered with Scott Mescudi and Dennis Cummings to found MAD SOLAR. Its first release was the documentary “A Man Named Scott” (Amazon), and she then went on to executive produce the Ti West trilogy “X,” “Pearl,” and “MaXXXine” (A24). Manashil received an Emmy nomination as an Executive Producer on the Netflix animated event “Entergalactic.” She also produced the Mescudi/Kyle Higgins comic book “Moon Man,” which launched through Image Comics. She is next producing the upcoming Mescudi/Sam Levinson/The Lucas Bros film “HELL NAW” (Sony) and the animated feature “Slime” from auteur animator Jeron Braxton.

moonmancomics.com
https://imagecomics.com
https://www.imdb.com/name/nm3556462/?ref_=fn_al_nm_1
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Audio of Scott Mescudi courtesy of Mad Solar and Moon Man Comic Book Release and Revenge Of (Eagle Rock, CA, Jan 31, 2024)

12 Mar 2024 | Will human efficiency destroy the planet and us? - DR. LUDOVIC SLIMAK - Author of The Naked Neanderthal | 00:56:51

Who were the Neanderthals? And what can our discoveries about them teach us about intelligence, our extractivist relationship to the planet, and what it means to be human?

Ludovic Slimak is a paleoanthropologist at the University of Toulouse in France and Director of the Grotte Mandrin research project. His work focuses on the last Neanderthal societies, and he is the author of several hundred scientific studies on these populations. His research has been featured in Nature, Science, the New York Times, and other publications. He is the author of The Naked Neanderthal: A New Understanding of the Human Creature.

"AI is a fascinating question. You know children are sponges. They look and say this is something different. So your values are no longer good enough for the future. And this is what we are confronted with with AI. And that's a fantastic tool, but at a certain moment, this technology will evolve and become super efficient and smarter than we are. And at this moment, our children could simply reject everything that makes us human. And our society at this moment, and maybe that of our humanity, could collapse on itself. 

I begin the book with a question of intelligence outside of Earth. That could be AI, that could be extraterrestrials. This is fascinating for us because this is another intelligence. Now, we have created AI, and we are fascinated by what we see because we can discuss with an AI and it's very clear that the AI understands our concepts and responds with our own concepts."

http://ww5.pegasusbooks.com/books/the-naked-neanderthal-9781639366163-hardcover
https://lampea.cnrs.fr/spip.php?article3767
www.odilejacob.fr/catalogue/sciences-humaines/archeologie-paleontologie-prehistoire/dernier-neandertalien_9782415004927.php

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

12 Mar 2024 | Who were the Neanderthals? - Highlights - DR. LUDOVIC SLIMAK | 00:14:07

"AI is a fascinating question. You know children are sponges. They look and say this is something different. So your values are no longer good enough for the future. And this is what we are confronted with with AI. And that's a fantastic tool, but at a certain moment, this technology will evolve and become super efficient and smarter than we are. And at this moment, our children could simply reject everything that makes us human. And our society at this moment, and maybe that of our humanity, could collapse on itself. 

I begin the book with a question of intelligence outside of Earth. That could be AI, that could be extraterrestrials. This is fascinating for us because this is another intelligence. Now, we have created AI, and we are fascinated by what we see because we can discuss with an AI and it's very clear that the AI understands our concepts and responds with our own concepts."

Ludovic Slimak is a paleoanthropologist at the University of Toulouse in France and Director of the Grotte Mandrin research project. His work focuses on the last Neanderthal societies, and he is the author of several hundred scientific studies on these populations. His research has been featured in Nature, Science, the New York Times, and other publications. He is the author of The Naked Neanderthal: A New Understanding of the Human Creature.

http://ww5.pegasusbooks.com/books/the-naked-neanderthal-9781639366163-hardcover
https://lampea.cnrs.fr/spip.php?article3767
www.odilejacob.fr/catalogue/sciences-humaines/archeologie-paleontologie-prehistoire/dernier-neandertalien_9782415004927.php

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

14 Mar 2024 | Beyond the Horizon: Pioneering Green Aviation with BERTRAND PICCARD - Aviator, Explorer, Environmentalist | 00:55:23

What is the future of green aviation? How do we share environmental solutions to unite people and change the climate narrative from sacrifice and fear to enthusiasm and hope?

Bertrand Piccard is a notable Swiss environmentalist, explorer, author, and psychiatrist. His ventures include being the first to travel around the world in a non-stop balloon flight and years later in a solar-powered airplane. He is regarded as a pioneer in clean technology. Piccard is also the founder of the Solar Impulse Foundation, which has identified over 1500 actionable and profitable climate solutions and connects them with investors. As a UN Ambassador for the Environment, his goal is to convince leaders of the viability of a zero-carbon economy, which he will demonstrate via his next emission-free project Climate Impulse, a green hydrogen-powered airplane that can fly nonstop around the earth.

"So it's what you do with the invention that's important. And with AI, it's exactly the same. If you make deep fakes, I think you can just destroy trust and confidence in the world because you will never know what is true and what is false, but if you use AI to balance the grid, to incorporate renewable energies that are intermittent, the storage, the usage by consumers, then you begin to be much more efficient because you use energy at the right moment, in the right way, at the right place, for the right people, you will save a lot of energy. So, in the end, it's always human behavior that decides if an invention is good or not. What I would really like to avoid is AI being used for useless things."

http://www.solarimpulse.com
https://climateimpulse.org/
https://bertrandpiccard.com/

Photos:
COP Summit
Bertrand Piccard with Simon Stiell, Executive Secretary of the UNFCCC
Ville de Demain exhibition, Cité des sciences et de l'industrie, Paris

14 Mar 2024Revolutionizing Sustainability: BERTRAND PICCARD's Path to a Cleaner Planet - Highlights00:11:26

"So it's what you do with the invention that's important. And with AI, it's exactly the same. If you make deep fakes, I think you can just destroy trust and confidence in the world because you will never know what is true and what is false, but if you use AI to balance the grid, to incorporate renewable energies that are intermittent, the storage, the usage by consumers, then you begin to be much more efficient because you use energy at the right moment, in the right way, at the right place, for the right people, you will save a lot of energy. So, in the end, it's always human behavior that decides if an invention is good or not. What I would really like to avoid is AI being used for useless things."

Bertrand Piccard is a notable Swiss environmentalist, explorer, author, and psychiatrist. His ventures include being the first to travel around the world in a non-stop balloon flight and years later in a solar-powered airplane. He is regarded as a pioneer in clean technology. Piccard is also the founder of the Solar Impulse Foundation, which has identified over 1500 actionable and profitable climate solutions and connects them with investors. As a UN Ambassador for the Environment, his goal is to convince leaders of the viability of a zero-carbon economy, which he will demonstrate via his next emission-free project Climate Impulse, a green hydrogen-powered airplane that can fly nonstop around the earth.

http://www.solarimpulse.com
https://climateimpulse.org/
https://bertrandpiccard.com/

Photos:
Bertrand Piccard with Ilham Kadri, CEO Syensqo (main technological partner of Climate Impulse)
Bertrand Piccard @ Solar Impulse, Jean Revillard

21 Mar 2024Can AI help us understand animal language? - Author SY MONTGOMERY & Illustrator MATT PATTERSON00:14:51

"I would love it if AI could decode some animal languages that humans have not been able to do, like the whistles and clicks of whales and dolphins. Our human limit limitations have blinded us to so much of what animals are saying and telling us.

More than anything, though, and I don't know if AI can do this, but we need something to talk our leaders into having some sense about preserving our world. Anything that AI can bring to ameliorate global climate change, to catch the poachers who are killing turtles and other wildlife, and anything AI can teach us about how not to consume the entire world like some horrible fire...let's leave some space for the animals.”

Author Sy Montgomery and illustrator Matt Patterson are naturalists, adventurers, and creative collaborators. Montgomery has published over thirty acclaimed nonfiction books for adults and children and received numerous honors, including lifetime achievement awards from the Humane Society and the New England Booksellers Association.

Patterson’s illustrations have been featured in several books and magazines, such as Yankee Magazine and Fine Art Connoisseur. He is the recipient of the Roger Tory Peterson Wild American Art Award, the National Outdoor Book Award for Nature and the Environment, and other honors. Most recently, Patterson provided illustrations for Freshwater Fish of the Northeast.

Their joint books are Of Time and Turtles: Mending the World, Shell by Shattered Shell and The Book of Turtles. Montgomery’s other books include The Soul of an Octopus, The Hawk’s Way and The Secrets of the Octopus (published in conjunction with a National Geographic TV series).

www.mpattersonart.com
https://symontgomery.com
www.harpercollins.com/products/of-time-and-turtles-sy-montgomery?variant=41003864817698
www.harpercollins.com/products/the-book-of-turtles-sy-montgomery?variant=40695888609314
https://press.uchicago.edu/ucp/books/book/distributed/F/bo215806915.html

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

26 Mar 2024The Pursuit of Happiness - JEFFREY ROSEN - President & CEO of the National Constitution Center00:12:27

"There's no more empowering act for me than simply turning the devices off. The simple rule that I'm not allowed to browse in the morning until I've done my reading has opened up worlds. So much of tech and the net are designed to capture our attentions, to turn us into consumers rather than citizens, to fan our base passions and emotions, and to send us down rabbit holes. That the best thing we can do is to turn it off.

'The pictures in our minds,' I guess that was Walter Lippmann, are confirmed by the enlightenment empiricists like John Locke, who insists that our reality is shaped by our external sensations and what we put into our minds. And then, of course, we are what we think. Life shaped by the mind, as The Dhammapada states. And then, the great injunction that my dad used to quote from Paracelsus, 'As we imagine ourselves to be, so shall we be.' “

Jeffrey Rosen is President and CEO of the National Constitution Center, where he hosts We the People, a weekly podcast of constitutional debate. He is also a professor of law at the George Washington University Law School and a contributing editor at The Atlantic. Rosen is a graduate of Harvard College, Oxford University, and Yale Law School. He is the author of seven previous books, including the New York Times bestseller Conversations with RBG: Justice Ruth Bader Ginsburg on Life, Love, Liberty, and Law. His essays and commentaries have appeared in The New York Times Magazine; on NPR; in The New Republic, where he was the legal affairs editor; and in The New Yorker, where he has been a staff writer. His latest book is The Pursuit of Happiness: How Classical Writers on Virtue Inspired the Lives of the Founders and Defined America.

https://constitutioncenter.org/about/board-of-trustees/jeffrey-rosen
www.simonandschuster.com/books/The-Pursuit-of-Happiness/Jeffrey-Rosen/9781668002476
https://constitutioncenter.org/news-debate/podcasts

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

29 Mar 2024Consciousness, AI & Creativity with DUSTIN O’HALLORAN - Emmy Award-winning Composer00:51:02

What will happen when Artificial General Intelligence arrives? What is the nature of consciousness? How are music and creativity pathways for reconnecting us to our humanity and the natural world?

Dustin O’Halloran is a pianist and composer and member of the band A Winged Victory for the Sullen. Winner of a 2015 Emmy Award for his main title theme to Amazon's comedy drama Transparent, he was also nominated for an Oscar, a Golden Globe, and a BAFTA for his score for Lion, written in collaboration with Volker Bertelmann (aka Hauschka). He has composed for Wayne McGregor (The Royal Ballet, London), Sofia Coppola’s Marie Antoinette, Ammonite starring Kate Winslet, and The Essex Serpent starring Claire Danes. He produced Katy Perry’s “Into Me You See” from her album Witness and appears on Leonard Cohen’s 2019 posthumous album Thanks For The Dance. He has released six solo albums; his latest, 1 0 0 1, which explores ideas of technology, humanity and mind-body dualism, is available on Deutsche Grammophon.

“The album 1 0 0 1 is really like a journey from our connection with nature to where we are now, in this moment where we're playing with technology. We're almost in this hybrid space, not fully understanding where it's going. And it's very deep in our subconscious and probably much greater than we realize. And it sort of ends in this space where the consciousness of what we're creating, it's going to be very separate from us. And I believe that's kind of where it's heading – the idea of losing humanity, losing touch with nature and becoming outside of something that we have created."

https://dustinohalloran.com/
www.deutschegrammophon.com/en/artists/dustin-o-halloran
www.imdb.com/name/nm0641169/bio/?ref_=nm_ov_bio_sm

Music courtesy of Dustin O’Halloran and Deutsche Grammophon

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

29 Mar 2024How can music help us expand our understanding of consciousness & AI? - Highlights - DUSTIN O’HALLORAN00:10:16

“The album 1 0 0 1 is really like a journey from our connection with nature to where we are now, in this moment where we're playing with technology. We're almost in this hybrid space, not fully understanding where it's going. And it's very deep in our subconscious and probably much greater than we realize. And it sort of ends in this space where the consciousness of what we're creating, it's going to be very separate from us. And I believe that's kind of where it's heading – the idea of losing humanity, losing touch with nature and becoming outside of something that we have created."

Dustin O’Halloran is a pianist and composer and member of the band A Winged Victory for the Sullen. Winner of a 2015 Emmy Award for his main title theme to Amazon's comedy drama Transparent, he was also nominated for an Oscar, a Golden Globe, and a BAFTA for his score for Lion, written in collaboration with Volker Bertelmann (aka Hauschka). He has composed for Wayne McGregor (The Royal Ballet, London), Sofia Coppola’s Marie Antoinette, Ammonite starring Kate Winslet, and The Essex Serpent starring Claire Danes. He produced Katy Perry’s “Into Me You See” from her album Witness and appears on Leonard Cohen’s 2019 posthumous album Thanks For The Dance. He has released six solo albums; his latest, 1 0 0 1, which explores ideas of technology, humanity and mind-body dualism, is available on Deutsche Grammophon.

https://dustinohalloran.com/
www.deutschegrammophon.com/en/artists/dustin-o-halloran
www.imdb.com/name/nm0641169/bio/?ref_=nm_ov_bio_sm

Music courtesy of Dustin O’Halloran and Deutsche Grammophon

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

16 Apr 2024How climate change is making us sick, angry & anxious - CLAYTON ALDERN - Neuroscientist turned Eco-Journalist00:53:01

How does a changing climate affect our minds, brains and bodies?

Clayton Page Aldern is an award-winning neuroscientist turned environmental journalist whose work has appeared in The Atlantic, The Guardian, The Economist, and Grist, where he is a senior data reporter. A Rhodes Scholar, he holds a Master's in Neuroscience and a Master's in Public Policy from the University of Oxford. He is also a research affiliate at the Center for Studies in Demography and Ecology at the University of Washington. He is the author of The Weight of Nature: How a Changing Climate Changes Our Minds, Brains, and Bodies, which explores the neurobiological impacts of rapid environmental change.

"So, I am a data reporter at Grist. And what does that mean? I'm building statistical models of phenomena. I'm writing web scrapers and building data visualizations, right? I have quite a technical job in terms of my relationship with the field of journalism. I just don't think that those tools ought to be put on some kind of pedestal and framed as the be-all end all of the possibility of the field, right? I think that data science, artificial intelligence, and the advent of these new LLMs they're useful tools to add to the journalistic toolkit. We don't know what the ultimate effect of AI is going to be on journalism, but I think journalism is maybe going to look a little bit different in 20 years."

https://claytonaldern.com
www.penguinrandomhouse.com/books/717097/the-weight-of-nature-by-clayton-page-aldern
https://csde.washington.edu

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

16 Apr 2024How does a changing climate affect our minds, brains & bodies? - Highlights - CLAYTON ALDERN00:13:28

"So, I am a data reporter at Grist. And what does that mean? I'm building statistical models of phenomena. I'm writing web scrapers and building data visualizations, right? I have quite a technical job in terms of my relationship with the field of journalism. I just don't think that those tools ought to be put on some kind of pedestal and framed as the be-all end all of the possibility of the field, right? I think that data science, artificial intelligence, and the advent of these new LLMs they're useful tools to add to the journalistic toolkit. We don't know what the ultimate effect of AI is going to be on journalism, but I think journalism is maybe going to look a little bit different in 20 years."

Clayton Page Aldern is an award-winning neuroscientist turned environmental journalist whose work has appeared in The Atlantic, The Guardian, The Economist, and Grist, where he is a senior data reporter. A Rhodes Scholar, he holds a Master's in Neuroscience and a Master's in Public Policy from the University of Oxford. He is also a research affiliate at the Center for Studies in Demography and Ecology at the University of Washington. He is the author of The Weight of Nature: How a Changing Climate Changes Our Minds, Brains, and Bodies, which explores the neurobiological impacts of rapid environmental change.

https://claytonaldern.com
www.penguinrandomhouse.com/books/717097/the-weight-of-nature-by-clayton-page-aldern
https://csde.washington.edu

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

19 Apr 2024Exploring Science, Music, AI & Consciousness with MAX COOPER - Highlights00:13:09

“As technology becomes more dominant, the arts become ever more important for us to stay in touch with the things that the sciences can't tackle. What is it actually like to be a person? What's actually important? We can have this endless progress inside this capitalist machine for greater wealth and longer life and more happiness, according to some metric. Or we can try and quantify society and push it forward. Ultimately, we all have to decide what's important to us as humans, and we need the arts to help with that. So, I think what's important really is just exposing ourselves to as many different ideas as we can, being open-minded, and trying to learn about all facets of life so that we can understand each other as well. And the arts is an essential part of that.”

How is being an artist different from a machine that is programmed to perform a set of actions? How can we stop thinking about artworks as objects, and start thinking about them as triggers for experiences? In this conversation with Max Cooper, we discuss the beauty and chaos of nature and the exploration of technology, music, and consciousness.

Max Cooper is a musician with a PhD in computational biology. He integrates electronic music with immersive video projections inspired by scientific exploration. His latest project, Seme, commissioned by the Salzburg Easter Festival, merges Italian musical heritage with contemporary techniques and was also performed at the Barbican in London.

He supplied music for a video narrated by Greta Thunberg and Pope Francis for COP26.

In 2016, Cooper founded Mesh, a platform to explore the intersection of music, science and art. His Observatory art-house installation is on display at Kings Cross until May 1st.

https://maxcooper.net
https://osterfestspiele.at/en/programme/2024/electro-2024
https://meshmeshmesh.net
www.kingscross.co.uk/event/the-observatory

The music featured in this episode was Palestrina Sicut, Cardano Circles, Fibonacci Sequence, and Scarlatti K141. The music is from Seme and is courtesy of Max Cooper.

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

18 Apr 2024What can music teach us that science can’t? - MAX COOPER - Musician, Fmr. Computational Biologist00:50:10

How is being an artist different from a machine that is programmed to perform a set of actions? How can we stop thinking about artworks as objects, and start thinking about them as triggers for experiences? In this conversation with Max Cooper, we discuss the beauty and chaos of nature and the exploration of technology, music, and consciousness.

Max Cooper is a musician with a PhD in computational biology. He integrates electronic music with immersive video projections inspired by scientific exploration. His latest project, Seme, commissioned by the Salzburg Easter Festival, merges Italian musical heritage with contemporary techniques and was also performed at the Barbican in London.

He supplied music for a video narrated by Greta Thunberg and Pope Francis for COP26.

In 2016, Cooper founded Mesh, a platform to explore the intersection of music, science and art. His Observatory art-house installation is on display at Kings Cross until May 1st.

“As technology becomes more dominant, the arts become ever more important for us to stay in touch with the things that the sciences can't tackle. What is it actually like to be a person? What's actually important? We can have this endless progress inside this capitalist machine for greater wealth and longer life and more happiness, according to some metric. Or we can try and quantify society and push it forward. Ultimately, we all have to decide what's important to us as humans, and we need the arts to help with that. So, I think what's important really is just exposing ourselves to as many different ideas as we can, being open-minded, and trying to learn about all facets of life so that we can understand each other as well. And the arts is an essential part of that.”

https://maxcooper.net
https://osterfestspiele.at/en/programme/2024/electro-2024
https://meshmeshmesh.net
www.kingscross.co.uk/event/the-observatory

The music featured in this episode was Palestrina Sicut, Cardano Circles, Fibonacci Sequence, and Scarlatti K141. The music is from Seme and is courtesy of Max Cooper.

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

25 Apr 2024Feminism, Resistance & AI in the Global South w/ INTAN PARAMADITHA - Author of The Wandering00:11:50

“I've been playing with AI just to see what it can do. People who are not privileged with the skills of conceptualizing, the skills of abstract thinking, they will be replaced. And I'm just thinking about people from the Global South at this moment. People from the Global South  have been working as supporters. They do a lot of support for creative work of entrepreneurs in the Global North. They do social media. They create content and things like that. The people who would provide the support live in, let's say, the Philippines. So, what I'm worried about is how AI technology could take the jobs of people who are not really trained to sort of do conceptual thinking.”

Intan Paramaditha is a writer and an academic. Her novel The Wandering (Harvill Secker/ Penguin Random House UK), translated from the Indonesian language by Stephen J. Epstein, was nominated for the Stella Prize in Australia and awarded the Tempo Best Literary Fiction in Indonesia, English PEN Translates Award, and PEN/ Heim Translation Fund Grant from PEN America. She is the author of the short story collection Apple and Knife, the editor of Deviant Disciples: Indonesian Women Poets, part of the Translating Feminisms series of Tilted Axis Press and the co-editor of The Routledge Companion to Asian Cinemas (forthcoming 2024). Her essay, “On the Complicated Questions Around Writing About Travel,” was selected for The Best American Travel Writing 2021. She holds a Ph.D. from New York University and teaches media and film studies at Macquarie University, Sydney.

https://intanparamaditha.com
www.penguinrandomhouse.ca/books/626055/the-wandering-by-intan-paramaditha/9781787301184

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

26 Apr 2024Author of Emotional Intelligence DANIEL GOLEMAN on Focus, Balance & Optimal Living00:53:01

How can we enhance our emotional intelligence and avoid burnout in a changing world? How can we regain focus and perform in an optimal state? What do we mean by ecological intelligence?

Daniel Goleman is an American psychologist, author, and science journalist. Before becoming an author, Goleman was a science reporter for the New York Times for 12 years, covering psychology and the human brain. In 1995, Goleman published Emotional Intelligence, a New York Times bestseller. In his newly published book Optimal, Daniel Goleman discusses how people can enter an optimal state of high performance without facing symptoms of burnout in the workplace.

“AI is brilliant at cognitive empathy. However, the next kind is emotional empathy. Emotional empathy means: I know what you feel because I'm feeling it too. And this has to do with circuitry in the fore part of the brain, which creates a brain-to-brain circuit that's automatic, unconscious, and instantaneous. And emotions pass very  well across that. I think AI might flunk here because it has no emotion. It can mimic empathy, but it doesn't really feel empathy. The third kind is empathic concern. Technically, it means caring. It's the basis of love. It's the same circuitry as a parent's love for a child, actually. But I think that leaders need this very much.
AI has no emotion, so it doesn't have emotional self-awareness. It can't tune in. I don't think it can be empathic because AI is a set of codes, basically. It doesn't have the ability to manage emotion because it doesn't have emotion. It's interesting. I was just talking to a group at Microsoft, which is one of the leading developers of AI, and one of the people there was talking about inculcating love into AI or caring into AI as maybe an antidote to the negative potential of AI for humanity. But I think there will always be room for the human, for a leader. I don't think that people will find that they can trust AI the same way they can trust a leader who cares.”

www.danielgoleman.info
www.harpercollins.com/products/optimal-daniel-golemancary-cherniss?variant=41046795288610

www.penguinrandomhouse.com/books/69105/emotional-intelligence-by-daniel-goleman/

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

26 Apr 2024Emotional Intelligence in the Age of AI - Highlights - DANIEL GOLEMAN00:11:57

“AI is brilliant at cognitive empathy. However, the next kind is emotional empathy. Emotional empathy means: I know what you feel because I'm feeling it too. And this has to do with circuitry in the fore part of the brain, which creates a brain-to-brain circuit that's automatic, unconscious, and instantaneous. And emotions pass very  well across that. I think AI might flunk here because it has no emotion. It can mimic empathy, but it doesn't really feel empathy. The third kind is empathic concern. Technically, it means caring. It's the basis of love. It's the same circuitry as a parent's love for a child, actually. But I think that leaders need this very much.
AI has no emotion, so it doesn't have emotional self-awareness. It can't tune in. I don't think it can be empathic because AI is a set of codes, basically. It doesn't have the ability to manage emotion because it doesn't have emotion. It's interesting. I was just talking to a group at Microsoft, which is one of the leading developers of AI, and one of the people there was talking about inculcating love into AI or caring into AI as maybe an antidote to the negative potential of AI for humanity. But I think there will always be room for the human, for a leader. I don't think that people will find that they can trust AI the same way they can trust a leader who cares.”

Daniel Goleman is an American psychologist, author, and science journalist. Before becoming an author, Goleman was a science reporter for the New York Times for 12 years, covering psychology and the human brain. In 1995, Goleman published Emotional Intelligence, a New York Times bestseller. In his newly published book Optimal, Daniel Goleman discusses how people can enter an optimal state of high performance without facing symptoms of burnout in the workplace.

www.danielgoleman.info
www.harpercollins.com/products/optimal-daniel-golemancary-cherniss?variant=41046795288610

www.penguinrandomhouse.com/books/69105/emotional-intelligence-by-daniel-goleman/

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

03 May 2024How does the brain process emotions and music? JOSEPH LEDOUX - Neuroscientist, Author, Musician01:00:41

How does the brain process emotions? How are emotional memories formed and stored in the brain, and how do they influence behavior, perception, and decision-making? How does music help us understand our emotions, memories, and the nature of consciousness?

Joseph LeDoux is a Professor of Neural Science at New York University (NYU) and was Director of the Emotional Brain Institute. His research primarily focuses on survival circuits, including their impacts on emotions, such as fear and anxiety. He has written a number of books in this field, including The Four Realms of Existence: A New Theory of Being Human, The Emotional Brain, Synaptic Self, Anxious, and The Deep History of Ourselves. LeDoux is also the lead singer and songwriter of the band The Amygdaloids.

“We've got four billion years of biological accidents that created all of the intricate aspects of everything about life, including consciousness. And it's about what's going on in each of those cells at the time that allows it to be connected to everything else and for the information to be understood as it's being exchanged between those things with their multifaceted, deep, complex processing.”

www.joseph-ledoux.com
www.cns.nyu.edu/ebi
https://amygdaloids.net
www.hup.harvard.edu/books/9780674261259

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Music courtesy of Joseph LeDoux

03 May 2024Exploring Consciousness, AI & Creativity with JOSEPH LEDOUX - Highlights00:14:25

“We've got four billion years of biological accidents that created all of the intricate aspects of everything about life, including consciousness. And it's about what's going on in each of those cells at the time that allows it to be connected to everything else and for the information to be understood as it's being exchanged between those things with their multifaceted, deep, complex processing.”

Joseph LeDoux is a Professor of Neural Science at New York University (NYU) and was Director of the Emotional Brain Institute. His research primarily focuses on survival circuits, including their impacts on emotions, such as fear and anxiety. He has written a number of books in this field, including The Four Realms of Existence: A New Theory of Being Human, The Emotional Brain, Synaptic Self, Anxious, and The Deep History of Ourselves. LeDoux is also the lead singer and songwriter of the band The Amygdaloids.

www.joseph-ledoux.com
www.cns.nyu.edu/ebi
https://amygdaloids.net
www.hup.harvard.edu/books/9780674261259

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

14 May 2024How can physics help solve real world problems? - NEIL JOHNSON, Head of Dynamic Online Networks Lab00:50:41

How can physics help solve messy, real world problems? How can we embrace the possibilities of AI while limiting existential risk and abuse by bad actors?

Neil Johnson is a physics professor at George Washington University. His new initiative in Complexity and Data Science at the Dynamic Online Networks Lab combines cross-disciplinary fundamental research with data science to attack complex real-world problems. His research interests lie in the broad area of Complex Systems and ‘many-body’ out-of-equilibrium systems of collections of objects, ranging from crowds of particles to crowds of people and from environments as distinct as quantum information processing in nanostructures to the online world of collective behavior on social media.

“It gets back to this core question. I just wish I was a young scientist going into this because that's the question to answer: Why AI comes out with what it does. That's the burning question. It's like it's bigger than the origin of the universe to me as a scientist, and here's the reason why. The origin of the universe, it happened. That's why we're here. It's almost like a historical question asking why it happened. The AI future is not a historical question. It's a now and future question.

I'm a huge optimist for AI, actually. I see it as part of that process of climbing its own mountain. It could do wonders for so many areas of science, medicine. When the car came out, the car initially is a disaster. But you fast forward, and it was the key to so many advances in society. I think it's exactly the same as AI. The big challenge is to understand why it works. AI existed for years, but it was useless. Nothing useful, nothing useful, nothing useful. And then maybe last year or something, now it's really useful. There seemed to be some kind of jump in its ability, almost like a shock wave. We're trying to develop an understanding of how AI operates in terms of these shockwave jumps. Revealing how AI works will help society understand what it can and can't do and therefore remove some of this dark fear of being taken over. If you don't understand how AI works, how can you govern it? To get effective governance, you need to understand how AI works because otherwise you don't know what you're going to regulate.”

https://physics.columbian.gwu.edu/neil-johnson
https://donlab.columbian.gwu.edu

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

14 May 2024Is understanding AI a bigger question than understanding the origin of the universe? - Highlights, NEIL JOHNSON00:15:38

“It gets back to this core question. I just wish I was a young scientist going into this because that's the question to answer: Why AI comes out with what it does. That's the burning question. It's like it's bigger than the origin of the universe to me as a scientist, and here's the reason why. The origin of the universe, it happened. That's why we're here. It's almost like a historical question asking why it happened. The AI future is not a historical question. It's a now and future question.

I'm a huge optimist for AI, actually. I see it as part of that process of climbing its own mountain. It could do wonders for so many areas of science, medicine. When the car came out, the car initially is a disaster. But you fast forward, and it was the key to so many advances in society. I think it's exactly the same as AI. The big challenge is to understand why it works. AI existed for years, but it was useless. Nothing useful, nothing useful, nothing useful. And then maybe last year or something, now it's really useful. There seemed to be some kind of jump in its ability, almost like a shock wave. We're trying to develop an understanding of how AI operates in terms of these shockwave jumps. Revealing how AI works will help society understand what it can and can't do and therefore remove some of this dark fear of being taken over. If you don't understand how AI works, how can you govern it? To get effective governance, you need to understand how AI works because otherwise you don't know what you're going to regulate.”

How can physics help solve messy, real world problems? How can we embrace the possibilities of AI while limiting existential risk and abuse by bad actors?

Neil Johnson is a physics professor at George Washington University. His new initiative in Complexity and Data Science at the Dynamic Online Networks Lab combines cross-disciplinary fundamental research with data science to attack complex real-world problems. His research interests lie in the broad area of Complex Systems and ‘many-body’ out-of-equilibrium systems of collections of objects, ranging from crowds of particles to crowds of people and from environments as distinct as quantum information processing in nanostructures to the online world of collective behavior on social media.

https://physics.columbian.gwu.edu/neil-johnson
https://donlab.columbian.gwu.edu

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

17 May 2024How can we ensure that AI is aligned with human values? - RAPHAËL MILLIÈRE01:01:14

How can we ensure that AI is aligned with human values? What can AI teach us about human cognition and creativity?

Dr. Raphaël Millière is Assistant Professor in Philosophy of AI at Macquarie University in Sydney, Australia. His research primarily explores the theoretical foundations and inner workings of AI systems based on deep learning, such as large language models. He investigates whether these systems can exhibit human-like cognitive capacities, drawing on theories and methods from cognitive science. He is also interested in how insights from studying AI might shed new light on human cognition. Ultimately, his work aims to advance our understanding of both artificial and natural intelligence.

“I'd like to focus more on the immediate harms that the kinds of AI technologies we have today might pose. With language models, the kind of technology that powers ChatGPT and other chatbots, there are harms that might result from regular use of these systems, and then there are harms that might result from malicious use. Regular use would be how you and I might use ChatGPT and other chatbots to do ordinary things. There is a concern that these systems might reproduce and amplify, for example, racist or sexist biases, or spread misinformation. These systems are known to, as researchers put it, “hallucinate” in some cases, making up facts or false citations. And then there are the harms from malicious use, which might result from some bad actors using the systems for nefarious purposes. That would include disinformation on a mass scale. You could imagine a bad actor using language models to automate the creation of fake news and propaganda to try to manipulate voters, for example. And this takes us into the medium term future, because we're not quite there, but another concern would be language models providing dangerous, potentially illegal information that is not readily available on the internet for anyone to access. As they get better over time, there is a concern that in the wrong hands, these systems might become quite powerful weapons, at least indirectly, and so people have been trying to mitigate these potential harms.”

https://raphaelmilliere.com
https://researchers.mq.edu.au/en/persons/raphael-milliere

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

17 May 2024What can AI teach us about human cognition & creativity? - Highlights - RAPHAËL MILLIÈRE00:10:25

“I'd like to focus more on the immediate harms that the kinds of AI technologies we have today might pose. With language models, the kind of technology that powers ChatGPT and other chatbots, there are harms that might result from regular use of these systems, and then there are harms that might result from malicious use. Regular use would be how you and I might use ChatGPT and other chatbots to do ordinary things. There is a concern that these systems might reproduce and amplify, for example, racist or sexist biases, or spread misinformation. These systems are known to, as researchers put it, “hallucinate” in some cases, making up facts or false citations. And then there are the harms from malicious use, which might result from some bad actors using the systems for nefarious purposes. That would include disinformation on a mass scale. You could imagine a bad actor using language models to automate the creation of fake news and propaganda to try to manipulate voters, for example. And this takes us into the medium term future, because we're not quite there, but another concern would be language models providing dangerous, potentially illegal information that is not readily available on the internet for anyone to access. As they get better over time, there is a concern that in the wrong hands, these systems might become quite powerful weapons, at least indirectly, and so people have been trying to mitigate these potential harms.”

Dr. Raphaël Millière is Assistant Professor in Philosophy of AI at Macquarie University in Sydney, Australia. His research primarily explores the theoretical foundations and inner workings of AI systems based on deep learning, such as large language models. He investigates whether these systems can exhibit human-like cognitive capacities, drawing on theories and methods from cognitive science. He is also interested in how insights from studying AI might shed new light on human cognition. Ultimately, his work aims to advance our understanding of both artificial and natural intelligence.

https://raphaelmilliere.com
https://researchers.mq.edu.au/en/persons/raphael-milliere

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

10 Jun 2024Can we have real conversations with AI? How do illusions help us make sense of the world? - Highlights - KEITH FRANKISH00:11:11

“Generative AI, particularly Large Language Models, they seem to be engaging in conversation with us. We ask questions, and they reply. It seems like they're talking to us. I don't think they are. I think they're playing a game very much like a game of chess. You make a move and your chess computer makes an appropriate response to that move. It doesn't have any other interest in the game whatsoever. That's what I think Large Language Models are doing. They're just making communicative moves in this game of language that they've learned through training on vast quantities of human-produced text.”

Keith Frankish is an Honorary Professor of Philosophy at the University of Sheffield, a Visiting Research Fellow with The Open University, and an Adjunct Professor with the Brain and Mind Programme in Neurosciences at the University of Crete. Frankish mainly works in the philosophy of mind and has published widely about topics such as human consciousness and cognition. Profoundly inspired by Daniel Dennett, Frankish is best known for defending an “illusionist” view of consciousness. He is also the editor of Illusionism as a Theory of Consciousness and a co-editor of, among other volumes, The Cambridge Handbook of Cognitive Science.

www.keithfrankish.com
www.cambridge.org/core/books/cambridge-handbook-of-cognitive-science/F9996E61AF5E8C0B096EBFED57596B42
www.imprint.co.uk/product/illusionism

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

10 Jun 2024Is Consciousness an Illusion? with Philosopher KEITH FRANKISH00:57:24

Is consciousness an illusion? Is it just a complex set of cognitive processes without a central, subjective experience? How can we better integrate philosophy with everyday life and the arts?

Keith Frankish is an Honorary Professor of Philosophy at the University of Sheffield, a Visiting Research Fellow with The Open University, and an Adjunct Professor with the Brain and Mind Programme in Neurosciences at the University of Crete. Frankish mainly works in the philosophy of mind and has published widely about topics such as human consciousness and cognition. Profoundly inspired by Daniel Dennett, Frankish is best known for defending an “illusionist” view of consciousness. He is also the editor of Illusionism as a Theory of Consciousness and a co-editor of, among other volumes, The Cambridge Handbook of Cognitive Science.

“Generative AI, particularly Large Language Models, they seem to be engaging in conversation with us. We ask questions, and they reply. It seems like they're talking to us. I don't think they are. I think they're playing a game very much like a game of chess. You make a move and your chess computer makes an appropriate response to that move. It doesn't have any other interest in the game whatsoever. That's what I think Large Language Models are doing. They're just making communicative moves in this game of language that they've learned through training on vast quantities of human-produced text.”

www.keithfrankish.com
www.cambridge.org/core/books/cambridge-handbook-of-cognitive-science/F9996E61AF5E8C0B096EBFED57596B42
www.imprint.co.uk/product/illusionism

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

14 Jun 2024What Role Do AI & Computational Language Play in Solving Real-World Problems?00:57:15

How can computational language help decode the mysteries of nature and the universe? What is ChatGPT doing and why does it work? How will AI affect education, the arts and society?

Stephen Wolfram is a computer scientist, mathematician, and theoretical physicist. He is the founder and CEO of Wolfram Research, the creator of Mathematica, Wolfram|Alpha, and the Wolfram Language. He received his PhD in theoretical physics at Caltech by the age of 20 and in 1981, became the youngest recipient of a MacArthur Fellowship. Wolfram authored A New Kind of Science and launched the Wolfram Physics Project. He has pioneered computational thinking and has been responsible for many discoveries, inventions and innovations in science, technology and business.

"Nobody, including people who worked on ChatGPT, really sort of expected this to work. It's something that we just didn't know scientifically what it would take to make something that was a fluent producer of human language. I think the big discovery is that this thing that has been sort of a proud achievement of our species, human language, is perhaps not as complicated as we thought it was. It's something that is more accessible to sort of simpler automation than we expected. And so, people have been asking me, when ChatGPT had come out, we were doing a bunch of things technologically around ChatGPT because kind of what, when ChatGPT is kind of stringing words together to make sentences, what does it do when it has to actually solve a computational problem? That's not what it does itself. It's a thing for stringing words together to make text. And so, how does it solve a computational problem? Well, like humans, the best way for it to do it is to use tools, and the best tool for many kinds of computational problems is tools that we've built. And so very early in kind of the story of ChatGPT and so on, we were figuring out how to have it be able to use the tools that we built, just like humans can use the tools that we built, to solve computational problems, to actually get sort of accurate knowledge about the world and so on. There's all these different possibilities out there. But our kind of challenge is to decide in which direction we want to go and then to let our automated systems pursue those particular directions.”

www.stephenwolfram.com
www.wolfram.com
www.wolframalpha.com
www.wolframscience.com/nks/
www.amazon.com/dp/1579550088/ref=nosim?tag=turingmachi08-20
www.wolframphysics.org
www.wolfram-media.com/products/what-is-chatgpt-doing-and-why-does-it-work/

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

14 Jun 2024How will AI Affect Education, the Arts & Society? - Highlights - STEPHEN WOLFRAM00:12:31

"Nobody, including people who worked on ChatGPT, really sort of expected this to work. It's something that we just didn't know scientifically what it would take to make something that was a fluent producer of human language. I think the big discovery is that this thing that has been sort of a proud achievement of our species, human language, is perhaps not as complicated as we thought it was. It's something that is more accessible to sort of simpler automation than we expected. And so, people have been asking me, when ChatGPT had come out, we were doing a bunch of things technologically around ChatGPT because kind of what, when ChatGPT is kind of stringing words together to make sentences, what does it do when it has to actually solve a computational problem? That's not what it does itself. It's a thing for stringing words together to make text. And so, how does it solve a computational problem? Well, like humans, the best way for it to do it is to use tools, and the best tool for many kinds of computational problems is tools that we've built. And so very early in kind of the story of ChatGPT and so on, we were figuring out how to have it be able to use the tools that we built, just like humans can use the tools that we built, to solve computational problems, to actually get sort of accurate knowledge about the world and so on. There's all these different possibilities out there. But our kind of challenge is to decide in which direction we want to go and then to let our automated systems pursue those particular directions.”

Stephen Wolfram is a computer scientist, mathematician, and theoretical physicist. He is the founder and CEO of Wolfram Research, the creator of Mathematica, Wolfram|Alpha, and the Wolfram Language. He received his PhD in theoretical physics at Caltech by the age of 20 and in 1981, became the youngest recipient of a MacArthur Fellowship. Wolfram authored A New Kind of Science and launched the Wolfram Physics Project. He has pioneered computational thinking and has been responsible for many discoveries, inventions and innovations in science, technology and business.

www.stephenwolfram.com
www.wolfram.com
www.wolframalpha.com
www.wolframscience.com/nks/
www.amazon.com/dp/1579550088/ref=nosim?tag=turingmachi08-20
www.wolframphysics.org
www.wolfram-media.com/products/what-is-chatgpt-doing-and-why-does-it-work/

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

18 Jun 2024On Disinformation: How to Fight for Truth & Protect Democracy in the Age of AI - LEE McINTYRE00:54:54

How do we fight for truth and protect democracy in a post-truth world? How does bias affect our understanding of facts?

Lee McIntyre is a Research Fellow at the Center for Philosophy and History of Science at Boston University and a Senior Advisor for Public Trust in Science at the Aspen Institute. He holds a B.A. from Wesleyan University and a Ph.D. in Philosophy from the University of Michigan. He has taught philosophy at Colgate University, Boston University, Tufts Experimental College, Simmons College, and Harvard Extension School (where he received the Dean’s Letter of Commendation for Distinguished Teaching). Formerly Executive Director of the Institute for Quantitative Social Science at Harvard University, he has also served as a policy advisor to the Executive Dean of the Faculty of Arts and Sciences at Harvard and as Associate Editor in the Research Department of the Federal Reserve Bank of Boston. His books include On Disinformation and How to Talk to a Science Denier and the novels The Art of Good and Evil and The Sin Eater.

“When AI takes over with our information sources and pollutes it to a certain point, we'll stop believing that there is any such thing as truth anymore. ‘We now live in an era in which the truth is behind a paywall  and the lies are free.’ One thing people don't realize is that the goal of disinformation is not simply to get you to believe a falsehood. It's to demoralize you into giving up on the idea of truth, to polarize us around factual issues, to get us to distrust people who don't believe the same lie. And even if somebody doesn't believe the lie, it can still make them cynical. I mean, we've all had friends who don't even watch the news anymore. There's a chilling quotation from Holocaust historian Hannah Arendt about how when you always lie to someone, the consequence is not necessarily that they believe the lie, but that they begin to lose their critical faculties, that they begin to give up on the idea of truth, and so they can't judge for themselves what's true and what's false anymore. That's the scary part, the nexus between post-truth and autocracy. That's what the authoritarian wants. Not necessarily to get you to believe the lie. But to give up on truth, because when you give up on truth, then there's no blame, no accountability, and they can just assert their power. There's a connection between disinformation and denial.”

https://leemcintyrebooks.com
www.penguinrandomhouse.com/books/730833/on-disinformation-by-lee-mcintyre
https://mitpress.mit.edu/9780262545051/
https://leemcintyrebooks.com/books/the-art-of-good-and-evil/
https://leemcintyrebooks.com/books/the-sin-eater/

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

18 Jun 2024How to Fight for Truth & Protect Democracy in A Post-Truth World? - Highlights - LEE McINTYRE00:12:11

“When AI takes over with our information sources and pollutes it to a certain point, we'll stop believing that there is any such thing as truth anymore. ‘We now live in an era in which the truth is behind a paywall  and the lies are free.’ One thing people don't realize is that the goal of disinformation is not simply to get you to believe a falsehood. It's to demoralize you into giving up on the idea of truth, to polarize us around factual issues, to get us to distrust people who don't believe the same lie. And even if somebody doesn't believe the lie, it can still make them cynical. I mean, we've all had friends who don't even watch the news anymore. There's a chilling quotation from Holocaust historian Hannah Arendt about how when you always lie to someone, the consequence is not necessarily that they believe the lie, but that they begin to lose their critical faculties, that they begin to give up on the idea of truth, and so they can't judge for themselves what's true and what's false anymore. That's the scary part, the nexus between post-truth and autocracy. That's what the authoritarian wants. Not necessarily to get you to believe the lie. But to give up on truth, because when you give up on truth, then there's no blame, no accountability, and they can just assert their power. There's a connection between disinformation and denial.”

Lee McIntyre is a Research Fellow at the Center for Philosophy and History of Science at Boston University and a Senior Advisor for Public Trust in Science at the Aspen Institute. He holds a B.A. from Wesleyan University and a Ph.D. in Philosophy from the University of Michigan. He has taught philosophy at Colgate University, Boston University, Tufts Experimental College, Simmons College, and Harvard Extension School (where he received the Dean’s Letter of Commendation for Distinguished Teaching). Formerly Executive Director of the Institute for Quantitative Social Science at Harvard University, he has also served as a policy advisor to the Executive Dean of the Faculty of Arts and Sciences at Harvard and as Associate Editor in the Research Department of the Federal Reserve Bank of Boston. His books include On Disinformation and How to Talk to a Science Denier and the novels The Art of Good and Evil and The Sin Eater.

https://leemcintyrebooks.com
www.penguinrandomhouse.com/books/730833/on-disinformation-by-lee-mcintyre
https://mitpress.mit.edu/9780262545051/
https://leemcintyrebooks.com/books/the-art-of-good-and-evil/
https://leemcintyrebooks.com/books/the-sin-eater/

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

29 Jun 2024How is AI Changing Our Perception of Reality, Creativity & Human Connection? w/ HENRY AJDER - AI Advisor00:53:09

How is artificial intelligence redefining our perception of reality and truth? Can AI be creative? And how is it changing art and innovation? Does AI-generated perfection detach us from reality and genuine human connection?

Henry Ajder is an advisor, speaker, and broadcaster working at the frontier of the generative AI and synthetic media revolution. He advises organizations, including Adobe, Meta, The European Commission, BBC, The Partnership on AI, and The House of Lords, on the opportunities and challenges these technologies present. Previously, Henry led Synthetic Futures, the first initiative dedicated to ethical generative AI and metaverse technologies, bringing together over 50 industry-leading organizations. Henry presented the BBC documentary series, The Future Will be Synthesised.

“Having worked in this space for seven years, really since the inception of DeepFakes in late 2017, for some time, it was possible with just a few hours a day to really be on top of the key kind of technical developments. It's now truly global. AI-generated media have really exploded, particularly the last 18 months, but they've been bubbling under the surface for some time in various different use cases. The disinformation and deepfakes in the political sphere really matches some of the fears held five, six years ago, but at the time were more speculative. The fears around how deepfakes could be used in propaganda efforts, in attempts to destabilize democratic processes, to try and influence elections have really kind of reached a fever pitch. Up until this year, I've always really said, “Well, look, we've got some fairly narrow examples of deepfakes and AI-generated content being deployed, but it's nowhere near on the scale or the effectiveness required to actually have that kind of massive impact.” This year, it's no longer a question of are deepfakes going to be used, it's now how effective are they actually going to be? I'm worried. I think a lot of the discourse around gen AI and so on is very much you're either an AI zoomer or an AI doomer, right? But for me, I don't think we need to have this kind of mutually exclusive attitude. I think we can kind of look at different use cases. There are really powerful and quite amazing use cases, but those very same baseline technologies can be weaponized if they're not developed responsibly with the appropriate safety measures, guardrails, and understanding from people using and developing them. So it is really about that balancing act for me. And a lot of my research over the years has been focused on mapping the evolution of AI generated content as a malicious tool.”

www.henryajder.com
www.bbc.co.uk/programmes/m0017cgr

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

29 Jun 2024Does AI-generated Perfection Detach Us from Reality, Life & Human Connection? - Highlights - HENRY AJDER00:12:09

“Having worked in this space for seven years, really since the inception of DeepFakes in late 2017, for some time, it was possible with just a few hours a day to really be on top of the key kind of technical developments. It's now truly global. AI-generated media have really exploded, particularly the last 18 months, but they've been bubbling under the surface for some time in various different use cases. The disinformation and deepfakes in the political sphere really matches some of the fears held five, six years ago, but at the time were more speculative. The fears around how deepfakes could be used in propaganda efforts, in attempts to destabilize democratic processes, to try and influence elections have really kind of reached a fever pitch. Up until this year, I've always really said, “Well, look, we've got some fairly narrow examples of deepfakes and AI-generated content being deployed, but it's nowhere near on the scale or the effectiveness required to actually have that kind of massive impact.” This year, it's no longer a question of are deepfakes going to be used, it's now how effective are they actually going to be? I'm worried. I think a lot of the discourse around gen AI and so on is very much you're either an AI zoomer or an AI doomer, right? But for me, I don't think we need to have this kind of mutually exclusive attitude. I think we can kind of look at different use cases. There are really powerful and quite amazing use cases, but those very same baseline technologies can be weaponized if they're not developed responsibly with the appropriate safety measures, guardrails, and understanding from people using and developing them. So it is really about that balancing act for me. And a lot of my research over the years has been focused on mapping the evolution of AI generated content as a malicious tool.”

Henry Ajder is an advisor, speaker, and broadcaster working at the frontier of the generative AI and synthetic media revolution. He advises organizations on the opportunities and challenges these technologies present, including Adobe, Meta, The European Commission, BBC, The Partnership on AI, and The House of Lords. Previously, Henry led Synthetic Futures, the first initiative dedicated to ethical generative AI and metaverse technologies, bringing together over 50 industry-leading organizations. Henry presented the BBC documentary series The Future Will be Synthesised.
www.henryajder.com
www.bbc.co.uk/programmes/m0017cgr

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

05 Jul 2024DIANE VON FÜRSTENBERG: Woman in Charge & How AI Will Change Storytelling w/ Oscar-winning Director SHARMEEN OBAID-CHINOY00:10:29

Sharmeen Obaid-Chinoy is an Oscar and Emmy award-winning Canadian-Pakistani filmmaker whose work highlights extraordinary women and their stories. She earned her first Academy Award in 2012 for her documentary Saving Face, about the Pakistani women targeted by brutal acid attacks. Today, Obaid-Chinoy is the first female film director to have won two Oscars by the age of 37. In 2023, it was announced that Obaid-Chinoy would direct the next Star Wars film starring Daisy Ridley. Her most recent project, co-directed alongside Trish Dalton, is the new documentary Diane von Fürstenberg: Woman in Charge, about the trailblazing Belgian fashion designer who invented the wrap dress 50 years ago. The film had its world premiere as the opening night selection at the 2024 Tribeca Festival on June 5th and premiered on June 25th on Hulu in the U.S. and Disney+ internationally. A product of Obaid-Chinoy's incredibly talented female filmmaking team, Woman in Charge provides an intimate look into Diane von Fürstenberg’s life and accomplishments and chronicles the trajectory of her signature dress from an innovative fashion statement to a powerful symbol of feminism.

“I think it's very early for us to see how AI is going to impact us all, especially documentary filmmakers. And so I embrace technology, and I encourage everyone as filmmakers to do so. We're looking at how AI is facilitating filmmakers to tell stories, create more visual worlds. I think that right now we're in the play phase of AI, where there's a lot of new tools and you're playing in a sandbox with them to see how they will develop.

I don't think that AI has developed to the extent that it is in some way dramatically changing the film industry as we speak, but in the next two years, it will. We have yet to see how it will. As someone who creates films, I always experiment, and then I see what it is that I'd like to take from that technology as I move forward.”

www.hulu.com/movie/diane-von-furstenberg-woman-in-charge-95fb421e-b7b1-4bfc-9bbf-ea666dba0b02
https://www.disneyplus.com/movies/diane-von-furstenberg-woman-in-charge/1jrpX9AhsaJ6
https://socfilms.com

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

15 Jul 2024The Future of Energy - RICHARD BLACK - Director, Policy & Strategy, Ember - Fmr. BBC Environment Correspondent00:13:02

Richard Black spent 15 years as a science and environment correspondent for the BBC World Service and BBC News, before setting up the Energy & Climate Intelligence Unit. He now lives in Berlin and is the Director of Policy and Strategy at the global clean energy think tank Ember, which aims to accelerate the clean energy transition with data and policy. He is the author of The Future of Energy and Denied: The Rise and Fall of Climate Contrarianism, and is an Honorary Research Fellow at Imperial College London.

"I guess no one needs AI in the same way that we need oil or food. So, from that point of view, it's a lot easier. AI is fascinating, slightly scary. I find that the amount of discussion of setting it off in a carefully thought through direction is way lower than the amount of fascination with the latest thing that it can do. Often fiction should be our guide to these things or can be a valuable guide to these things. And if we go back to Isaac Asimov and his three laws of robotics, and to all these three very fundamental points that he said should be embedded in all automata, there's no discussion of that around AI, like none. I personally find that quite a hole in the discourse that we're having.”

https://mhpbooks.com/books/the-future-of-energy
https://ember-climate.org/about/people/richard-black
https://ember-climate.org
www.therealpress.co.uk/?s=Richard+black

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

19 Jul 2024AI’s Role in Society, Culture & Climate with CHARLIE HERTZOG YOUNG00:07:02

The planet’s well-being unites us all, from ecosystems to societies, global systems to individual health. How is planetary health linked to mental health?

Charlie Hertzog Young is a researcher, writer and award-winning activist. He identifies as a “proudly mad bipolar double amputee” and has worked for the New Economics Foundation, the Royal Society of Arts, the Good Law Project, the Four Day Week Campaign and the Centre for Progressive Change, as well as the UK Labour Party under three consecutive leaders. Charlie has spoken at the LSE, the UN and the World Economic Forum. He studied at Harvard, SOAS and Schumacher College and has written for The Ecologist, The Independent, Novara Media, Open Democracy and The Guardian. He is the author of Spinning Out: Climate Change, Mental Health and Fighting for a Better Future.

https://charliehertzogyoung.me
https://footnotepress.com/books/spinning-out/

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

25 Jul 2024Utopia in the Age of Survival with S. D. CHROSTOWSKA00:44:50

As Surrealism turns 100, what can it teach us about the importance of dreaming and creating a better society? Will we wake up from the consumerist dream sold to us by capitalism and how would that change our ideas of utopia?

S. D. Chrostowska is professor of humanities at York University, Canada. She is the author of several books, among them Permission, The Eyelid, A Cage for Every Child, and, most recently, Utopia in the Age of Survival: Between Myth and Politics. Her essays have appeared in such venues as Public Culture, Telos, Boundary 2, and The Hedgehog Review. She also coedits the French surrealist review Alcheringa and is curator of the 19th International Exhibition of Surrealism, Marvellous Utopia, which runs from July to September 2024 in Saint-Cirq-Lapopie, France.

“There’s the existing AI and the dream of artificial general intelligence that is aligned with our values and will make our lives better. Certainly, the techno-utopian dream is that it will lead us towards utopia. It is the means of organizing human collectivities, human societies, in a way that would reconcile all the variables, all the things that we can't reconcile because we don't have enough of a fine-grained understanding of how people interact, the different motivations of their psychologies and of societies, of groups, of people. Of course, that's another kind of psychology that we're talking about. So I think the dream of AI is a utopian dream that stands correcting, but it is itself being corrected by those who are the curators of that technology. Now you asked me about the changing role of artists in this landscape. I would say, first of all, that I'm for virtuosity. And this makes me think of AI and a higher level AI, it would be virtuous before it becomes super intelligence.”

https://profiles.laps.yorku.ca/profiles/sylwiac/
www.sup.org/books/title/?id=33445
https://chbooks.com/Books/T/The-Eyelid
https://ciscm.fr/en/merveilleuse-utopie

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

25 Jul 2024AI & How Utopian Visions Shape Our Reality & Future - Highlights - S. D. CHROSTOWSKA00:20:46

“There’s the existing AI and the dream of artificial general intelligence that is aligned with our values and will make our lives better. Certainly, the techno-utopian dream is that it will lead us towards utopia. It is the means of organizing human collectivities, human societies, in a way that would reconcile all the variables, all the things that we can't reconcile because we don't have enough of a fine-grained understanding of how people interact, the different motivations of their psychologies and of societies, of groups, of people. Of course, that's another kind of psychology that we're talking about. So I think the dream of AI is a utopian dream that stands correcting, but it is itself being corrected by those who are the curators of that technology. Now you asked me about the changing role of artists in this landscape. I would say, first of all, that I'm for virtuosity. And this makes me think of AI and a higher level AI, it would be virtuous before it becomes super intelligence.”

S. D. Chrostowska is professor of humanities at York University, Canada. She is the author of several books, among them Permission, The Eyelid, A Cage for Every Child, and, most recently, Utopia in the Age of Survival: Between Myth and Politics. Her essays have appeared in such venues as Public Culture, Telos, Boundary 2, and The Hedgehog Review. She also coedits the French surrealist review Alcheringa and is curator of the 19th International Exhibition of Surrealism, Marvellous Utopia, which runs from July to September 2024 in Saint-Cirq-Lapopie, France.

https://profiles.laps.yorku.ca/profiles/sylwiac/

www.sup.org/books/title/?id=33445
https://chbooks.com/Books/T/The-Eyelid
https://ciscm.fr/en/merveilleuse-utopie

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

31 Jul 2024The SDGs, AI & UN Summit of the Future - GUILLAUME LAFORTUNE - VP, UN SDSN, Paris00:15:09

“The SDSN has been set up to mobilize research and science for the Sustainable Development Goals. Each year, we aim to provide a fair and accurate assessment of countries' progress on the 17 Sustainable Development Goals. The development goals were adopted back in 2015 by all UN member states, marking the first time in human history that we have a common goal for the entire world. Our goal each year with the SDG index is to have sound methodologies and translate these into actionable insights that can generate impactful results at the end of the day. Out of all the targets that we track, only 16 percent are estimated to be on track. This agenda not only combines environmental development but also social development, economic development, and good governance. Currently, none of the SDGs are on track to be achieved at the global level.”

In today's podcast, we talk with Guillaume Lafortune, Vice President and Head of the Paris Office of the UN Sustainable Development Solutions Network (SDSN), the largest global network of scientists and practitioners dedicated to implementing the Sustainable Development Goals (SDGs). We discuss the intersections of sustainability, global progress, the UN Summit of the Future, and the daunting challenges we face. From the impact of war on climate initiatives to transforming data into narratives that drive change, we explore how global cooperation, education, and technology pave the way for a sustainable future and look at the lessons of history and the power of diplomacy in shaping our path forward.

Guillaume Lafortune joined SDSN in 2017 to lead work on SDG data, policies, and financing including the preparation of the annual Sustainable Development Report (which includes the SDG Index and Dashboards). Between 2020 and 2022 Guillaume was a member of The Lancet Commission on COVID-19, where he coordinated the taskforces on “Fiscal Policy and Financial Markets” and “Green Recovery”, and co-authored the final report of the Commission. Guillaume is also a member of the Grenoble Center for Economic Research (CREG) at the Grenoble Alpes University. Previously, he served as an economist at the OECD in Paris and at the Ministry of Economic Development in the Government of Quebec (Canada). Guillaume is the author of 50+ scientific publications, book chapters, policy briefs and international reports on sustainable development, economic policy and good governance.

SDSN's Summit of the Future Recommendations
SDG Transformation Center
SDSN Global Commission for Urban SDG Finance

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

06 Aug 2024Is AI capable of creating a protest song that disrupts oppression & inspires social change? - JAKE FERGUSON, ANTHONY JOSEPH & JERMAIN JACKMAN00:14:58

“There's something raw about The Architecture of Oppression, both part one and part two. There's a raw realness and authenticity in those songs that AI can't create. There's a lived experience that AI won't understand, and there's a feeling in those songs. And it's not just in the words from the spoken word artists, it's in the instruments that are being played. It's in the voice that you hear. You hear the pain, you hear the struggle, you hear the joy, you hear all of those emotions in all of those songs. And that's something that AI can't make up or create.”

Jake Ferguson is an award-winning musician known for his work with The Heliocentrics and as a solo artist under the name The Brkn Record. Alongside legendary drummer Malcolm Catto, Ferguson has composed two film scores and over 10 albums, collaborating with icons like Archie Shepp, Mulatu Astatke, and Melvin Van Peebles. His latest album is The Architecture of Oppression Part 2. The album also features singer and political activist Jermain Jackman, a former winner of The Voice (2014), and the T.S. Eliot Prize-winning poet and musician Anthony Joseph.

“I think as humans, we forget. We are often limited by our own stereotypes, and we don't see that in everyone there's the potential for beauty and love and all these things. And I think The Architecture of Oppression, both parts one and two, are really a reflection of all the community and civil rights work that I've been doing for the same amount of time, really - 25 years. And I wanted to try and mix my day job and my music side, so bringing those two sides of my life together. I wanted to create a platform for black artists, black singers, and poets who I really admire. Jermain is somebody I've worked with for probably about six, seven years now. He's also in the trenches of the black civil rights struggle. We worked together on a number of projects, but it was very interesting to then work with Jermain in a purely artistic capacity. And it was a no-brainer to give Anthony a call for this second album because I know of his pedigree, and he's much more able to put ideas and thoughts on paper than I would be able to.”

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

15 Aug 2024AI, Populism & Consumer Society with Historian FRANK TRENTMANN00:15:50

“The bridge between Out of the Darkness and my previous work, which looked at the transformation of consumer culture in the world, is morality. One thing that became clear in writing Empire of Things was that there's virtually no time or place in history where consumption isn't heavily moralized. Our lifestyle is treated as a mirror of our virtue and sins. And in the course of modern history, there's been a remarkable moral shift in the way that consumption used to be seen as something that led you astray or undermined authority, status, gender roles, and wasted money, to a source of growth, a source of self, fashioning the way we create our own identity. In the last few years, the environmental crisis has led to new questions about whether consumption is good or bad. And in 2015, during the refugee crisis when Germany took in almost a million refugees, morality became a very powerful way in which Germans talked about themselves as humanitarian world champions, as one politician called it. I realized that there's many other topics from family, work, to saving the environment, and of course, with regard to the German responsibility for the Holocaust and the war of extermination where German public discourse is heavily moralistic, so I became interested in charting that historical process."

What can we learn from Germany's postwar transformation to help us address today's environmental and humanitarian crises? With the rise of populism, authoritarianism, and digital propaganda, how can history provide insights into the challenges of modern democracy?

Frank Trentmann is a Professor of History at Birkbeck, University of London, and at the University of Helsinki. He is a prize-winning historian, having received awards such as the Whitfield Prize, Austrian Wissenschaftsbuch/Science Book Prize, Humboldt Prize for Research, and the 2023 Bochum Historians' Award. He has also been named a Moore Scholar at Caltech. He is the author of Empire of Things and Free Trade Nation. His latest book is Out of the Darkness: The Germans 1942 to 2022, which explores Germany's transformation after the Second World War.

www.bbk.ac.uk/our-staff/profile/8009279/frank-trentmann
www.penguin.co.uk/authors/32274/frank-trentmann?tab=penguin-books

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

26 Aug 2024AI, Cognitive Bias & the Future of Journalism w/ Pulitzer Prize-winning Journalist NICHOLAS KRISTOF00:16:26

“There have been some alarming experiments that show AI arguments are better at persuading people than humans are at persuading people. I think that's partly because humans tend to make the arguments that we ourselves find most persuasive. For example, a liberal will make the arguments that will appeal to liberals, but the person you're probably trying to persuade is somebody in the center. We're just not good at putting ourselves in other people's shoes. That's something I try very hard to do in the column, but I often fall short. And with AI, I think people are going to become more vulnerable to being manipulated. I think we're at risk of being manipulated by our own cognitive biases and the tendency to reach out for information sources that will confirm our prejudices. Years ago, the theorist Nicholas Negroponte wrote that the internet was going to bring a product he called the Daily Me—basically information perfectly targeted to our own brains—and that's kind of what we've gotten now. A conservative will get conservative sources that show how awful Democrats are and will have information that buttresses that point of view, while liberals will get the liberal version of that. So, I think we have to try to understand those cognitive biases and understand the degree to which we are all vulnerable to being fooled by selection bias. I'd like to see high schools, in particular, have more information training and media literacy programs so that younger people can learn that there are some news sources that are a little better than others and that just because you see something on Facebook doesn't make it true."

Nicholas D. Kristof is a two-time Pulitzer-winning journalist and Op-ed columnist for The New York Times, where he was previously bureau chief in Hong Kong, Beijing, and Tokyo. Kristof is a regular CNN contributor and has covered, among many other events and crises, the Tiananmen Square protests, the Darfur genocide, the Yemeni civil war, and the U.S. opioid crisis. He is the author of the memoir Chasing Hope, A Reporter's Life, and coauthor, with his wife, Sheryl WuDunn, of five previous books: Tightrope, A Path Appears, Half the Sky, Thunder from the East, and China Wakes.

www.nytimes.com/column/nicholas-kristof
www.penguinrandomhouse.com/books/720814/chasing-hope-by-nicholas-d-kristof

Family vineyard & apple orchard in Yamhill, Oregon: www.kristoffarms.com

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

04 Sep 2024AI, Curiosity, Cognition & Creativity with Neuroscientist DR. JACQUELINE GOTTLIEB00:21:47

“We have an onslaught of information the moment we open our eyes. We evolved to deal with an onslaught of information, and we are masters at focusing and ignoring vast amounts of information. Now, AI in this digital age is a relatively new stream of information, which is man-made, so we make it more salient.  So, yes, it's harder to ignore it, but people can learn to ignore it, and indeed, it's a learning process. I think it will also require learning how to teach our children. I mean, we're raising generations of kids who will take AI and the digital world as a given. To them, it will be no different than a chair and a table were to us. So they will learn to not be so distracted by chairs and tables.”

Dr. Jacqueline Gottlieb is a Professor of Neuroscience and Principal Investigator at Columbia University’s Zuckerman Mind Brain Behavior Institute. Dr. Gottlieb studies the mechanisms that underlie the brain's higher cognitive functions, including decision making, memory, and attention. Her interest is in how the brain gathers the evidence it needs—and ignores what it doesn’t—during everyday tasks and during special states such as curiosity.

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

11 Sep 2024AI, Tech & The Future of Museums - STEPHEN REILY, Founding Director of Remuseum on Transforming Cultural Spaces00:16:06

“The opportunity is that we have never had a public that is more passionate and obsessed with visual imagery. If the owners of the best original imagery in the world can't figure out how to take advantage of the fact that the world has now become obsessed with these treasures that we have to offer as museums, then shame on us. This is the opportunity to say, if you're spending all day scrolling on Instagram looking for amazing imagery, come and see the original source. Come and see the real work. Let us figure out how to make that connection.”

Stephen Reily is the Founding Director of Remuseum, an independent research project housed at Crystal Bridges Museum of American Art in Bentonville, Arkansas. Funded by arts patron David Booth with additional support from the Ford Foundation, Remuseum focuses on advancing relevance and governance in museums across the U.S. He works with museums to create a financially sustainable strategy that is human-focused, centering on inclusion, diversity, and important causes like climate change. During his time as director of the Speed Art Museum in Louisville, KY, Reily presented Promise, Witness, Remembrance, an exhibition in response to the killing of Breonna Taylor and a year of protests in Louisville. In 2022, he co-wrote a book documenting the exhibition. An active civic leader, Reily has served with numerous community organizations and boards, including the Reily Reentry Project, which supports expungement programs for Kentucky citizens, and Creative Capital, which offers grants for the arts, and he founded Seed Capital Kentucky, a non-profit that aims to improve the local food economy. A Yale and Stanford Law graduate, Reily clerked for U.S. Supreme Court Justice John Paul Stevens before launching a successful entrepreneurial career, experiences he draws upon for public engagement initiatives.
https://remuseum.org
https://crystalbridges.org
www.stephenreily.com
www.kentuckypress.com/9781734248517/promise-witness-remembrance
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

21 Sep 2024Can Design Save the World? - SCOTT DOORLEY & CARISSA CARTER - Co-authors of Assembling Tomorrow - Directors of Stanford’s d.School00:57:06

How can we design and adapt for the uncertainties of the 21st century? How do emotions shape our decisions and the way we design the world around us?

Scott Doorley is the Creative Director at Stanford's d.school and co-author of Make Space. He teaches design communication, and his work has been featured in museums, architecture and urbanism publications, and the New York Times. Carissa Carter is the Academic Director at Stanford's d.school and author of The Secret Language of Maps. She teaches courses on emerging technologies and data visualization and received Fast Company and Core77 awards for her work on designing with machine learning and blockchain. Together, they co-authored Assembling Tomorrow: A Guide to Designing a Thriving Future.

“The way we understand the world and how the world actually works is just not mapped perfectly. That kind of leads to problems because we don't know exactly what we're doing in the world. We can't see all the repercussions of the things we create until later on. One silver lining about the technologies we're creating is that technologies like AI could be used to help us with this issue, with the fact that our mental models aren't exactly in line with how the world works. AI is actually very good at predicting and modeling outcomes. It could be used to understand climate change better so that we're able to understand it in a way that allows us to act. It could also help us predict the impacts of the things that we're making. So there's a bit of a silver lining in here, even though it can feel scary to be in a situation where your mental model and how the world works are not in line.”

“I worry that AI is changing my thoughts and can control my thoughts, and that used to sound really far-fetched and now seems sort of middle of the road. I guarantee in a year's time that will sound like a very normal concern. Social listening is very sophisticated. All of the data in the websites that we visit, the data trails that we leave out in the world, are tracking us—our locations, our behaviors, and our habits such that there are many sites out there that can predict exactly what we're thinking and feeling and feed us advertising content or things that aren't even advertising content that can change what our next behaviors are. I think that's getting more and more sophisticated. We have already seen our political elections affected by mass attacks on our social media. When that comes down to our individual agency and behavior, I think that's something we do need to be concerned about. The way that we as individuals can combat it is to be aware that it's happening. Really start to notice the unnoticed, and I still feel optimistic amongst this concern.”

www.scottdoorley.com
www.snowflyzone.com
https://dschool.stanford.edu/
www.penguinrandomhouse.com/books/623529/assembling-tomorrow-by-scott-doorley-carissa-carter-and-stanford-dschool-illustrations-by-armando-veve/

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Image credit: Patrick Beaudouin

21 Sep 2024What is good design? How AI is Shaping Our World? - SCOTT DOORLEY & CARISSA CARTER - Co-authors of Assembling Tomorrow - Highlights00:21:08

“The way we understand the world and how the world actually works is just not mapped perfectly. That kind of leads to problems because we don't know exactly what we're doing in the world. We can't see all the repercussions of the things we create until later on. One silver lining about the technologies we're creating is that technologies like AI could be used to help us with this issue, with the fact that our mental models aren't exactly in line with how the world works. AI is actually very good at predicting and modeling outcomes. It could be used to understand climate change better so that we're able to understand it in a way that allows us to act. It could also help us predict the impacts of the things that we're making. So there's a bit of a silver lining in here, even though it can feel scary to be in a situation where your mental model and how the world works are not in line.”

“I worry that AI is changing my thoughts and can control my thoughts, and that used to sound really far-fetched and now seems sort of middle of the road. I guarantee in a year's time that will sound like a very normal concern. Social listening is very sophisticated. All of the data in the websites that we visit, the data trails that we leave out in the world, are tracking us—our locations, our behaviors, and our habits such that there are many sites out there that can predict exactly what we're thinking and feeling and feed us advertising content or things that aren't even advertising content that can change what our next behaviors are. I think that's getting more and more sophisticated. We have already seen our political elections affected by mass attacks on our social media. When that comes down to our individual agency and behavior, I think that's something we do need to be concerned about. The way that we as individuals can combat it is to be aware that it's happening. Really start to notice the unnoticed, and I still feel optimistic amongst this concern.”

Scott Doorley is the Creative Director at Stanford's d.school and co-author of Make Space. He teaches design communication, and his work has been featured in museums, architecture and urbanism publications, and the New York Times. Carissa Carter is the Academic Director at Stanford's d.school and author of The Secret Language of Maps. She teaches courses on emerging technologies and data visualization and received Fast Company and Core77 awards for her work on designing with machine learning and blockchain. Together, they co-authored Assembling Tomorrow: A Guide to Designing a Thriving Future.

www.scottdoorley.com
www.snowflyzone.com
https://dschool.stanford.edu/
www.penguinrandomhouse.com/books/623529/assembling-tomorrow-by-scott-doorley-carissa-carter-and-stanford-dschool-illustrations-by-armando-veve/

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

27 Sep 2024The Neuroscience of Creativity with DR. BEN SHOFTY00:49:09

Where do creative thoughts come from? How can we harness our stream of consciousness and spontaneity to express ourselves? How are mind-wandering, meditation, and the arts good for our creativity and physical and mental well-being?

Dr. Ben Shofty is a functional neurosurgeon affiliated with the University of Utah. He graduated from the Tel Aviv University Faculty of Medicine, received his PhD from the Israel Institute of Technology during his neurosurgical training, and completed his training at the Tel Aviv Medical Center and Baylor University. He was also an Israeli national rugby player. His practice specializes in neuromodulation and exploring treatments for disorders such as OCD, depression, and epilepsy, among others, while also seeking to understand the science behind creativity, mind-wandering, and the many complexities of the brain.
“I'm one of the people who believe that anything that we as human beings can imagine will eventually happen. So, if somebody has raised the possibility of having brain implants that augment the brain and generate additional functions, I feel like it will eventually happen. There are a lot of private companies, like Elon Musk's Neuralink and others, that are busy designing these interfaces and planning these devices. Of course, nothing is available or even close to completion right now. The next step, of course, would be to modulate them. Just like any other thing in medicine, it will start or has already started with pathological states which we've talked about and people looking for potential interventions through TMS (transcranial magnetic stimulation). It doesn't necessarily have to be invasive, but of course the next step, especially when we're talking about the brain, is to intervene and generate additional functions or to improve the way the brain functions. Many people are working on trying to generate memory augmentation, navigation augmentations, and a lot of other functions. I assume eventually it will reach a point where we'll be able to pick and choose what we want to augment about our own brains. I assume that the technology will be there eventually. And this is something that will be a part of the natural evolution of the human race.”

https://healthcare.utah.edu/find-a-doctor/ben-shofty
https://academic.oup.com/brain/advance-article/doi/10.1093/brain/awae199/7695856

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

27 Sep 2024Neuroscience, AI & The Future of Humanity - DR. BEN SHOFTY - Highlights00:17:34

“I'm one of the people who believe that anything that we as human beings can imagine will eventually happen. So, if somebody has raised the possibility of having brain implants that augment the brain and generate additional functions, I feel like it will eventually happen. There are a lot of private companies, like Elon Musk's Neuralink and others, that are busy designing these interfaces and planning these devices. Of course, nothing is available or even close to completion right now. The next step, of course, would be to modulate them. Just like any other thing in medicine, it will start or has already started with pathological states which we've talked about and people looking for potential interventions through TMS (transcranial magnetic stimulation). It doesn't necessarily have to be invasive, but of course the next step, especially when we're talking about the brain, is to intervene and generate additional functions or to improve the way the brain functions. Many people are working on trying to generate memory augmentation, navigation augmentations, and a lot of other functions. I assume eventually it will reach a point where we'll be able to pick and choose what we want to augment about our own brains. I assume that the technology will be there eventually. And this is something that will be a part of the natural evolution of the human race.”

Dr. Ben Shofty is a functional neurosurgeon affiliated with the University of Utah. He graduated from the Tel Aviv University Faculty of Medicine, received his PhD from the Israel Institute of Technology during his neurosurgical training, and completed his training at the Tel Aviv Medical Center and Baylor University. He was also an Israeli national rugby player. His practice specializes in neuromodulation and exploring treatments for disorders such as OCD, depression, and epilepsy, among others, while also seeking to understand the science behind creativity, mind-wandering, and the many complexities of the brain.

https://healthcare.utah.edu/find-a-doctor/ben-shofty
https://academic.oup.com/brain/advance-article/doi/10.1093/brain/awae199/7695856

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

03 Oct 2024The 15-Minute City: A Solution to Saving Our Time & Our Planet with CARLOS MORENO00:38:28

How can the 15-minute city model revolutionize urban living, enhance wellbeing, and reduce our carbon footprint? Online shopping is turning cities into ghost towns. We can now buy anything, anywhere, anytime. How can we learn to stop scrolling and start strolling, and create more livable, sustainable communities we are happy to call home?

Carlos Moreno was born in Colombia in 1959 and moved to France at the age of 20. He is known for his influential "15-Minute City" concept, embraced by Paris Mayor Anne Hidalgo and leading cities around the world. Scientific Director of the "Entrepreneurship - Territory - Innovation" Chair at the Paris Sorbonne Business School, he is an international expert on the Human Smart City and a Knight of the French Legion of Honour. He is a recipient of the Obel Award and the UN-Habitat Scroll of Honour. His latest book is The 15-Minute City: A Solution to Saving Our Time and Our Planet.

“This is the difference between a technological smart city and a real human smart city towards a 15-minute city as the expression of a human-centered urban approach. This is our challenge for the next decades and our target, to humanize our cities. The Olympic Games in Paris have shown the world that it is possible to recreate, to regenerate a really vibrant city with harmonious life between districts, different places, the role of the Seine River as nature in the presence of a lot of people for having more real livability and not an illusory computer life driven by social networks.”

https://www.moreno-web.net/
https://www.wiley.com/en-us/The+15-Minute+City%3A+A+Solution+to+Saving+Our+Time+and+Our+Planet-p-9781394228140

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

03 Oct 2024The Human Smart City: Balancing Ecology & Economy with CARLOS MORENO - Highlights00:14:22

“This is the difference between a technological smart city and a real human smart city towards a 15-minute city as the expression of a human-centered urban approach. This is our challenge for the next decades and our target, to humanize our cities. The Olympic Games in Paris have shown the world that it is possible to recreate, to regenerate a really vibrant city with harmonious life between districts, different places, the role of the Seine River as nature in the presence of a lot of people for having more real livability and not an illusory computer life driven by social networks.”

Carlos Moreno was born in Colombia in 1959 and moved to France at the age of 20. He is known for his influential "15-Minute City" concept, embraced by Paris Mayor Anne Hidalgo and leading cities around the world. Scientific Director of the "Entrepreneurship - Territory - Innovation" Chair at the Paris Sorbonne Business School, he is an international expert on the Human Smart City and a Knight of the French Legion of Honour. He is a recipient of the Obel Award and the UN-Habitat Scroll of Honour. His latest book is The 15-Minute City: A Solution to Saving Our Time and Our Planet.

https://www.moreno-web.net/
https://www.wiley.com/en-us/The+15-Minute+City%3A+A+Solution+to+Saving+Our+Time+and+Our+Planet-p-9781394228140

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

04 Oct 2024Growth: A Reckoning with Economist DANIEL SUSSKIND00:56:48

How can we look beyond GDP and develop new metrics that balance growth with human flourishing and environmental well-being? How can we be more engaged global citizens? In this age of AI, what does it really mean to be human? And how are our technologies transforming us?

Daniel Susskind is a Research Professor in Economics at King's College London and a Senior Research Associate at the Institute for Ethics in AI at Oxford University. He is the author of A World without Work and co-author of the bestselling The Future of the Professions. Previously, he worked in various roles in the British Government - in the Prime Minister’s Strategy Unit, in the Policy Unit in 10 Downing Street, and in the Cabinet Office. His latest book is Growth: A Reckoning.

“The running theme in all of my work has been technology. The first book that I co-authored with my dad was published in 2015. The second book I wrote was A World Without Work: Technology, Automation, and How We Should Respond, published in 2020, just before the pandemic began. My new book Growth: A Reckoning is about growth, but also technological progress, because what drives growth is technological progress—we have a choice to change the nature of growth, and the same is true of our technological progress. To reach a dynamic economy capable of generating ever more ideas about the world, we need to use the technologies we have to generate new ideas about the world. One of the technologies I've been particularly excited by was AlphaFold, developed by DeepMind to solve protein folding problems in biology. Essentially, understanding the 3D shape of proteins is important for understanding disease and designing effective treatment, but incredibly difficult to figure out, and AlphaFold has solved this problem by providing the 3D structures of millions of proteins. As the only economist in The Institute for Ethics in AI, I’ve always found the moral, ethical side of technology interesting. I often get asked, “What can machines do, and what can they not do?” But I think one of the most troubling, but also one of the most fascinating things about technology is it is forcing us to ask the question “What does it really mean to be human? What is humanity?” For a long time, many people thought the core of what it means to be a human being is to be a creative thing. But with the arrival of generative AI in the last few years, I think that that has been really called into question. These AI systems are particularly good at creative tasks—coming up with original, novel text, images, and video. In fact, I actually use these AI systems to generate bedtime stories with my children—getting the kids to craft a good prompt is quite a fun, intellectually demanding exercise, and these technologies now give my children a storytelling capability that would have been unimaginable only a few years ago. So, one of the interesting philosophical consequences of technologies is that it's challenging some of the complacency and deep-rooted assumptions about what it really means to be a human being.”

www.danielsusskind.com
www.penguin.co.uk/books/446381/growth-by-susskind-daniel/9780241542309

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

04 Oct 2024AI, Technological Progress & the Growth Dilemma w/ Economist DANIEL SUSSKIND - Highlights00:13:06

“The running theme in all of my work has been technology. The first book that I co-authored with my dad was published in 2015. The second book I wrote was A World Without Work: Technology, Automation, and How We Should Respond, published in 2020, just before the pandemic began. My new book Growth: A Reckoning is about growth, but also technological progress, because what drives growth is technological progress—we have a choice to change the nature of growth, and the same is true of our technological progress. To reach a dynamic economy capable of generating ever more ideas about the world, we need to use the technologies we have to generate new ideas about the world. One of the technologies I've been particularly excited by was AlphaFold, developed by DeepMind to solve protein folding problems in biology. Essentially, understanding the 3D shape of proteins is important for understanding disease and designing effective treatment, but incredibly difficult to figure out, and AlphaFold has solved this problem by providing the 3D structures of millions of proteins. As the only economist in The Institute for Ethics in AI, I’ve always found the moral, ethical side of technology interesting. I often get asked, “What can machines do, and what can they not do?” But I think one of the most troubling, but also one of the most fascinating things about technology is it is forcing us to ask the question “What does it really mean to be human? What is humanity?” For a long time, many people thought the core of what it means to be a human being is to be a creative thing. But with the arrival of generative AI in the last few years, I think that that has been really called into question. These AI systems are particularly good at creative tasks—coming up with original, novel text, images, and video. In fact, I actually use these AI systems to generate bedtime stories with my children—getting the kids to craft a good prompt is quite a fun, intellectually demanding exercise, and these technologies now give my children a storytelling capability that would have been unimaginable only a few years ago. So, one of the interesting philosophical consequences of technologies is that it's challenging some of the complacency and deep-rooted assumptions about what it really means to be a human being.”

Daniel Susskind is a Research Professor in Economics at King's College London and a Senior Research Associate at the Institute for Ethics in AI at Oxford University. He is the author of A World without Work and co-author of the bestselling The Future of the Professions. Previously, he worked in various roles in the British Government - in the Prime Minister’s Strategy Unit, in the Policy Unit in 10 Downing Street, and in the Cabinet Office. His latest book is Growth: A Reckoning.

www.danielsusskind.com
www.penguin.co.uk/books/446381/growth-by-susskind-daniel/9780241542309

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

28 Oct 2024AI & The Pathway to Flow with Neuroscientist, Fmr. Dancer DR. JULIA CHRISTENSEN00:16:52

“So, syncopation is now the big thing. It will induce people to groove and to like your music more. So let's have a lot of syncopation inside your music and you'll sell a lot. By chasing superficial beauty, which is what AI gives us at the moment, it aims for perfect outcomes. Not that anything these models produce is perfect, because how do you evaluate perfection? But they are based on the data that most people want to see again. That's extremely important to bear in mind. When you say 'cluttered mind,' it's actually also a cluttered brain in terms of the neurotransmitters out and about. As we strive for that perfect coding and external beauty, our brain releases dopamine signals. Dopamine is good; it's a learning signal to the brain, but we need to know how to use it. Constantly swiping our phone and getting this beauty into our brain via our eyes or via the syncopations in the music teaches our mind to seek that all the time because that's a dopamine signal. It's a learning signal. So, striving after these shapes and sound cues repeatedly clutters your brain. That's why your mind is full.”

Dr. Julia F. Christensen is a Danish neuroscientist and former dancer currently working as a senior scientist at the Max Planck Institute for Empirical Aesthetics in Germany. She studied psychology, human evolution, and neuroscience in France, Spain, and the UK. For her postdoctoral training, she worked in international, interdisciplinary research labs at University College London, City, University of London, and the Warburg Institute, London, and was awarded a postdoctoral Newton International Fellowship by the British Academy. Her new book The Pathway to Flow is about the science of flow, why our brain needs it, and how to create the right habits in our brain to get it.

https://www.linkedin.com/in/dr-julia-f-christensen-36539a144
https://www.instagram.com/dr.julia.f.christensen?igsh=cHZkODgxczJqZmxl

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

08 Nov 2024Photographer & Musician JULIAN LENNON on AI, Art, Empathy & Creativity00:19:21

“I think a lot of joy comes from helping others. One of the things that I've been really focusing on is finding that balance in life, what’s real and what’s true and what makes you happy. How can you help other people feel the same and have a happier life? I think whatever that takes. So if that's charity, if that's photography, if that's documentary, if that's music, and I can do it, then I'm going to do it.

From traveling, especially in Ethiopia, Kenya, and even South America, we just see these scenarios and situations where they don't have enough support or finances. Anything I’m involved in, a good percentage goes to The White Feather Foundation. From what I witnessed, I just wanted to be able to help. My best teacher ever was Mum because I watched her live through life with dignity, grace, respect, and empathy. To me, those are some of the key things that are most important in living life. I think you have to love everybody and yourself. Respect is a real key issue, not only for people but for this world that we live in, Mother Earth. It's of key importance that we honor and respect this beautiful little blue ball that we live on.”

Julian Lennon is a Grammy-nominated singer-songwriter, photographer, documentary filmmaker, and NYTimes bestselling author of the Touch the Earth children’s book trilogy. This autumn, Whispers – A Julian Lennon Retrospective is being presented at Le Stanze della Fotografia, culminating in the publication of Life’s Fragile Moments, his first photography book. It features a compilation of images that span over two decades of Lennon's unique life, career, adventures, and philanthropy. He founded The White Feather Foundation in 2007, whose key initiatives are education, health, conservation, and the protection of indigenous cultures. He was the executive producer of Kiss the Ground and other environmental documentaries and was named a Peace Laureate by UNESCO in 2020.

www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Photo credit: © 2024, Julian Lennon. All rights reserved.
Life’s Fragile Moments, published by teNeues, www.teneues.com, August 2024. 27,5 x 34 cm |10 5/6 x 13 3/8 in., 240 pages, Hardcover, approx. 200 color photographs, texts English & German ISBN: 978-3-96171-614-2

01 Dec 2024AI, Technology, Art & Culture - Artists, Philosophers, Economists & Scientists discuss the Future00:11:06

How can we shape technology’s impact on society? How do social media algorithms influence our democratic processes and personal well-being? Can AI truly emulate human creativity? And how will its pursuit of perfection change the art we create?

Daniel Susskind (Economist · Oxford & King’s College London · Author of Growth: A Reckoning · A World Without Work) shares insights on the nature of growth driven by technological progress. He contends that while technology can accelerate growth, its impacts can be consciously directed to reduce environmental damage and social inequalities. According to Susskind, the current trajectory of technological progress needs reevaluation to mitigate potential adverse effects on future working lives.

Arash Abizadeh (Professor of Political Science · McGill University · Author of Hobbes and the Two Faces of Ethics · Associate Editor · Free & Equal) explores the ethical tensions between democratic needs and commercial imperatives of social media platforms. He highlights how algorithms designed to maximize engagement often foster outrage and fear, contrasting these commercial objectives with the requirements for a healthy democratic public sphere.

Debora Cahn (Creator & Executive Producer of Netflix’s The Diplomat starring Keri Russell & Rufus Sewell · Exec. Producer Homeland · Grey’s Anatomy · Vinyl · Co-Producer The West Wing) toggles between apprehension and optimism about emerging technologies like AI. She reflects on her father's experience with nuclear technology and ponders the unpredictable impacts of AI, drawing parallels with the unforeseen transformation of the internet.

Julia F. Christensen (Neuroscientist - Author of The Pathway To Flow: The New Science of Harnessing Creativity to Heal and Unwind the Body & Mind) examines the rise of AI and its influence on aesthetics in the arts. She argues that technology drives creators towards superficial beauty conforming to popular standards, thereby cluttering the mind and fostering an obsession with perfection fueled by dopamine signals.

Julian Lennon (Singer-songwriter · Documentary Filmmaker · Founder of The White Feather Foundation · Photographer/Author of Life’s Fragile Moments) discusses AI's potential in the medical field, highlighting recent advancements that are paving the way for novel treatments and cures. While acknowledging the importance of copyright issues, he remains optimistic about AI’s positive impact on healthcare.

Brian David Johnson (Author of The Future You: How to Create the Life You Always Wanted · Director of the Arizona State University’s Threatcasting Lab · Futurist in Residence at ASU’s Center for Science & the Imagination) emphasizes the importance of maintaining a human-centric approach to technology. He questions the purpose behind technological advancements, urging developers to always consider the human impact and clarify their objectives.

To hear more from each guest, listen to their full interviews.

Episode Website

www.creativeprocess.info/pod

Instagram: @creativeprocesspodcast

13 Dec 2024Elon Musk, Putin's Russia, Murdoch's Fox News: How Billionaires Shape Our World with DARRYL CUNNINGHAM00:42:02

What influence do billionaires have on politics, journalism, and the technology that shapes our lives? What drives people to seek absolute power, and how can we hold them accountable?

Darryl Cunningham is a cartoonist and author of Science Tales, Psychiatric Tales, The Age of Selfishness, and Billionaires: The Lives of the Rich and Powerful. Cunningham is also well known for his comic strips, which have been featured on the websites Forbidden Planet and Act-i-vate collective, among others. His more recent work includes a graphic novel on Elon Musk, titled Elon Musk: Investigation into a New Master of the World.

“It's far too early to say how AI is going to shake out. A lot of it will come to nothing, like many new technologies. VR came along, and people thought it would be a big thing, but it became a niche for a few kinds of people. AI might find a place ultimately, but it has to come from people. We have to make choices. Will people be happy with processed movies done with a few keywords, or will they want to hear the actual voice of a human being? In the end, it's up to the audience, and that's us. We will shape it.”

Episode Website with Feature Article

www.creativeprocess.info/pod

Instagram: @creativeprocesspodcast

11 Jan 2025The Club of Rome & The Limits to Growth w/ Co-President PAUL SHRIVASTAVA00:44:15

Less than two weeks into the new year, the world’s wealthiest 1% have already used their fair share of the global carbon budget allocated for 2025. 2024 was the hottest year on record. How can we change our extractive mindset to a regenerative mindset? How can we evolve our systems from economic growth to a vision of regenerative living and eco-civilization?

Paul Shrivastava is Co-President of The Club of Rome and a Professor of Management and Organisations at Pennsylvania State University. He founded the UNESCO Chair for Arts and Sustainable Enterprise at ICN Business School, Nancy, France, and the ONE Division of the Academy of Management. He was the Executive Director of Future Earth, where he established its secretariat for global environmental change programs, and has published extensively on both sustainable management and crisis management.

“Climate change is here. It's already causing devastation to the most vulnerable populations. We are living with an extractive mindset, where we are extracting one way out of the life system of the Earth. We need to change from that extractive mindset to a regenerative mindset. And we need to change from the North Star of economic growth to a vision of eco civilizations. Those are the two main principles that I want to propose and that the Club of Rome suggests that we try to transform our current organization towards regenerative living and eco civilization.”

Episode Website

www.creativeprocess.info/pod

Instagram: @creativeprocesspodcast

Photo credit: Penn State. Creative Commons

14 Jan 2025AI & The Limits to Growth w/ Co-President of The Club of Rome PAUL SHRIVASTAVA00:15:08

“I think AI is sort of inevitable in some ways. It is not very intelligent right now; it is probably closer to artificial stupidity, but it's a question of time before it becomes smarter and smarter. We need to tackle the right to use question and the value question now as it is developing. It can amplify both the positive possibilities as well as the negative consequences, and we want to make sure that it benefits the largest number of people on Earth.

And systems themselves. Are there guidelines? Are there principles? The Club of Rome group has subgroups who are looking at AI, proposing a constitution for AI, and trying to influence its development, understanding fully well that almost $300 billion has been poured into AI already by the United States venture capital, and it is going to start having impacts. We can't stop it, but while the train is moving, we are trying to make sure some guardrails get into place that everybody plays by. All these transformations cannot be done one by one; they have to happen together in order to have an overall impact, and that is the challenge that not a single organization like the Club of Rome or a university or somebody can accomplish alone. All of us need to get involved.”

Paul Shrivastava is Co-President of The Club of Rome and a Professor of Management and Organisations at Pennsylvania State University. He founded the UNESCO Chair for Arts and Sustainable Enterprise at ICN Business School, Nancy, France, and the ONE Division of the Academy of Management. He was the Executive Director of Future Earth, where he established its secretariat for global environmental change programs, and has published extensively on both sustainable management and crisis management.

Episode Website

www.creativeprocess.info/pod

Instagram: @creativeprocesspodcast
