
AI with AI: Artificial Intelligence with Andy Ilachinski (CNA)
Explore every episode of AI with AI: Artificial Intelligence with Andy Ilachinski
Pub. Date | Title | Duration | |
---|---|---|---|
13 Nov 2020 | The Rosetta Drone | 00:34:25 | |
In COVID-related AI news, MIT researchers have published a machine learning algorithm that can diagnose COVID-19 from the sounds of a person’s forced cough. And the US Veterans Affairs Department rolls out a machine learning tool to predict mortality rates of COVID-19 patients. In non-COVID news, the JAIC releases the Department of Defense’s AI Education Strategy, which contains a detailed description of requirements, required instruction, and competencies. DoD also releases a new electromagnetic spectrum strategy, which contains a number of machine-learning mentions. And Tesla begins making its “full self-driving beta” available to a small number of “expert and careful drivers.” Researchers from MIT CSAIL have created a machine learning system that can reportedly decipher “lost” languages; they built it on several insights from historical linguistics, such as the observation that languages generally evolve only in certain predictable ways (such as sound substitutions). In other language news, Facebook makes available a machine learning model that can translate directly between 100 different languages (rather than using English as a go-between). Researchers from Caltech and Purdue create a “Fourier neural operator” that can solve parametric partial differential equations nearly 1,000 times faster than traditional solvers. And research from the University of Waterloo looks at “less-than-one”-shot learning, attempting to allow an AI to learn with almost no data (and thus recognize more object classes than the number of examples it was trained on). Click here to visit our website and explore the links mentioned in the episode. | |||
02 Jul 2022 | the sentience of the lamdas | 00:41:02 | |
Andy and Dave discuss the latest in AI news and research, starting with the Department of Defense releasing its Responsible AI Strategy. In the UK, the Ministry of Defence publishes its Defence AI Strategy. The Federal Trade Commission warns policymakers about relying on AI to combat online problems and instead urges them to develop legal frameworks to ensure AI tools do not cause additional harm. YouTuber Yannic Kilcher trains an AI on 4chan’s “infamously toxic” Politically Incorrect board, creating a predictably toxic bot, GPT-4chan; he then uses the bot to generate 15,000 posts on the board, quickly receiving condemnation from the academic community. Google suspends and then fires an engineer who claimed that one of its chatbots, LaMDA, had achieved sentience; former Google employees Gebru and Mitchell write an opinion piece saying they warned this would happen. For the Fun Site of the Week, a mini version of DALL-E comes to Hugging Face. And finally, IBM researcher Kush Varshney joins Andy and Dave to discuss his book, Trustworthy Machine Learning, which provides AI researchers with practical tools and concepts for developing machine learning systems. | |||
10 Jul 2020 | Crime & Publishment | 00:39:15 | |
It’s a week of huge announcements! But first, in COVID-related AI news, Andy and Dave discuss a review paper in Chaos, Solitons, and Fractals that provides a more international focus on the role of AI and ML in COVID research. CSAIL teams with Ava Robotics to design a robot that maneuvers between waypoints and disinfects warehouse surfaces with UV-C light. C3.ai Digital Transformation Institute awards $5.4M to 26 AI researchers for projects related to COVID-19. In non-COVID news, the Association for Computing Machinery calls for the immediate suspension of facial recognition technologies until they are more mature and reliable. US lawmakers have introduced a bill that would ban police use of facial recognition, while separate bills seek to increase the AI talent available to the Department of Defense and to realign and rewire the JAIC within DoD. Over 2,300 researchers sign a petition asking Springer Nature to reject a publication from Harrisburg University, which developed facial recognition software to predict whether somebody was going to be a criminal. Meanwhile, researchers from Stanford demonstrate the problem of reproducibility by giving a data set of brain scans to 70 different research teams; no two teams chose the same workflow to analyze the data, and the final conclusions showed sizeable variation. In a similar vein, researchers at Duke University examine the historical record of brain scan research and find poor correlation across experiments. In research, the “best paper” award at the Conference on Computer Vision and Pattern Recognition goes to a team from Oxford, who use unsupervised learning methods and symmetry to convert single 2D images into 3D models. Researchers at Uber, the University of Toronto, and MIT use 3D simulated worlds to generate synthetic data for training LiDAR systems on self-driving vehicles. Calum MacKellar publishes Cyborg Mind, a look into the future of cyberneuroethics. And Johns Hopkins prepares for a second seminar on Operationalizing AI in Health. Click here to visit our website and explore the links mentioned in the episode. | |||
01 Jan 2021 | The 4-Bit Blopera | 00:35:09 | |
In COVID-related AI news, Andy and Dave discuss the results of the C3.ai COVID-19 challenge. In regular AI news, the US Air Force announces ARTUµ, an AI that controlled a military plane for the first time. A Nature publication maps the AI collaboration links between institutions over the last five years. The IBM T.J. Watson Research Center publishes research on 4-bit training of deep neural networks to accelerate the training process. Researchers at Oregon State University publish advances with a new type of optical sensor that can naturally detect moving objects. And the Naval Surface Warfare Center at Crane, along with ONR, announces a prize challenge for AI in Small Unit Maneuver (AISUM). In meta-research, researchers create a graph-based toolkit for analysis and comparison of games. Other research examines the fossil record to discover patterns in Earth’s biological mass extinction events. In the book of the week, the US Army War College Class of 2020 publishes an Estimation of Technological Convergence. György Buzsáki’s The Brain from the Inside Out takes a different look at how the brain functions. And for the holidays, Andy and Dave play around with Google’s Blob Opera singers. | |||
11 Jun 2021 | Someday My ‘Nets Will Code | 00:45:01 | |
Information about the AI Event Series mentioned in this episode: https://twitter.com/CNA_org/status/1400808135544213505?s=20 To RSVP contact Larry Lewis at LewisL@cna.org. Andy and Dave discuss the latest in AI news, including a report on Libya from the UN Security Council’s Panel of Experts, which notes the March 2020 use of the “fully autonomous” Kargu-2 to engage retreating forces; it’s unclear whether any person died in the incident, and many other important details are missing. The Biden Administration releases its FY22 DoD Budget, which increases the RDT&E request, including $874M in AI research. NIST proposes an evaluation model for user trust in AI and seeks feedback; the model includes definitions for terms such as reliability and explainability. EleutherAI has provided an open-source version of GPT-3, called GPT-Neo, which uses an 825GB data “Pile” for training and comes in 1.3B and 2.7B parameter versions. CSET takes a hands-on look at how transformer models such as GPT-3 can aid disinformation, with their findings published in Truth, Lies, and Automation: How Language Models Could Change Disinformation. IBM introduces a project aimed at teaching AI to code, with CodeNet, a large dataset containing 500 million lines of code across 55 legacy and active programming languages. In a separate effort, researchers at Berkeley, Chicago, and Cornell publish results on using transformer models as “code generators,” creating a benchmark (the Automated Programming Progress Standard) to measure progress; they find that GPT-Neo could pass approximately 15% of introductory problems, with GPT-3’s 175B parameter model performing much worse (presumably due to the inability to fine-tune the larger model). The CNA Russia Studies Program releases an extensive report on AI and Autonomy in Russia, capping off their biweekly newsletters on the topic. Arthur Holland Michel publishes Known Unknowns: Data Issues and Military Autonomous Systems, which clearly identifies the known issues in autonomous systems that cause problems. The short story of the week comes from Asimov in 1956, with “Someday.” And the Naval Institute Press publishes a collection of essays in AI at War: How Big Data, Artificial Intelligence, and Machine Learning Are Changing Naval Warfare. Finally, Diana Gehlhaus from Georgetown’s Center for Security and Emerging Technology (CSET) joins Andy and Dave to preview an upcoming event, “Requirements for Leveraging AI.” Interview with Diana Gehlhaus: 33:32 Click here to visit our website and explore the links mentioned in the episode. | |||
25 Jun 2021 | Reward of the Coprophages | 00:33:44 | |
Andy and Dave discuss the latest in AI news, including the launch of the National AI Research Resource Task Force, which will serve as a federal advisory committee and produce at least two reports to Congress (a roadmap and implementation plan) by November 2022. Google and Harvard University release a 1.4 PB reconstruction of a cubic millimeter of human brain tissue. Google reports a deep reinforcement-learning system that outperforms humans in designing floorplans for microchips, in both speed and efficiency. Researchers from the UK, Germany, and China fuse electronics to the Madagascar hissing cockroach to create an insect-computer hybrid for autonomous search and rescue. The Navy’s MQ-25 tanker drone refuels a manned aircraft for the first time. Researchers use large-scale experiments and machine learning to discover a greater hierarchy of theories of human decision-making. OpenAI introduces a Process for Adapting Language Models to Society (PALMS) as a way to try to mitigate bias in transformer models such as GPT-3. A concept paper from DeepMind argues that reward maximization is enough to constitute a solution to artificial general intelligence. And Richard Sutton and Andrew Barto publish the second edition of Reinforcement Learning: An Introduction. Click here to visit our website and explore the links mentioned in the episode. | |||
16 Jul 2021 | GPT Is My CoPilot | 00:35:26 | |
Andy and Dave discuss the latest in AI news, including a report that the Israel Defense Forces used a swarm of small drones in mid-May in Gaza to locate, identify, and attack Hamas militants, using Thor, a 9-kilogram quadrotor drone. A paper in the Journal of the American Medical Association examines an early warning system for sepsis and finds that it misses most instances of sepsis (67% of cases) and frequently issues false alarms (the developer contests the results). A new bill, the Consumer Safety Technology Act, directs the US Consumer Product Safety Commission to run a pilot program to use AI to help in safety inspections. A survey from FICO on The State of Responsible AI (2021) shows, among other things, a lack of interest in the ethical and responsible use of AI among business leaders (with 65% of companies saying they can’t explain how specific AI model predictions are made, and only 22% of companies having an AI ethics board to consider questions on AI ethics and fairness). In a similar vein, a survey from the Pew Research Center and Elon University’s Imagining the Internet Center found that 68% of respondents (from across 602 leaders in the AI field) believe that AI ethical principles will NOT be employed by most AI systems within the next decade; the survey includes a summary of the respondents’ worries and hopes, as well as some additional commentary. GitHub partners with OpenAI to launch Copilot, a “programming partner” that uses contextual cues to suggest new code. Researchers from Stanford University, UC San Diego, and MIT introduce Physion, a visual and physical prediction benchmark to measure predictions about commonplace real-world physical events (such as when objects collide, drop, roll, or domino). CSET releases a report on Machine Learning and Cybersecurity: Hype and Reality, finding that it is unlikely that machine learning will fundamentally transform cyber defense. Bengio, LeCun, and Hinton join together to pen a white paper on the role of deep learning in AI, not surprisingly eschewing the need for symbolic systems. Aston Zhang, Zack C. Lipton, and Alex J. Smola release the latest version of Dive into Deep Learning, now over 1,000 pages and living only as an online version. Follow the link below to visit our website and explore the links mentioned in the episode. | |||
03 Jun 2022 | Top Gan: Swarmaverick | 00:37:26 | |
Andy and Dave discuss the latest in AI news and research, starting with an announcement that DoD will be updating its Directive 3000.09 on “Autonomy in Weapon Systems,” with the new Emerging Capabilities Policy Office leading the way [1:25]. The DoD names Diane Staheli as the new chief for Responsible AI [5:19]. NATO launches an AI strategic initiative, Horizon Scanning, to better understand AI and its potential military implications [6:31]. China unveils an autonomous drone carrier ship, though Dave wonders about the use of the terms “unmanned” and “autonomous” [8:59]. Stanford’s Institute for Human-Centered AI builds on its foundation models initiative by releasing a call to the community for developing norms on the release of foundation models [10:42]. DECIDE-AI continues to develop its reporting guidelines for early-stage clinical evaluation of AI decision support systems [14:39]. The Army successfully demonstrates four waves of seven drones, launched by a single operator, during EDGE 22 [18:31]. Researchers from Zhejiang University and Hong Kong University of S&T demonstrate a swarm of physical micro flying robots, fully autonomous, able to navigate and communicate as a swarm, with fully onboard perception, localization, and control [19:58]. Google Research introduces a new text-to-image generator, Imagen, which uses diffusion models to increase the size and photorealism of an image [24:20]. Researchers discover that an AI algorithm can identify race from X-ray and CT images, even when correcting for variations such as body-mass index, but they cannot explain why or how [31:21]. And Sonantic uses AI to create the voice lines for Val Kilmer in the new movie Top Gun: Maverick [34:18]. RSVP for AI and National Security: Gender, Race, and Algorithms at 12:00 pm on June 7. | |||
11 Feb 2022 | Xenopus in Boots | 00:41:39 | |
Andy and Dave discuss the latest in AI news and research, including a report from the School of Public Health in Boston that shows why most “data for good” initiatives failed to impact the COVID-19 health crisis [0:45]. The Department of Homeland Security tests the use of robot dogs (from Ghost Robotics) for border patrol duties [5:00]. Researchers find that public trust in AI varies greatly depending on its application [7:52]. Researchers from Stanford University and the Toyota Research Institute find extensive label and model errors in training data; for example, over 70% of validation scenes in publicly available autonomous vehicle datasets contain at least one missing object box [12:05]. And principal researchers Josh Bongard and Mike Levin join Andy and Dave for more discussion on the latest Xenobots research [18:21]. Follow the link below to visit our website and explore the links mentioned in this episode. https://www.cna.org/CAAI/audio-video | |||
11 Dec 2020 | Poetein Folding | 00:33:01 | |
In COVID-related AI news, Andy and Dave discuss a Facebook model that provides county-level forecasts on the spread of COVID-19. In non-COVID AI news, DeepMind’s AlphaFold 2 won the 14th biennial Critical Assessment of Structure Prediction (CASP), scoring above 90 on a global distance test for around two-thirds of the test proteins. Partnership on AI establishes the AI Incident Database (AIID) to provide an open-access resource on failures of AI systems, currently containing over 1,000 publicly available “incident reports.” CSET publishes a report, “‘Cool Projects’ or ‘Expanding the Efficiency of the Murderous American War Machine?’”, which examines the perspectives of US AI industry professionals toward working on Department of Defense-funded AI projects. The UN, in conjunction with Trend Micro Research and the European Cybercrime Centre, releases a report on Malicious Uses and Abuses of AI, which highlights the potential physical impacts of hackers on autonomous- and AI-related technologies. And LtGen Michael Groen, the new Director of the Joint AI Center, provides an overview of the JAIC’s goals and objectives. In research, NVIDIA, Rice University, and Caltech publish the BONGARD-LOGO benchmark set, an expansion of the Bongard Problems, which provides free-form shape concepts to test context-dependent perception, analogy-making perception, and perception with few samples. Joshua C. Gellers provides the book of the week, examining the case for Rights for Robots. And Google AI releases Verse by Verse, which draws upon the writings of various poets to help users generate their own poems, of which Andy and Dave both share examples. Click here to visit our website and explore the links mentioned in the episode. | |||
11 Mar 2022 | Slightly Unconscionable | 00:38:25 | |
Andy and Dave discuss the latest in AI news and research, including a GAO report on AI – Status of Developing and Acquiring Capabilities for Weapon Systems [1:01]. The U.S. Army has awarded a contract for the demonstration of an offensive drone swarm capability (the HIVE small Unmanned Aircraft System), seemingly similar to but distinct from DARPA’s OFFSET demo [4:11]. A ‘pitch deck’ from Clearview AI reveals its intent to expand beyond law enforcement and to have 100B facial photos in its database within a year [5:51]. Tortoise Media releases a global AI index that benchmarks nations based on their level of investment, innovation, and implementation of AI [7:57]. Research from UC Berkeley and Lancaster University shows that humans can no longer distinguish between real and fake (GAN-generated) faces [10:30]. MIT, Aberdeen, and the Centre for the Governance of AI look at trends of computation in machine learning, identifying three eras and trends, including a ‘large-scale model’ trend in which large corporations run massive training runs [13:37]. A tweet from the chief scientist at OpenAI, speculating that today’s large neural networks may be ‘slightly conscious,’ sparks much discussion [17:23]. A white paper in the International Journal of Astrobiology examines what intelligence might look like at the planetary level, placing Earth as an immature technosphere [19:04]. And Kush Varshney at IBM publishes for open access a book on Trustworthy Machine Learning, examining issues of trust, safety, and much more [21:29]. Finally, CNA Russia Studies Program member Sam Bendett returns for a quick update on autonomy and AI in the Ukraine-Russia conflict [23:30]. https://www.cna.org/CAAI/audio-video | |||
17 Jun 2022 | RAI, consumers’ co-operative | 00:44:23 | |
CNA colleagues Kaia Haney and Heather Roff join Andy and Dave to discuss Responsible AI. They discuss the recent Inclusive National Security seminar on AI and National Security: Gender, Race, and Algorithms. The keynote speaker, Elizabeth Adams, spoke on the challenges that society faces in integrating AI technologies in an inclusive fashion, and she identified ways in which consumers of AI-enabled products can ask questions and engage on the topic of inclusivity and bias. The group also discusses the many challenges that organizations face in operationalizing these ideas, including a revisit of the findings from recent medical research, which found an algorithm was able to identify the race of a subject from X-rays and CT scans, even with identifying features removed.
Links: the Inclusive National Security Series seminar “AI and National Security: Gender, Race and Algorithms”; the Inclusive National Security webpage; and the InclusiveNatSec mailing list sign-up. | |||
05 Mar 2021 | The Little Ingenuity That Could | 00:38:11 | |
Andy and Dave discuss the latest AI news, including the Mars landing of Perseverance and its AI-related capabilities, along with its mini-helicopter, Ingenuity. Researchers from Liverpool use machine learning to predict which mammalian hosts can generate novel coronaviruses. Researchers from Estonia and France create artificial human genomes using generative neural networks. A coalition of over 40 organizations has written a letter asking President Biden to ban the federal use and funding of facial recognition technology. The law firm Gibson Dunn releases its 2020 Annual Review of AI and Automated Systems, which also contains a great summary of policy and regulatory developments over the last year. In research, scientists at the Commonwealth Scientific and Industrial Research Organisation in Australia use AI to manipulate human behavior, steering participants toward particular actions. Researchers in the Netherlands demonstrate that predictive coding in recurrent neural networks naturally arises as a consequence of minimizing energy consumption. Research in Nature Communications demonstrates a multisensory neural network that integrates information from all five human senses. The report of the week comes from CSET author Matthew Mittelsteadt, which describes AI Verification: Mechanisms to Ensure AI Arms Control Compliance. The first book of the week comes from Moritz Hardt, on Patterns, Predictions, and Actions: A story about machine learning. And the fun site of the week takes a look at the works of painter Wassily Kandinsky, who was also a synesthete (experiencing the fusion of the senses), and offers insights into what he might have heard from looking at his paintings. The second book of the week provides some great information on Synaesthesia – Opinions and Perspectives. Listeners Survey: https://bit.ly/3bqyiHk Click here to visit our website and explore the links mentioned in the episode. | |||
17 Sep 2021 | Horrorscope | 00:43:54 | |
Andy and Dave discuss the latest in AI news and research, including: 0:57: The Allen Institute for AI and others come together to create a publicly available “COVID-19 Challenges and Directions” search engine, building off the corpus of COVID-related research. 5:06: Researchers with the University of Warwick perform a systematic review of test accuracy for the use of AI in image analysis of breast cancer screening and find that most (34 of 36) AI systems were less accurate than a single radiologist, and all were less accurate than a consensus of two or more radiologists (among other concerning findings). 10:19: A US judge rejects an appeal for the AI system DABUS to be named as an inventor on a patent, noting that US federal law requires an “individual” to be an inventor, and the legal definition of an “individual” is a natural person. 17:01: The US Patent and Trademark Office uses machine learning to analyze the history of AI in patents. 19:42: BCS publishes Priorities for the National AI Strategy, as the UK seeks to set global AI standards. 20:42: In research, MIT, Northeastern, and U Penn explore the challenges of discerning emotion from a person’s facial movements (which largely depends on context) and highlight the reasons why facial recognition algorithms will struggle with this task. 28:02: GoogleAI uses diffusion models to generate high-fidelity images; the approach slowly adds noise to corrupt the training data and then uses a neural network to reverse that corruption. 35:07: Springer-Verlag makes AI for a Better Future, by Bernd Carsten Stahl, available for open access. 36:19: Thomas Smith, co-founder of Gado Images, chats with GPT-3 about the COVID-19 pandemic and finds that it provides some interesting responses to his questions. Follow the link below to visit our website and explore the links mentioned in the episode. | |||
06 May 2022 | Leggo my Stego! | 00:28:33 | |
Andy and Dave discuss the latest in AI news and research, including a report from the Government Accountability Office recommending that the Department of Defense improve its AI strategies and other AI-related guidance [1:25]. Another GAO report finds that the Navy should improve its approach to uncrewed maritime systems, particularly its lack of accounting for the full costs to develop and operate such systems, and recommends the Navy establish an “entity” with oversight of the portfolio [4:01]. The Army is set to launch a swarm of 30 small drones during the 2022 Experimental Demonstration Gateway Exercise (EDGE 22), which will be the largest group of air-launched effects the Army has tested [5:55]. DoD announces its new Chief Digital and AI Officer, Dr. Craig Martell, former head of machine learning for Lyft and former professor at the Naval Postgraduate School [7:47]. And the National Geospatial-Intelligence Agency (NGA) takes over operational control of Project Maven’s GEOINT AI services [9:55]. Researchers from Princeton and the University of Chicago create a deep learning model of “superficial face judgments,” that is, the first impressions humans form of what people are like based on their faces; the researchers note that their dataset deliberately reflects bias [12:05]. Researchers from MIT, Cornell, Google, and Microsoft present a new method for completely unsupervised label assignment for images, STEGO (self-supervised transformer with energy-based graph optimization), which allows the algorithm to find consistent groupings of labels in a largely automated fashion [18:35]. And elicit.org provides a “research discovery” tool, leveraging GPT-3 to provide insights and ideas on research topics [24:24]. Careers: https://us61e2.dayforcehcm.com/CandidatePortal/en-US/CNA/Posting/View/1624 RSVP for AI and National Security: Gender, Race, and Algorithms at 12:00 pm EST on June 7 at https://www.eventbrite.com/e/ai-and-national-security-gender-race-and-algorithms-tickets-332642301077?aff=Podcast | |||
07 Aug 2020 | Remember, Remember, the Fakes of November | 00:39:21 | |
In COVID-related AI news, Andy and Dave discuss an article from Wired that describes how COVID confounded most predictive models (in areas such as finance). And NIST investigates the effect of face masks on facial recognition software. In regular AI news, CSET and the Bipartisan Policy Center release a report on “AI and National Security,” the first of four “meant to be a roadmap for Washington’s future efforts on AI.” The Intelligence Community releases its AI Ethics Principles and AI Ethics Framework. Researchers from the University of Chicago announce “Fawkes,” a way to “cloak” images and befuddle facial recognition software. In research, OpenAI demonstrates that GPT-2, a generator designed for text, can also generate pixels (instead of words) to fill out 2D pictures. Researchers at Texas A&M, the University of S&T of China, and the MIT-IBM Watson AI Lab create a 3D adversarial logo to cloak people from facial recognition. And other research explores how the brain rewires when given an additional thumb. CSET publishes Deepfakes: A Grounded Threat Assessment. And MyHeritage provides a “photo enhancer” that uses machine learning to restore old photos. Click here to visit our website and explore the links mentioned in the episode. | |||
02 Apr 2021 | The Earth Dies Dreaming | 00:37:27 | |
Andy and Dave discuss the latest in AI news, including a letter from the National Transportation Safety Board that asks the National Highway Traffic Safety Administration to regulate autonomous vehicles and driver assistance technologies more strictly; of note, the letter also uses Tesla as an example, stating that the company is using its customers to beta test its full self-driving technology on public roads. KPMG surveys business leaders on a variety of AI-related topics and finds that, among other things, many more leaders have the perception that AI tech is moving too quickly. Researchers at Aston University announce a three-year study to explore the utility of human brain stem cells grown on a microchip, a so-called Neu-ChiP. Researchers from Norway and Australia unveil DyRET, a quadruped robot that can adapt its morphology (such as growing taller or shorter) as it encounters different environments. And Japanese researchers describe a decoded neurofeedback (DecNef) method, which uses fMRI to visualize brain activity and then calculate the similarity between real-time brain activity and brain activity patterns corresponding to specific pre-established memory and mental states. Microsoft’s PowerPoint has a Presenter Coach that will listen to and watch your presentation and give you pointers on speech patterns, pacing, attention, body language, and other attributes. The two main research items both involve AI agents playing in the Arcade Learning Environment (57 games from Atari’s library), both with groundbreaking results in different ways: Uber AI and OpenAI use a model-free approach in Go-Explore, which uses a concept of “first return (to previous states), then explore”; GoogleAI uses a world-model approach with DreamerV2, which learns behaviors inside a separately trained world model (they also recommend a “clipped record mean” to aggregate scores across the various games). The survey of the week looks at Deepfakes Generation & Detection. Marjorie McShane and Sergei Nirenburg publish Linguistics for the Age of AI, arguing that researchers must place linguistics front and center for machines to achieve human-level language understanding, with big data and stats approaches as contributing methods. And in the video of the week, Steven Gouveia has produced a documentary on The Age of AI. Listeners Survey: https://bit.ly/3bqyiHk Click here to visit our website and explore the links mentioned in the episode. | |||
31 Jul 2020 | Atlas Surveilled | 00:39:36 | |
In COVID-related AI news, Andy and Dave discuss research that provides a comprehensive survey on applications of AI in fighting COVID-19. The Stanford Institute for Human-Centered AI and the AI Initiative at the Future Society launch a global alliance: Collective and Augmented Intelligence against COVID-19 (CAIAC). MIT and the IBM Watson AI Lab publish a paper that suggests a computational limit to progress in deep learning. The Atlas of Surveillance provides an open-source look at technologies that law enforcement agencies are using across the US, to include facial recognition and drones. Similarly, Surfshark has compiled information on the status of facial recognition technology around the globe, along with additional useful information. MIT finds systematic shortcomings in the ImageNet dataset, with an observation that the crowdsourcing data collection pipeline can cause “misalignments.” Research from Google Brain shows that “self-attention” can allow agents to identify task-critical visual hints and ignore task-irrelevant elements. UC Berkeley, Google, CMU, and Facebook demonstrate “one policy to rule them all,” where they use one global policy to control the movement of a wide variety of agent morphologies (which would normally require training and tuning for each separate morphology). The Army’s Cyber Institute releases the “Invisible Force” graphic novel, which examines potential uses of AI technology in a future fictional scenario. Alife 2020 makes a compilation of its July conference available, clocking in at nearly 800 pages. And Gwern examines the creative side of GPT-3 through poetry, humor, and other probing interactions. Click here to visit our website and explore the links mentioned in the episode. | |||
02 Dec 2022 | Battledrone Galactica | 00:36:15 | |
Andy and Dave discuss the latest in AI news and research, including the introduction of a lawsuit against Microsoft, GitHub, and OpenAI for allegedly violating copyright law by reproducing open-source code using AI. The Texas Attorney General files a lawsuit against Google alleging unlawful capture and use of biometric data of Texans without their consent. DARPA flies its final flight of ALIAS, an autonomous system outfitted on a UH-60 Black Hawk. And Rafael’s DRONE DOME counter-UAS system wins Pentagon certification. In research, Meta publishes work on Cicero, an AI agent that combines large language models with strategic reasoning to achieve human-level performance in Diplomacy. Meta researchers also publish work on ESMFold, an AI algorithm that predicts the structures of some 600 million proteins, “mostly unknown.” And Meta also releases (then takes down due to misuse) Galactica, a 120B parameter language model for scientific papers. In a similar but less turbulent vein, Explainpaper provides the ability to upload a paper, highlight confusing text, and ask queries to get explanations. CRC Press publishes online for free Data Science and Machine Learning: Mathematical and Statistical Methods, a thorough text at the upper-undergraduate or graduate level. And finally, the video of the week features Andrew Pickering, Professor Emeritus of sociology and philosophy at the University of Exeter, UK, with a video on the Cybernetic Brain, and the book of the same name, published in 2011. | |||
18 Dec 2020 | Will You, Won’t You Join the DANs? | 00:38:34 | |
In COVID-related AI news, Andy and Dave discuss a report from MIT that identifies gaps in coverage from COVID vaccines and uses machine learning to identify peptide additions to increase their efficacy. The GAO and the National Academy of Medicine release a combined report on AI in health care. Nature provides access to a large collection of open datasets related to COVID research and information. In non-COVID-related AI news, President Trump signs an executive order on the governmental development of AI, which includes a requirement for OMB to produce a roadmap by the end of May 2021. The FY21 National Defense Authorization Act boosts the JAIC’s role and performance, to include a funding stream for acquisition authority. The ML-Reproducibility Challenge 2020 kicks off, with submissions due by 29 January 2021. Researchers in China announce the creation of a photonic quantum computer that achieves quantum supremacy in conducting Gaussian boson sampling. The Bjarke Ingels Group unveils its plans to create an “AI city,” a tech-hub in Chongqing, China. And the Navy’s uncrewed Overlord test vessel completes a 4,700 nautical mile journey with minimal human assistance, to include passage through the Panama Canal. Researchers at Georgia State University demonstrate an approach to continual learning with deep artificial neurons (DANs), a neural network in which the neurons are themselves small deep neural networks. And researchers at Tencent AI Lab demonstrate an almost society-of-agents approach to creating a deep reinforcement learning algorithm that can play multi-player online battle arena (MOBA) games. Click here to visit our website and explore the links mentioned in the episode. | |||
29 Jan 2023 | Dr. GPT | 00:36:54 | |
Andy and Dave discuss the latest in AI news and research, starting with an AI education program that teaches US Air Force personnel the fundamentals of AI across three tracks: leaders, developers, and users. The US Equal Employment Opportunity Commission unveils its draft Strategic Enforcement Plan to target AI-based hiring bias. The US Department of State establishes the Office of the Special Envoy for Critical and Emerging Technology to bring “additional technology policy expertise, diplomatic leadership, and strategic direction to the Department’s approach to critical and emerging technologies.” Google calls in its founders, Larry Page and Sergey Brin, to help address the potential threat that ChatGPT and other AI technology pose to its search business. Researchers from Northwestern University publish research demonstrating that ChatGPT can write fake research paper abstracts that pass plagiarism checkers, and that human reviewers were only able to correctly identify 68% of the generated abstracts. Wolfram publishes an essay on a way to combine the computational powers of ChatGPT with Wolfram|Alpha. Check Point Research demonstrates how cybercriminals can use ChatGPT for nefarious exploits (including people without any experience in generating malicious tools). Researchers at Carnegie Mellon demonstrate that full-body tracking is now possible using only WiFi signals, with performance comparable to image-based approaches. Microsoft introduces VALL-E, a text-to-speech AI model that can mimic anyone’s voice from only three seconds of sample audio. The Cambridge Handbook of Responsible AI is the book of the week, with numerous essays on the philosophical, ethical, legal, and societal challenges that AI brings; Cambridge has made the book open-access online. And finally, Sam Bendett joins for an update on the latest AI and autonomy-related information from Russia as well as Ukraine. | |||
02 Oct 2020 | the social bot network | 00:19:29 | |
Andy and Dave kick off Season 4.0 of AI with AI with a discussion on social media bots. CNA colleagues Meg McBride and Kasey Stricklin join to discuss the results of their recent research efforts, in which they explored the national security implications of social media bots. They describe the types of activities that social media bots engage in (distributing, amplifying, distorting, hijacking, flooding, and fracturing), how these activities might evolve in the near future, the legal frameworks (or lack thereof), and the implications for US special operations forces and the broader national security community. Click here to visit our website and explore the links mentioned in the episode. | |||
04 Nov 2022 | Drawing Outside the Box | 00:33:19 | |
Andy and Dave discuss the latest in AI-related news and research, including a bill from the EU that will make it easier for people to sue AI companies for harm or damages caused by AI-related technologies. The US Office of S&T Policy releases a Blueprint for an AI Bill of Rights, which further lays the groundwork for potential legislation. The US signs the AI Training for the Acquisition Workforce Act into law, requiring federal acquisition officials to receive training on AI and requiring OMB to work with GSA to develop the curriculum. Several top robotics companies pledge not to add weapons to their technologies and to work actively to prevent their robots from being used for such purposes. Tesla reveals its Optimus robot at its AI Day. DARPA will hold a proposal session on 14 November for its AI Reinforcements effort. OpenAI makes DALL-E available to everybody, and Playground offers access to both DALL-E and Stable Diffusion. OpenAI also makes available the results of an NLP Community Metasurvey conducted in conjunction with New York University, providing AI researchers’ views on a variety of AI-related efforts and trends. And Nathan Benaich and Ian Hogarth release the State of AI Report 2022, which covers everything from research, politics, and safety to some specific predictions for 2023. In research, DeepMind uses AlphaZero to explore matrix multiplication and discovers a slightly faster algorithm implementation for 4x4 matrices. Two research efforts look at turning text into video: Meta discusses its Make-A-Video for turning text prompts into video, leveraging text-to-image generators like DALL-E, and Google Brain discusses its Imagen Video (along with Phenaki, which produces long videos from a sequence of text prompts). Foundations of Robotics is the open-access book of the week, from Damith Herath and David St-Onge. And the video of the week addresses AI and the Application of AI in Force Structure, with LtGen (ret) Groen, Dr. Sam Tangredi, and Mr. Brett Vaughan joining in on the discussion for a symposium at the US Naval Institute. | |||
25 Dec 2020 | The Final Sunbrawler | 00:44:25 | |
Andy and Dave discuss the recent announcement that the U.S. Department of Defense will adopt the Defense Innovation Board’s detailed principles for using AI. The European Commission releases its white paper on AI. The University of Buffalo’s AI Institute receives a grant to study gamers’ brains in order to build AI military robots. Microsoft announces Turing-NLG, a 17-billion parameter language model. MIT’s CSAIL demonstrates TextFooler, which makes synonym-like substitutions of words, the results of which can severely degrade the accuracy of NLP classifiers. Researchers from McAfee show simple tricks to fool Tesla’s Mobileye EyeQ3 camera. And Andy and Dave conclude with a discussion with Professor Josh Bongard, from the University of Vermont, on his recent “xenobots” research. | |||
17 Jul 2020 | A Tesseract to Follow | 00:39:40 | |
In COVID-related AI news, Purdue University has built a website that tracks global response to social distancing, by pulling live footage and images from over 30,000 cameras in 100 countries. Simon Fong, Nilanjan Dey, and Jyotismita Chaki have published Artificial Intelligence for Coronavirus Outbreak, which examines AI’s contribution to combating COVID-19. Researchers at Harvard and Boston Children’s Hospital use a “regular” Bayesian model to identify COVID-19 hotspots over 14 days before they occur. In non-COVID AI news, the acting director of the JAIC announces a shift to enabling joint warfighting operations. The DoD Inspector General releases an Audit of Governance and Protection of DoD AI Data and Technology, which reveals a variety of gaps and weaknesses in AI governance across DoD. Detroit Police Chief James Craig reveals that the police department’s experience with facial recognition technology resulted in misidentified people about 96% of the time. Over 1,400 mathematicians sign and deliver a letter to the American Mathematical Society, urging researchers to stop working on predictive-policing algorithms. DARPA awards the Meritorious Public Service Medal to Professor Hava Siegelmann for her creation of and research in the Lifelong Learning Machines Program. And Horace Barlow, one of the founders of modern visual neuroscience, passed away on 5 July at the age of 98. In research, Udrescu and Tegmark release AI Feynman 2.0, with unsupervised learning of equations of motion by viewing objects in raw and unlabeled video. Researchers at CSAIL, NVIDIA, and the University of Toronto create the Visual Causal Discovery Network, which learns to recognize underlying dependency structures for simulated fabrics, such as shirts, pants, and towels. In reports, the Montreal AI Ethics Institute publishes its State of AI Ethics. In the video of the week, Max Tegmark discusses the previously mentioned research on equations of motion, and also discusses progress in symbolic regression. And GanBreeder upgrades to ArtBreeder, which can create realistic-looking images from paintings, cartoons, or just about anything. Click here to visit our website and explore the links mentioned in the episode. | |||
28 Aug 2020 | Highway to the Danger Zone | 00:17:57 | |
With Season 3 drawing to a close, Andy and Dave focus this discussion entirely on the latest results from DARPA’s Air Combat Evolution (ACE) program. On 20 August, DARPA held a contest among eight competitors, pitting their AI agents in simulated combat against each other and against a human pilot (who used a VR system). Heron Systems won the event, beating out the other AI agents and not allowing the human pilot to attain a valid targeting solution. Andy and Dave discuss the results, the limitations, and the broader context of these results in light of other research and announcements. Click here to visit our website and explore the links mentioned in the episode. | |||
07 May 2021 | Mnemosyne That Before | 00:37:25 | |
Andy and Dave discuss the latest AI news and research, including a blog post from the Federal Trade Commission stating that businesses can and will be held accountable for the fairness of their algorithms. A bipartisan coalition of U.S. Senators has introduced the “Fourth Amendment Is Not for Sale Act,” which would ban law enforcement and intelligence agencies from buying data on people in the U.S. and about Americans abroad, if that data was obtained from a user’s account or device, or through deception, hacking, or other violations of privacy policies or terms of service. Bob Work releases his seven Principles for the Combat Employment of Weapon Systems with Autonomous Functionalities; these principles go into much greater detail about employment and provide a useful way to discuss issues surrounding autonomous weapons. The Congressional Research Service provides a short but dense overview of Lethal Autonomous Weapon Systems. The Ozcan Research Group and UCLA publish research that identifies handwritten numbers by using an optical network made from 3D-printed wafers that diffract polarized light. Project CETI aims to decode whale language using decades of recorded whale sounds. Researchers from the Centre for Neuroscience and the Indian Institute of Science explore the similarities and differences in how deep networks “see” compared to humans, examining 13 specific perceptual effects, such as mirror confusion. Researchers from Stanford and UCSD examine how children’s drawing and recognition of visual concepts change over time. On a similar topic, other research explores the relationship between episodic memory and generalization, finding that the relationship changes as children develop. The book of the week is an open-access paper from Stanford, which examines and provides tools for vector embedding of large sets of data, to include minimizing distortion. Ben Vickers and K. Allado-McDowell publish the Atlas of Anomalous AI, with reference to the Mnemosyne Atlas. Andy and Dave accidentally change the pronunciation of “neh-meh-zeen” and completely destroy the joke of this week’s podcast title. And take a look at the “fun” site of the week, which puts an eye on webcams, with the EyeCam, the webcam that looks like and mimics the movements of the human eye. Listeners Survey: https://bit.ly/3bqyiHk Click here to visit our website and explore the links mentioned in the episode. | |||
05 Feb 2021 | Sokoban, and Thanks for All the Fish! | 00:36:41 | |
Listener Survey Click here to visit our website and explore the links mentioned in the episode. | |||
13 Jan 2023 | EmerGPT | 00:36:05 | |
Andy and Dave discuss the latest in AI and autonomy news and research, including a report from the Stanford Institute for Human-Centered AI that assesses progress (or the lack thereof) in implementing the three pillars of America’s strategy for AI innovation. The Department of Energy is offering up a total of $33M for research in leveraging AI/ML for nuclear fusion. China’s Navy appears to have launched a naval mothership for aerial drones. China is also set to introduce regulation on “deepfakes,” requiring users to give consent and prohibiting the technology for fake news, among many other things. Xiamen University and other researchers publish a “multidisciplinary open peer review dataset” (MOPRD), aiming to provide ways to automate the peer review process. Google executives issue a “code red” for Google’s search business over the success of OpenAI’s ChatGPT. New York City schools have blocked access for students and teachers to ChatGPT unless it involves the study of the technology itself. Microsoft plans to launch a version of Bing that integrates ChatGPT into its answers. And the International Conference on Machine Learning bans authors from using AI tools like ChatGPT to write scientific papers (though it still allows the use of such systems to “polish” writing). In February, an AI from DoNotPay will likely be the first to represent a defendant in court, telling the defendant what to say and when. In research, the UCLA Departments of Psychology and Statistics demonstrate that analogical reasoning can emerge from large language models such as GPT-3, showing a strong capacity for abstract pattern induction. Research from Google Research, Stanford, UNC Chapel Hill, and DeepMind shows that certain abilities emerge only in language models beyond a certain number of parameters and dataset size. And finally, John H. Miller publishes Ex Machina through the Santa Fe Institute Press, examining the topic of Coevolving Machines and the Origins of the Social Universe. | |||
17 Dec 2021 | Is it alive or is it Xeno-rex? | 00:38:15 | |
Andy and Dave discuss the latest in AI news and research, starting with the US Department of Defense creating the new position of Chief Digital and AI Officer, subsuming the Joint AI Center, the Defense Digital Service, and the office of the Chief Data Officer [0:32]. Member states of UNESCO adopt the first-ever global agreement on the ethics of AI, which includes recommendations on protecting data, banning social scoring and mass surveillance, helping to monitor and evaluate, and protecting the environment [3:26]. The European Digital Rights network and 119 civil society organizations launch a collective call for an AI Act to articulate fundamental rights (for humans) regarding AI technology and research [6:02]. The Future of Life Institute releases Slaughterbots 2.0: “if human: kill()” ahead of the 3rd session in Geneva of the Group of Governmental Experts discussing lethal autonomous weapons systems [7:15]. In research, Xenobots 3.0, the living robots made from frog cells, demonstrate the ability to replicate themselves kinematically, at least for a couple of generations (extended to four generations by using an evolutionary algorithm to model ideal structures for replication) [12:23]. And researchers from DeepMind, Oxford, and Sydney demonstrate the ability to collaborate with machine learning algorithms to discover new results in mathematics (in knot theory and representation theory), though another researcher pushes back on the significance of the claims [17:57]. And finally, Dr. Mike Stumborg joins Dave and Andy to discuss research in Human-Machine Teaming, why it’s important, and where the research will be going [21:44]. | |||
23 Jul 2021 | Rebroadcast: The Robohattan Project | 00:35:45 | |
In COVID-related AI news, Andy and Dave discuss survey results from Algorithmia, which show that IT directors at large companies are looking to spend more money on AI/ML projects due to the pandemic. In regular AI news, the bipartisan Future of Defense Task Force releases its 2020 report, which includes the suggestion of using the Manhattan Project as a model to develop AI technologies. The US and UK sign an agreement to work together on trustworthy AI. Facebook AI releases Dynabench as a way to dynamically benchmark the performance of machine learning algorithms. Amsterdam and Helsinki launch AI registers that explain how they use algorithms, in an effort to increase transparency. In research, the Allen Institute for AI, the University of Washington, and the University of North Carolina publish research on X-LXMERT (learning cross-modality encoder representations from transformers), which trains a transformer model on both text and images to generate images from scratch given a description (e.g., a large clock tower in the middle of a town). Researchers at Swarthmore College and Los Alamos National Labs demonstrate the challenges that neural networks of various sizes have in learning Conway’s Game of Life. Maria Jeansson, Claudio Sanna, and Antoine Cully create a stunning visual infographic on “automated futures” technologies. And Joshua Epstein, a longtime expert in agent-based modeling, delivers the European Social Simulation Association award keynote speech. | |||
27 Aug 2021 | Beauty Is in the AI of the Perceiver | 00:35:44 | |
Andy and Dave discuss the latest in AI news, including an upgraded version of the model behind GitHub’s Copilot, OpenAI’s Codex, which can not only complete code but create it as well (based on natural language inputs from its users). The National Science Foundation is providing $220 million in grants to 11 new National AI Research Institutes (including two fully funded by the NSF). A new DARPA program, Shared-Experience Lifelong Learning (ShELL), seeks to explore how AI systems can share their experiences with each other. The Senate Committee on Homeland Security and Governmental Affairs introduces two AI-related bills: the AI Training Act (to establish a training program to educate the federal acquisition workforce) and the Deepfake Task Force Act (to task DHS with producing a coordinated plan on how a “digital content provenance” standard might assist with decreasing the spread of deepfakes). And the Inspectors General of the NSA and DoD partner to conduct a joint evaluation of NSA’s integration of AI into signals intelligence efforts. In research, DeepMind creates the Perceiver IO architecture, which works across a wide variety of input and output spaces, challenging the idea that different kinds of data need different neural network architectures. DeepMind also publishes PonderNet, which learns to adapt the amount of computation based on the complexity of the problem (rather than the size of the inputs). Research from MIT uses the corpus of US patents to predict the rate of technological improvement for all technologies. The European Parliamentary Research Service publishes a report on Innovative Technologies Shaping the 2040 Battlefield. Quanta Magazine publishes an interview with Melanie Mitchell, which includes a deeper discussion of her research on analogies. And Springer-Verlag makes available for free An Introduction to Ethics in Robotics and AI (by Christoph Bartneck, Christoph Lütge, Alan Wagner, and Sean Welsh). Follow the link below to visit our website and explore the links mentioned in the episode. | |||
04 Sep 2020 | Rebroadcast: What is AI? | 00:44:51 | |
CNA’s Center for Autonomy and Artificial Intelligence kicks off its first panel for 2019 with a live recording of AI with AI! Andy and Dave take a step back and look at the broader trends of research and announcements involving AI and machine learning, including: a summary of historical events and issues; the myths and hype, looking at expectations, buzzwords, and reality; hits and misses (and more hype!), and some of the many challenges of why AI is far from a panacea. Click here to visit our website and explore the links mentioned in the episode. | |||
30 Jul 2021 | The AI Is Smarter on the Other Side of the FENCE | 00:31:52 | |
Andy and Dave discuss the latest in AI news and research, including the new DARPA FENCE program (Fast Event-based Neuromorphic Camera and Electronics), which seeks to create event-based cameras that only focus on pixels that have changed in a scene. NIST proposes an approach for reducing the risk of bias in AI and invites the public to comment and help improve it. Researchers from the University of Colorado, Boulder use a machine learning model to learn physical properties in electronics building blocks (such as clumps of silicon and germanium atoms), as a way to predict how larger electronics components will work or fail. Researchers in South Korea create an artificial skin that mimics human tactile recognition and couple it with a deep learning algorithm to classify surface structures (with an accuracy of 99.1%). A survey from IE University shows, among other things, that 75% of people surveyed in China support replacing parliamentarians with AI, while in the US, 60% were opposed to the idea. A scientist with the Rijksmuseum uses machine learning to learn Rembrandt’s style and then recreate the missing pieces of the painter’s “The Night Watch.” Researchers at Harvard, San Diego, Fujitsu, and MIT present methodical research demonstrating how classification neural networks are susceptible to small 2D transformations and shifts, image crops, and changes in object colors. The GAO releases a report on Facial Recognition Technology, surveying 42 federal agencies, and finds a general lack of accountability in the use of the technology. The WHO releases a report on Ethics and Governance of AI for Health. In rebuttal to DeepMind’s “Reward is enough” paper, Roitblat and Byrnes pen separate essays on why “Reward is not enough.” An open-access book by Wang and Barabási looks at the Science of Science. Julia Schneider and Lena Ziyal join forces to provide a comical essay on AI: We Need to Talk, AI. And the National Security Commission on AI holds an all-day summit on Global Emerging Technology. Follow the link below to visit our website and explore the links mentioned in the episode. | |||
03 Sep 2021 | The WHO AI: I Can’t Explain (My Generation) | 00:39:49 | |
Andy and Dave discuss the latest in AI news, including an overview of Tesla's "AI Day," which, among other things, introduced the Dojo supercomputer specialized for ML, the HydraNet single deep-learning model architecture, and a "humanoid robot," the Tesla Bot. Researchers at Brown University introduce neurograins, grain-of-salt-sized wireless neural sensors, using nearly 50 of them to record neural activity in a rodent. The Associated Press reports on the flaws in ShotSpotter's AI gunfire detection system, including one case in which such evidence sent a man to jail for almost a year before a judge dismissed the case. The Department of the Navy releases its Science and Technology Strategy for Intelligent Autonomous Systems (publicly available), including an Execution Plan (available only through government channels). The National AI Research Resource Task Force extends its deadline for public comment in order to elicit more responses. The Group of Governmental Experts on Certain Conventional Weapons holds its first 2021 session for the discussion of lethal autonomous weapons systems; their agenda has moved on to promoting a common understanding and definition of LAWS. And Stanford's Center for Research on Foundation Models publishes a manifesto, On the Opportunities and Risks of Foundation Models, seeking to establish high-level principles on the massive models (such as GPT-3) upon which many other AI capabilities build. In research, the Georgia Institute of Technology, Cornell University, and IBM Research AI examine how the "who" in Explainable AI (e.g., people with or without a background in AI) shapes the perception of AI explanations. And Alvy Ray Smith pens the book of the week, with A Biography of the Pixel, examining the pixel as the "organizing principle of all pictures, from cave paintings to Toy Story." Follow the link below to visit our website and explore the links mentioned in the episode.
| |||
08 Apr 2022 | Bridge on the River NukkAI | 00:34:40 | |
Andy and Dave discuss the latest in AI news and research, including DoD's 2023 budget for research, development, test, and evaluation at $130B, around 9.5% higher than the previous year. DARPA announces the "In the Moment" (ITM) program, which aims to create rigorous and quantifiable algorithms for evaluating situations where objective ground truth is not available. The European Parliament's Special Committee on AI in a Digital Age (AIDA) adopts its final recommendations, though the report is still in draft; among them, that the EU should not regulate AI as a technology, but rather focus on risk. Other EP committees debated the proposal for an "AI Act" on 21 March, with speakers including Tegmark, Russell, and many others. The OECD AI Policy Observatory provides an interactive visual database of national AI policies, initiatives, and strategies. In research, a brain implant allows a fully paralyzed patient to communicate solely by "thought," using neurofeedback. Researchers from Collaborations Pharmaceuticals and King's College London discover that they could repurpose their AI drug-discovery system to instead generate 40,000 possible chemical weapons. And NukkAI holds a bridge competition and claims its NooK AI "beats eight world champions," though others take exception to the methods. And Kevin Pollpeter, from CNA's China Studies Program, joins to discuss the role (or absence) of Chinese technology in the Ukraine-Russia conflict. https://www.cna.org/news/AI-Podcast
| |||
04 Jun 2021 | Just the Tip of the Skyborg | 00:34:55 | |
Information about the AI Event Series mentioned in this episode: https://twitter.com/CNA_org/status/1400808135544213505?s=20 To RSVP, contact Larry Lewis at LewisL@cna.org. Andy and Dave discuss the latest in AI news, including the first flight of a drone equipped with the Air Force's Skyborg autonomy core system. The UK Office for AI publishes new guidance on automated decision-making in government, with the Ethics, Transparency and Accountability Framework for Automated Decision-Making. The International Red Cross calls for new international rules on how governments use autonomous weapons. Senators introduce two AI bills to improve the US's AI readiness, with the AI Capabilities and Transparency Act and the AI for the Military Act. Defense Secretary Lloyd Austin lays out his vision for the Department of Defense in his first major speech, stressing the importance of emerging technology and rapid increases in computing power. A report from the Allen Institute for AI shows that China is closing in on the US in AI research, projecting that it will become the leader in the top 1% of most-cited papers in 2023. In research, Ziming Liu and Max Tegmark introduce AI Poincaré, an algorithm that auto-discovers conserved quantities using trajectory data from unknown dynamical systems. Researchers enable a paralyzed man to "text with his thoughts," reaching 16 words per minute. The Stimson Center publishes A New Agenda for US Drone Policy and the Use of Lethal Force. The Onlife Manifesto: Being Human in a Hyperconnected Era, first published in 2015, is available for open access. And Cade Metz publishes Genius Makers, with stories of the pioneers behind AI. Click here to visit our website and explore the links mentioned in the episode. | |||
29 Jan 2021 | How Machines Judge Humans | 00:39:51 | |
Listener Survey. Click here to visit our website and explore the links mentioned in the episode. | |||
01 Oct 2021 | Chasing AIMe | 00:35:33 | |
Andy and Dave discuss the latest in AI news and research, including: [1:28] Researchers from several universities in biomedicine establish the AIMe registry, a community-driven reporting platform for providing information and standards of AI research in biomedicine. [4:15] Reuters publishes a report with insight into examples at Google, Microsoft, and IBM, where ethics reviews have curbed or canceled projects. [8:11] Researchers at the University of Tübingen create an AI method for significantly accelerating super-resolution microscopy, which makes heavy use of synthetic training data. [13:21] The US Navy establishes Task Force 59 in the Middle East, which will focus on the incorporation of unmanned and AI systems into naval operations. [15:44] The Department of Commerce establishes the National AI Advisory Committee, in accordance with the National AI Initiative Act of 2020. [19:02] Jess Whittlestone and Jack Clark publish a white paper on Why and How Governments Should Monitor AI Development, with predictions about the types of problems that will occur with inaction. [19:02] The Center for Security and Emerging Technology publishes a series of data snapshots related to AI research, drawn from over 105 million publications. [23:53] In research, Google Research (Brain Team) and the University of Montreal take a broad look at deep reinforcement learning research and find discrepancies between conclusions drawn from point estimates (fewer runs, due to high computational costs) and those drawn from more thorough statistical analysis, calling for a change in how performance in deep RL is evaluated (a small illustration follows this entry). [30:13] The Quebec AI Institute publishes a survey of post-hoc interpretability for neural natural language processing. [31:39] MIT Technology Review dedicates its Sep/Oct 2021 issue to The Mind, with articles all about the brain. [32:05] Katy Borner publishes Atlas of Forecasts: Modeling and Mapping Desirable Futures, showing how models, maps, and forecasts inform decision-making in education, science, technology, and policy-making. [33:16] And DeepMind, in collaboration with University College London, offers a comprehensive introduction to modern reinforcement learning, with 13 lectures (~1.5 hours each) on the topic. Follow the link below to visit our website and explore the links mentioned in the episode. | |||
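As a toy illustration of the point-estimate problem flagged in the deep RL item above, here is a hedged sketch, with invented scores, comparing a single "best run" number against a bootstrap confidence interval over a handful of seeds.

```python
# Illustrative only: why few-run point estimates in deep RL can mislead.
# The scores below are invented for this sketch, not taken from any paper.
import random

runs = [132.0, 95.0, 210.0, 88.0, 150.0]   # final scores from 5 seeds

def bootstrap_ci(scores, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap interval for the mean score."""
    means = sorted(
        sum(random.choices(scores, k=len(scores))) / len(scores)
        for _ in range(n_boot)
    )
    return means[int(n_boot * alpha / 2)], means[int(n_boot * (1 - alpha / 2))]

print("point estimate (best seed):", max(runs))    # looks impressive
print("95% CI of the mean:", bootstrap_ci(runs))   # tells a much wider story
```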
11 Sep 2020 | Some Pigsel | 00:41:44 | |
In COVID-related AI news, Andy and Dave discuss an effort from Google and Harvard to provide county-level forecasts on COVID-19 for hospitals and first responders. The National Library of Medicine, National Center for Biotechnology Information, and NIH provide COVID-19 literature analysis with interesting data analytic and visualization tools. In regular AI news, Elon Musk demonstrates the latest iteration of Neuralink, complete with pig implantees. The UK attempted a prediction system for Most Serious Violence, but found that it had serious flaws. Amazon awards a $500k "Alexa Prize" to Emory University students for their Emora chatbot, which scored a 3.81 average rating across categories. The Bipartisan Policy Center releases two reports on AI. And Russell Kirsch, inventor of the pixel and other groundbreaking technology, passed away on 11 August at the age of 91. In research, three papers tackle the problem of reconstructing 3D (in some cases, 4D) models of locations based on tourist photos taken from different vantage points and at different times: the NeRF (Neural Radiance Fields) model and the Plenoptic model (the core NeRF rendering formula appears after this entry). The Human Rights Watch releases a report summarizing Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control. Springer-Verlag releases yet another freebie with An Introduction to Ethics in Robotics and AI. And the Conference on Computer Vision & Pattern Recognition has posted the papers and videos from its June 2020 session. | |||
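For the curious, the volume-rendering sum at the heart of NeRF, reconstructed here from the original paper (our transcription, not quoted from the episode): the color of a camera ray accumulates the learned density and color along sampled points.

```latex
% NeRF volume rendering (discretized), per Mildenhall et al. 2020:
% \hat{C}(\mathbf{r}) is the rendered ray color; \sigma_i and \mathbf{c}_i are
% the network's density and color at sample i; \delta_i is the sample spacing.
\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \bigl(1 - e^{-\sigma_i \delta_i}\bigr)\,\mathbf{c}_i,
\qquad
T_i = \exp\Bigl(-\sum_{j=1}^{i-1} \sigma_j \delta_j\Bigr)
```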
03 Dec 2021 | Revenge of the AWS | 00:42:46 | |
Andy and Dave discuss the latest in AI news and research, [0:53] starting with OpenAI's announcement that it is making GPT-3 generally available through its API (though developers still require approval for production-scale applications). [3:09] For DARPA's Gremlins program, two Gremlin Air Vehicles "validated all autonomous formation flying positions and safety features," and one of the autonomous aircraft demonstrated airborne recovery to a C-130. [4:54] After three years, DARPA announces the winners of its Subterranean Robot Challenge, awarding prizes for teams operating in real-world and in virtual environments. [7:03] The Defense Information Systems Agency released its Strategic Plan for 2022 through 2024, which includes plans to employ AI capabilities for defensive cyber operations. [8:08] The Department of Defense announces a new cloud initiative to replace the failed JEDI contract, with invitations to Amazon, Microsoft, Google, and Oracle to bid. [11:52] In research, DeepMind, Google Brain, and World Chess Champion Vladimir Kramnik join forces to peer into the guts of AlphaZero, with initial results showing strong evidence for the existence of "human-understandable concepts of surprising complexity" within the neural network (a generic sketch of such concept probing follows this entry). [17:48] Andrea Roli, Johannes Jaeger, and Stu Kauffman pen a white paper on how organisms come to know the world, and from these observations derive fundamental limits on artificial general intelligence. [20:34] MIT Press makes available an elementary introduction to Bayesian Models of Perception and Action, by Wei Ji Ma, Konrad Paul Kording, and Daniel Goldreich. [23:40] And finally, Sam Bendett and Jeff Edmonds drop by for a chat on the latest and greatest in Russian AI and autonomy, including an update on recent military expos and other AI-related events happening in Russia. | |||
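The AlphaZero study probes network activations for human chess concepts. Below is a generic, hedged sketch of that family of techniques (linear probing) using synthetic stand-in data; it is not DeepMind's code or data.

```python
# Generic concept-probing sketch: fit a simple classifier from a layer's
# activations to a human-defined concept label. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
acts = rng.normal(size=(500, 64))          # stand-in layer activations
concept = (acts[:, 0] > 0).astype(int)     # stand-in concept labels

probe = LogisticRegression(max_iter=1000).fit(acts[:400], concept[:400])
# High held-out accuracy suggests the layer linearly encodes the concept.
print("probe accuracy:", probe.score(acts[400:], concept[400:]))
```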
22 Apr 2022 | The Amulet of NeRFdor | 00:38:10 | |
Andy and Dave discuss the latest in AI news and research, including a proposal from the Ada Lovelace Institute with 18 recommendations to strengthen the EU AI Act. [0:57] NVIDIA updates its Neural Radiance Fields to Instant NeRF, which can reconstruct a 3D scene from 2D images nearly 1,000 times faster than other implementations. [2:53] Nearly 100 Chinese-affiliated researchers publish a 200-page position paper, a "roadmap," about large-scale models. [4:13] In research, GoogleAI introduces PaLM (Pathways Language Model), at 540B parameters, which demonstrates the ability for logical inference and joke explanation. [7:09] OpenAI announces DALL-E 2, the successor to its previous image-from-text generator, which is no longer confused by mislabeled items; interestingly, it demonstrates greater resolution and diversity than GLIDE, OpenAI's similar technology, though human raters did not score it as highly, and DALL-E 2 still has challenges with "binding attributes." [11:32] A white paper from Gary Marcus looks at "Deep Learning Is Hitting a Wall: What would it take for AI to make real progress?", which includes an examination of a symbol-manipulation system that beat the best deep learning systems at playing the ASCII game NetHack. [16:10] Professor Chad Jenkins from the University of Michigan returns to discuss the latest developments, including the upcoming Department of Robotics and a robotics undergraduate degree. [19:10] | |||
25 Sep 2020 | CONSORTing with the GPT | 00:36:51 | |
In COVID-related AI news, another report raising alarm, this time in Nature Medicine, found "serious concerns" with 20,000 studies on AI systems in clinical trials, with many reporting only the best-case scenarios; in response, an international consortium has developed CONSORT-AI, reporting guidelines for clinical trials involving AI. In Nature, an open dataset provides a collection and overview of governmental interventions in response to COVID-19. In regular AI news, the DoD wraps up its 2020 AI Symposium. And the White House nominates USMC Maj. Gen. Groen to lead the JAIC. The latest report from NIST shows that facial recognition technology still struggles to identify people of color. Portland, Oregon passes the toughest ban on facial recognition technology in the US. And The Guardian uses GPT-3 to generate some hype. In research, OpenAI demonstrates the ability to apply transformer-based language models to the task of automated theorem proving. Research from Berkeley, Columbia, and Chicago proposes a new test to measure a text model's multitask accuracy, with 16,000 multiple-choice questions across 57 task areas. A report from AI Now takes a look at regulating biometrics, which includes tech such as facial recognition. And the 37th International Conference on Machine Learning makes its proceedings available online. Click here to visit our website and explore the links mentioned in the episode. | |||
25 Mar 2022 | A PIG GR_PH | 00:34:48 | |
Andy and Dave discuss the latest in AI news and research, including an announcement that Ukraine's defense ministry has begun to use Clearview AI's facial recognition technology, and that Clearview AI has not offered the technology to Russia [1:10]. In similar news, WIRED provides an overview of a topic mentioned in the previous podcast: using open-source information and facial recognition technology to identify Russian soldiers [2:46]. The Department of Defense announces its classified Joint All-Domain Command and Control (JADC2) implementation plan, and also provides an unclassified strategy [3:24]. Stanford University Human-Centered AI (HAI) releases its 2022 AI Index Report, with over 200 pages of information and trends related to AI [5:03]. In research, DeepMind, Oxford, and Athens University present Ithaca, a deep neural network for restoring ancient Greek texts that also provides geographic and chronological attribution; they designed the system to work *with* ancient historians, and the combination achieves a lower error rate (18.3%) than either alone [10:24]. NIST continues refining its taxonomy for identifying and managing bias in AI, to include systemic bias, human bias, and statistical/computational bias [13:51]. Springer-Verlag makes Metalearning, by Pavel Brazdil, Jan N. van Rijn, Carlos Soares, and Joaquin Vanschoren, available for download; the book provides a comprehensive introduction to metalearning and automated machine learning [15:28]. And finally, CNA's Dr. Anya Fink joins Andy and Dave for a discussion about the uses of disinformation in the Ukraine-Russia conflict [17:15]. https://www.cna.org/CAAI/audio-video | |||
20 Aug 2021 | AI Today, Tomorrow, & Forever | 00:42:45 | |
Andy and Dave welcome the hosts of the weekly podcast AI Today, Kathleen Walch and Ronald Schmelzer. On AI Today, Kathleen and Ron discuss topics related to how AI is making impacts around the globe, with a focus on having discussions with industry and business leaders to get their thoughts and perspectives on AI technologies, applications, and implementation challenges. Ron and Kathleen also co-founded Cognilytica, an AI research, education, and advisory firm. The four podcast hosts discuss a variety of topics, including the origins of the AI Today podcast, AI trends in industry and business, AI winters, and the importance of education. Related links: CPMAI Methodology: https://www.cognilytica.com/cpmai/; Cognilytica website: https://www.cognilytica.com/; AI in Government community: https://www.aiingovernment.com/; on Twitter, Cognilytica is @Cognilytica, Kathleen Walch is @Kath0134, and Ron Schmelzer is @rschmelzer. | |||
10 Sep 2021 | Pet Shop Bots: BEHAVIOR | 00:35:03 | |
Andy and Dave discuss the latest in AI news and research, including: 0:46: The GAO releases a more extensive report on US Federal agency use of facial recognition technology, including for what purposes. 3:24: The US Department of Homeland Security Science and Technology Directorate publishes its AI and ML Strategic Plan, with an implementation plan to follow. 5:39: The Ada Lovelace Institute, AI Now Institute, and Open Government Partnership publish a global study on Algorithmic Accountability for the Public Sector, which focuses on accountability mechanisms stemming from laws and policy. 9:04: Research from North Carolina State University shows that the benefits of autonomous vehicles will outweigh the risks, with proper regulation. 13:18: Research Section Introduction 14:24: Researchers at the Allen Institute for AI and the University of Washington demonstrate that artificial agents can learn generalizable visual representations during interactive gameplay, embodied within an environment (AI2-THOR); agents demonstrated knowledge of the principles of containment, object permanence, and concepts of free space. 19:37: Researchers at Stanford University introduce BEHAVIOR (Benchmark for Everyday Household Activities in Virtual, Interactive, and ecOlogical enviRonments), which establishes benchmarks for simulation of 100 activities that humans often perform at home. 24:02: A survey examines the dynamics of research communities and AI benchmarks, suggesting that hybrid, multi-institution, and persevering communities are the ones more likely to improve state-of-the-art performance, among other things. 28:54: Springer-Verlag makes Representation Learning for Natural Language Processing available online. 32:09: Terry Sejnowski and Stephen Wolfram publish a three-hour discussion on AI and other topics. Follow the link below to visit our website and explore the links mentioned in the episode. | |||
16 Apr 2021 | Xenomania | 00:37:19 | |
Andy and Dave discuss the latest in AI news, including the resignation of Samy Bengio from Google Brain, which fired ethicists Gebru in December and Mitchell in February. The Joint AI Center releases its request for proposals on Data Readiness for AI Development (DRAID). DARPA prepares for the quantum age with a program for Quantum Computer Benchmarking. And a separate DARPA program seeks to enable fully homomorphic encryption with its Data Protection in Virtual Environments (DPRIVE) program. A poll from Hyland on digital distrust shows that Americans think that, over the next decade, AI has the most potential to cause harm. Amazon introduces the next level of "biometric consent" required for its delivery drivers, which includes an always-on camera observing the driver and gathering other data; drivers will lose their jobs if they do not consent to the monitoring. And Josh Bongard of the University of Vermont and Michael Levin of Tufts University, along with other researchers from Harvard's Wyss Institute, join together to form the Institute for Computationally Designed Organisms (ICDO), which will focus on "AI-driven designs of new life forms." In research, Bongard's team publishes the latest iteration of its mobile living machines, Xenobots II, using frog cells to create life forms capable of motion, memory, and manipulation of the world around them. Researchers from the universities of Copenhagen, York, and Shanghai use neural cellular automata to grow 3D objects and functional machines within the Minecraft world (a tiny sketch of the neural-cellular-automaton idea follows this entry). And OpenAI Robotics demonstrates the ability for a robotic arm to solve manipulation tasks, including tasks with previously unseen goals and objects, with asymmetric self-play. And the Book / Fun Site of the Week comes from the Special Interest Group on Harry Q. Bovik (SIGBOVIK), which presents "April Fools" papers: descriptions of truly absurd, but fascinating, research. Listeners Survey: https://bit.ly/3bqyiHk Click here to visit our website and explore the links mentioned in the episode. | |||
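A deliberately tiny, hedged sketch of the neural-cellular-automaton mechanic behind the Minecraft item, in 2D. The "network" here is a single random weight vector; the actual work trains the update rule and operates on 3D block states.

```python
# Toy neural cellular automaton: every cell updates from its 3x3 neighborhood
# through one tiny shared rule. Untrained and 2D, purely to show the mechanic;
# the paper trains the rule so that structures grow from a seed.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=9)            # shared "network" weights

def step(grid: np.ndarray) -> np.ndarray:
    padded = np.pad(grid, 1)
    out = np.empty_like(grid)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            nbhd = padded[i:i + 3, j:j + 3].ravel()
            out[i, j] = np.tanh(nbhd @ W)    # same rule at every cell
    return out

grid = np.zeros((16, 16)); grid[8, 8] = 1.0  # a single seed cell
for _ in range(20):
    grid = step(grid)                        # a pattern spreads outward
```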
12 Mar 2021 | Schrödinger’s Slime Mold | 00:35:11 | |
Andy and Dave discuss the latest AI news, which includes lots of new reports, starting with the release of the final report of the National Security Commission on AI, over 750 pages outlining steps the U.S. must take to use AI responsibly for national security and defense. The Stanford University Institute for Human-Centered AI (HAI) releases the fourth and most comprehensive edition of its AI Index, which covers global R&D, technical performance, education, and other topics in AI. Peter Layton at the Defence Research Centre in Australia publishes Fighting AI Battles: Operational Concepts for Future AI-Enabled Wars, with a look at war at sea, on land, and in the air. Drone Wars in the UK and the Centre for War Studies in Denmark release Meaning-Less Human Control: Lessons from Air Defence Systems on Meaningful Human Control for the Debate of AWS, examining automation and autonomy in 28 air defense systems used around the world. And the European Union Agency for Cybersecurity publishes a report on Cybersecurity Challenges in the Uptake of AI in Autonomous Driving. In research, scientists demonstrate that an organism without a nervous system, slime mold, can encode memory of its environment through the hierarchy of its own tube-diameter structure. And the Fun Site of the Week uses GPT-3 to generate classic "title/description/question" thought experiments. Listeners Survey: https://bit.ly/3bqyiHk Click here to visit our website and explore the links mentioned in the episode. | |||
29 Jul 2022 | AI-chemy 2: This Time It's Personal (Part 2) | 00:18:09 | |
Dr. Anya Fink from CNA's Russia Studies program joins the podcast to discuss the impacts of global sanctions on Russia's technology and AI sector. CNA report: A Technological Divorce: The Impact of Sanctions and the End of Cooperation on Russia's Technology and AI Sector.
| |||
21 Aug 2020 | Elementary, Dear GPT | 00:44:03 | |
In COVID-related AI news, Andy and Dave discuss a survey from Amazon Web Services that examines the current status of Internet of Things applications related to COVID-19, including scenarios that might help to reduce the severity of an outbreak. MIT publishes a combinatorial machine learning method to maximize the coverage of a COVID-19 vaccine. In "quick takes" on research, Andy and Dave discuss research from Microsoft, the University of Washington, and UC Irvine, which provides a checklist to help identify bugs in natural language processing algorithms. A paper from Element AI and Stanford examines whether benchmarks for natural language systems actually correspond to how we use those systems. The University of Illinois at Urbana-Champaign, Columbia University, and the US Army Research Lab introduce GAIA, which processes unstructured and heterogeneous multimedia data, creates a coherent knowledge base, and allows for text queries. Research published in Nature Neuroscience examines the brain connectivity of 130 mammalian species and finds that the efficiency of information transfer through the brain does not depend on the size or structure of any specific brain. And finally, Andy and Dave spend some time talking about the broader implications of GPT-3, the experiments that people are conducting with it, and how it is not an AGI. Dave concludes with an analogy from Star Trek: The Next Generation, which he gets mostly correct, though he misattributes Geordi La Forge's action to Dr. Pulaski. If only he had a positronic matrix! Click here to visit our website and explore the links mentioned in the episode. | |||
20 May 2022 | El Gato Altinteligento | 00:42:55 | |
Andy and Dave discuss the latest in AI news and research, starting with the European Parliament adopting the final recommendations of the Special Committee on AI in a Digital Age (AIDA), finding that the EU should not always regulate AI as a technology, but use intervention proportionate to the type of risk, among other recommendations [1:31]. Synchron enrolls the first patient in the U.S. clinical trial of its brain-computer interface, Stentrode, which does not require drilling into the skull or open brain surgery; it is, at present, the only company to receive FDA approval to conduct clinical trials of a permanently implanted BCI [4:14]. MetaAI releases its 175B-parameter transformer for open use, Open Pre-trained Transformers (OPT), to include the codebase used to train and deploy the model, and their logbook of issues and challenges [6:25]. In research, DeepMind introduces Gato, a "single generalist agent," which, with a single set of weights, is able to complete over 600 tasks, including chatting, playing Atari games, captioning images, and stacking blocks with a robotic arm; one DeepMind scientist used the results to claim that "the game is over" and it's all about scale now, to which others counter that using massive amounts of data as a substitute for intelligence is perhaps "alt intelligence" [8:48]. In the opinion essay of the week, Steve Johnson pens "AI is mastering language, should we trust what it says?" [18:07]. Daedalus's Spring 2022 issue focuses on AI and Society, with nearly 400 pages and over 25 essays on a variety of AI-related topics [19:06]. And finally, Professor Ido Kanter from Bar-Ilan University joins to discuss his latest neuroscience research, which suggests a new model for how neurons learn, using dendritic branches [20:48]. RSVP for AI and National Security: Gender, Race, and Algorithms at 12:00 pm on June 7. Apply: Sr. Research Specialist (Artificial Intelligence Research) - ESDA Division | |||
22 Jan 2021 | The Persistence of Memor-E | 00:43:51 | |
In COVID-related AI news, Andy and Dave discuss an editorial in The Lancet Digital Health, which examines whether preliminary models add clinical value to health-care systems. In regular AI news, an Italian court rules that the European food delivery app Deliveroo used a "discriminatory" algorithm, potentially opening the door for liability even with unintentional algorithmic discrimination. A study from Google, OpenAI, Apple, Stanford, Berkeley, and Northeastern shows that large language models trained on public data can expose personal information, by making it possible to extract specific pieces of training data (a rough sketch of the extraction idea follows this entry). In research, OpenAI combines DALL-E, a GPT-3-style algorithm that generates images from text, with CLIP, an algorithm that matches images against text descriptions, to create an extremely powerful and flexible generative model, capable of generating high-quality images based on text instructions. The report of the week comes from the Connections 2020 Conference proceedings, which examined Representing AI in Wargames. The survey of the week looks at neural network interpretability. Kevin Murphy provides the book of the week, with Probabilistic Machine Learning: An Introduction. And Geoff Hinton speaks on Eye on AI with Craig S. Smith about his latest research and the future of AI. Click here to visit our website and explore the links mentioned in the episode. | |||
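On the training-data-extraction result: a hedged outline of the general recipe as we understand it from that line of work. The model and tokenizer here are assumed Hugging Face-style objects, and the ranking heuristic is deliberately simplified.

```python
# Rough sketch of training-data extraction: sample many generations from a
# language model, then rank them by model confidence (low perplexity), since
# unusually "easy" text can be a hint of memorization. Simplified.
import math
import torch

def perplexity(model, tokenizer, text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss     # mean token cross-entropy
    return math.exp(loss.item())

# Hypothetical usage (generate() is a stand-in for a sampling routine):
# samples = [generate(model) for _ in range(10_000)]
# suspects = sorted(samples, key=lambda s: perplexity(model, tokenizer, s))
# The lowest-perplexity samples are candidates for memorized training data.
```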
10 Feb 2023 | Up, Up, and Autonomy! | 00:37:19 | |
Andy and Dave discuss the latest in AI news and research, including the update of the Department of Defense Directive 3000.09 on Autonomy in Weapon Systems. NIST releases the first version of its AI Risk Management Framework. The National AI Research Resource (NAIRR) Task Force publishes its final report, in which it details its plans for a national research infrastructure, as well as its request for $2.6 billion over 6 years to fund the initiatives. DARPA announces the Autonomous Multi-domain Adaptive Swarms-of-Swarms (AMASS) program, a much larger effort (aiming for thousands of autonomous entities) than its previous OFFSET program. And finally, from the Naval Postgraduate School’s Energy Academic Group, Kristen Fletcher and Marina Lesse join to discuss their research and efforts in autonomous systems and maritime law and policy, to include a discussion about the DoDD 3000.09 update and the high-altitude balloon incident. https://www.cna.org/our-media/podcasts/ai-with-ai
| |||
23 Sep 2022 | Keep Watching the AIs! | 00:36:25 | |
Andy and Dave discuss the latest in AI news and research, starting with a publication from the UK's National Cyber Security Centre, providing a set of security principles for developers implementing machine learning models. Gartner publishes the 2022 update to its "AI Hype Cycle," which qualitatively plots the position of various AI efforts along the "hype cycle." PromptBase opens its doors, promising to provide users with better "prompts" for text-to-image generators (such as DALL-E) to generate "optimal images." Researchers explore the properties of vanadium dioxide (VO2), which demonstrates volatile memory-like behavior under certain conditions. MetaAI announces a nascent ability to decode speech from a person's brain activity, without surgery (using EEG and MEG). Unitree Robotics, a Chinese tech company, is producing its Aliengo robotic dog, which can carry up to 11 pounds and perform other actions. Researchers at the University of Geneva demonstrate that transformers can build world models with fewer samples, for example, generating "pixel perfect" predictions of Pong after 120 games of training. DeepMind demonstrates the ability to teach a team of agents to play soccer by controlling at the level of joint torques and combining that low-level control with longer-term goal-directed behavior; the agents demonstrate jostling for the ball and other behaviors. Researchers at the University of Illinois Urbana-Champaign and MIT demonstrate a Composable Diffusion model to tweak and improve the output of text-to-image transformers (the composition rule is sketched after this entry). Google Research publishes results on AudioLM, which generates "natural and coherent continuations" given short prompts. And Michael Cohen, Marcus Hutter, and Michael Osborne publish a paper in AI Magazine, arguing that dire predictions about the threat of advanced AI may not have gone far enough in their warnings, offering a series of assumptions on which their arguments depend.
| |||
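For the Composable Diffusion item, the core composition rule as we understand it from that line of work (our notation, reconstructed rather than quoted): a conjunction of conditions is formed by summing weighted differences of conditional and unconditional noise estimates.

```latex
% Composable-diffusion-style conjunction (reconstructed, hedged):
% \epsilon_\theta(x_t) is the unconditional noise estimate and
% \epsilon_\theta(x_t \mid c_i) the estimate conditioned on concept c_i.
\hat{\epsilon}(x_t) = \epsilon_\theta(x_t)
  + \sum_{i=1}^{n} w_i \bigl[\epsilon_\theta(x_t \mid c_i) - \epsilon_\theta(x_t)\bigr]
```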
09 Apr 2021 | Guise of the Machines | 00:36:28 | |
Andy and Dave discuss the latest in AI news, including a report that systematically examined 62 studies on COVID-19 ML methods (from a pool of 2,200+ studies) and found that none of the models were of potential clinical use due to methodological flaws or underlying biases. MIT and Amazon identify pervasive label errors in popular ML datasets (such as MNIST, CIFAR, and ImageNet) and demonstrate that models may learn systematic patterns of label error in order to improve their accuracy. DARPA's Air Combat Evolution program upgrades its virtual program to include new weapons systems and multiple aircraft, with live Phase 2 tests on schedule for later in 2021. Researchers at the University of Waterloo and Northeastern University publish research working toward self-walking robotic exoskeletons. British researchers add a buccinator (cheek) muscle to robotic busts to better synchronize speech and mouth movements. Russian Promobot is developing hyper-realistic skin for humanoid robots. And Anderson Cooper takes a tour of Boston Dynamics. In research, Leverhulme, Cambridge, Imperial College London, and DeepMind UK publish research on direct human-AI comparison in the Animal-AI Environment, using human children ages 6-10 and animal-AI agents across 10 levels of task groupings. Josh Bongard and Michael Levin publish Living Things Are Not (20th Century) Machines, a thought piece on updating how we think of machines and what they *could* be. Professors Jason Jones and Steven Skiena publish a running AI Dashboard on Public Opinion of AI. The Australian Department of Defence publishes A Method for Ethical AI in Defence. Raghavendra Gadagkar publishes Experiments in Animal Behavior. And Peter Singer and August Cole publish An Eye for a Storm, envisioning a future of professional military education for the Australian Defence Force. Listeners Survey: https://bit.ly/3bqyiHk Click here to visit our website and explore the links mentioned in the episode. | |||
08 Jan 2021 | Pork Rewinds | 00:39:20 | |
Just in time for the holidays, Andy and Dave look back at some of the more memorable AI-related stories from 2020. They begin with the passing of mathematician John Conway, creator of The Game of Life, who died in April at 82 from complications due to COVID-19; Andy and Dave will talk more about The Game of Life in next week's podcast. With an example of how not to use AI, in July, the International Baccalaureate Educational Foundation turned to machine learning algorithms to predict student grades, due to COVID-related cancelations of actual testing, much to the frustration of numerous students and parents. Also in July, over 1,400 mathematicians signed and delivered a letter to the American Mathematical Society, urging researchers to stop working on predictive-policing algorithms. In September, Elon Musk demonstrated the latest iteration of Neuralink, complete with pig implantees. And finally, Andy and Dave examine the GPT family of algorithms with a discussion on GPT-2 and GPT-3. Click here to visit our website and explore the links mentioned in the episode. | |||
07 Oct 2022 | Rebroadcast: AI-chemy 2: This Time It's Personal (Part 2) | 00:18:09 | |
Dr. Anya Fink from CNA’s Russia Studies program joins the podcast to discuss the impacts of global sanctions on Russia’s technology and AI sector. | |||
09 Sep 2022 | NOMARS Attacks! | 00:38:48 | |
Andy and Dave discuss the latest in AI news and research, starting with DARPA moving into Phase 2 of its No Manning Required Ship (NOMARS) program, having selected Serco Inc for its Defiant ship design. The UK releases a roadmap on automated vehicles, Connected & Automated Mobility 2025, and describes new legislation that will place liability for the actions of self-driving vehicles onto manufacturers, not the occupants. The DOD's Chief Digital and AI Office is preparing to roll out Tradewinds, an open solutions marketplace geared toward identifying new technologies and capabilities. The US bans NVIDIA and AMD from selling or exporting certain types of GPUs (mostly for high-end servers) to China and Russia. A report in Nature examines the "reproducibility crisis" involving machine learning in scientific articles, identifying eight types of "data leaks" in research that give cause for concern. Google introduces a new AI image noise-reduction tool that greatly advances the state of the art for low-light, low-resolution images, using RawNeRF, which applies the earlier neural radiance fields approach to raw image data. Hakwan Lau and Oxford University Press make available for free In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience. And Sam Bendett joins Andy and Dave to discuss the latest from Russia's Army 2022 Expo and other recent developments around the globe.
| |||
30 Apr 2021 | Xen and the Art of Motorcell Maintenance | 00:40:14 | |
Andy and Dave discuss the latest in AI news, including the European Commission's proposal for the regulation of AI. A report in Nature Medicine examines the limitations of the evaluation process for medical devices using AI that the FDA approves. Researchers at MIT translate spider webs into sounds to explore how spiders might sense their world, and they use machine learning to classify the sounds by spider activity. An NIH panel releases its preliminary ethics rules on making brain-like structures such as neural organoids and neural transplants, and finds little evidence that these structures experience humanlike consciousness or pain. And Andy and Dave spend some time with xenobot researchers Sam Kriegman and Doug Blackiston, who discuss the motivations and findings behind their latest generation of xenobots, synthetic living machines that they have been designing and building in their labs. Listeners Survey: https://bit.ly/3bqyiHk Click here to visit our website and explore the links mentioned in the episode.
| |||
21 May 2021 | Doggone | 00:39:35 | |
Andy and Dave discuss the latest in AI news, including a new AI website from the White House at AI.gov, which provides a variety of resources on recent reports, news, key US agencies, and other information. The U.S. Navy destroys a surface vessel using a swarm of drones (in combination with other weapons) for the first time. The NYPD announces the retirement of its Boston Dynamics robot dog (Digidog) due to negative public reaction at its use. The French Defence Ministry releases a report on the Integration of Autonomy into Lethal Weapon Systems. A paper in Digital Medicine examines the use of decision-aids in clinical settings. Matt Ginsberg (along with the Berkeley NLP Group) develops Dr. Fill, an algorithm that won this year’s American Crossword Puzzle Tournament, with three total errors. And the University of Glasgow publishes research on using return echoes over time to render a 3D image of an environment. Researchers use MRI and machine learning to identify brain activation configurations for 12 different cognitive tasks. Facebook AI Research, Inria, and Sorbonne publish research on emerging properties of self-supervised vision transformers, which includes the ability to segment objects with no supervision or segmentation-targeted objectives. Florian Jaton publishes The Constitution of Algorithms: Ground-Truthing, Programming, Formulation, which examines how algorithms come to be. Melanie Mitchell publishes a paper on Why AI Is Harder Than We Think. And UneeQ creates a Digital Einstein for people to interact with. Click here to visit our website and explore the links mentioned in the episode. | |||
19 Nov 2021 | Face/Off | 00:40:13 | |
Andy and Dave discuss the latest in AI news and research, including the Defense Innovation Unit releasing Responsible AI Guidelines in Practice, which seeks to ensure tech contractors adhere to the Department of Defense's existing ethical principles for AI [0:53]. "Meta" (the Facebook re-brand) announces that it will end its use of facial recognition software and delete data on more than a billion people, though it will retain the technology for other products in its metaverse [3:12]. Australia's information and privacy commissioners issue an order to Clearview AI to stop collecting facial biometrics from Australian citizens and to destroy all existing data [5:16]. The U.S. Marine Corps releases its Talent Management 2030 report, which describes the need for more cognitively mature Marines and seeks to "leverage the power of AI" and to be "at the vanguard of service efforts to operationalize AI" [7:39]. DOD releases its 2021 Report on Military and Security Developments Involving the People's Republic of China, which describes China's use of AI technology in influence operations, the digital silk road, military capabilities, and more [10:46]. A competition using unrestricted adversarial examples at the 2021 Conference on Computer Vision and Pattern Recognition includes as co-authors several members of the Army Engineering University of the People's Liberation Army [11:43]. Research from Okinawa and Australia demonstrates that deep reinforcement learning can produce accurate quantum control, even with noisy measurements, using a small particle moving in a double-well potential [14:31]. MIT Press makes available a nearly 700-page book, Algorithms for Decision Making, organized around four sources of uncertainty (outcome, model, state, and interaction) [18:01]. And Dr. Amanda Kerrigan and Kevin Pollpeter join Andy and Dave to discuss their latest research on what China is doing with AI technology, including a bi-weekly newsletter on the topic and a preliminary analysis of China's view of Intelligent Warfare [20:06]. https://www.cna.org/CAAI/audio-video
| |||
06 Nov 2020 | Thunderbots | 00:27:14 | |
Sam Bendett joins Andy and Dave to discuss the latest developments and happenings in Russia's research into artificial intelligence and autonomy capabilities. They discuss Russia's national strategy and the challenges that have occurred in programmatic implementation due to COVID impacts. They also discuss the status of higher education in Russia and the standing of various institutions, as well as their relationship and interaction with the global community of researchers. They cover a variety of other trends and topics, including the Army 2020 convention and some of the announcements made during that event; and they discuss CNA's Russia Program and its on-going series of newsletters dedicated to summarizing the latest in Russian advances and research in AI. Click here to visit our website and explore the links mentioned in the episode. | |||
06 Aug 2021 | Rebroadcast: the social bot network | 00:19:46 | |
Andy and Dave kick off Season 4.0 of AI with AI with a discussion on social media bots. CNA colleagues Meg McBride and Kasey Stricklin join to discuss the results of their recent research efforts, in which they explored the national security implications of social media bots. They describe the types of activities that social media bots engage in (distributing, amplifying, distorting, hijacking, flooding, and fracturing), how these activities might evolve in the near future, the legal frameworks (or lack thereof), and the implications for US special operations forces and the broader national security community. Follow the link below to visit our website and explore the links mentioned in the episode. | |||
22 Oct 2021 | K9mm | 00:35:19 | |
Welcome to Season 5.0 of AI with AI! Andy and Dave discuss the latest in AI news and research, including: The White House calls for an AI "bill of rights" and issues a request for information inviting public comment. Nathan Benaich and Ian Hogarth publish the fourth annual edition of their State of AI Report. [1:50] OpenAI uses reinforcement learning from human feedback and recursive task decomposition to improve algorithms' abilities to summarize books (a sketch of the decomposition idea follows this entry). [3:14] IEEE Spectrum publishes a paper that examines the diminishing returns of deep learning, questioning the long-term viability of the technology. [5:12] In related news, Nvidia and Microsoft release a 530-billion-parameter language model, the Megatron-Turing Natural Language Generation model (MT-NLG). [6:54] DeepMind demonstrates the use of a GAN in improving high-resolution precipitation "nowcasting." [10:05] Researchers from Waterloo, Guelph, and IIT Madras publish research on deep learning that can identify early warning signals of tipping points. [11:54] Military robot maker Ghost Robotics creates a robot dog with a rifle, the Special Purpose Unmanned Rifle, or SPUR. [14:25] And Dr. Larry Lewis joins Dave and Andy to discuss the latest report from CNA on Leveraging AI to Mitigate Civilian Harm, which describes the causes of civilian harm in military operations, identifies how AI could protect civilians from harm, and identifies ways to lessen the infliction of suffering, injury, and destruction overall. [16:36] Follow the link below to visit our website and explore the links mentioned in the episode. | |||
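On the book-summarization item, a hedged sketch of the recursive-decomposition half only (the human-feedback reward modeling is omitted); `summarize` is a stand-in for a learned model call, with truncation as a placeholder.

```python
# Recursive task decomposition for book summarization, in miniature:
# summarize fixed-size chunks, then summarize the concatenated summaries,
# repeating until the text fits in one chunk. Illustrative stand-ins only.
def summarize(text: str) -> str:
    return text[:200]                        # placeholder for a model call

def summarize_book(book: str, chunk: int = 2000) -> str:
    parts = [book[i:i + chunk] for i in range(0, len(book), chunk)]
    summaries = " ".join(summarize(p) for p in parts)
    if len(summaries) <= chunk:
        return summarize(summaries)          # final, top-level summary
    return summarize_book(summaries, chunk)  # recurse on the summaries

print(summarize_book("lorem ipsum " * 5000))
```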
14 May 2021 | Superhumans | 00:15:06 | |
Andy's out this week, but Dave recently had a chance to do a series of interviews on a paper that he wrote, Superhumans: Implications of Genetic Engineering and Human-Centered Bioengineering. So this week's podcast features a rebroadcast of the interview that Dave did on Titillating Sports. A big thanks to Rick Tittle and Darren Peck from the Sports Byline USA Network for conducting the interview and for allowing us to share it. Rick and Dave discuss the latest and greatest in genetic engineering and human-centered technology and talk about some of the near-term and far-term implications. | |||
08 Oct 2021 | Where the Dan Board Are | 00:38:47 | |
Andy and Dave discuss the latest in AI news and research, including: the UK government releases its National AI Strategy, a 10-year plan to make the country a global AI superpower [1:28]. Stanford University's One Hundred Year Study on AI Project releases its second report, Gathering Strength, Gathering Storms, assessing developments in AI between 2016 and 2021 around fourteen framing questions. [4:57] The UN High Commissioner for Human Rights calls for a moratorium on the sale and use of AI systems that pose serious risks to human rights until adequate safeguards are put into place. [10:07] Jack Poulson at Tech Inquiry maps out US government use of AI-based weapons and surveillance, using publicly available information. [12:07] Researchers at Hebrew University examine the potential of single cortical neurons as deep artificial neural networks, finding that a deep neural network with 5-8 layers is necessary to approximate one. [16:10] Researchers at Stanford review the different architectures of neuronal circuits in the human brain, identifying different circuit motifs. [20:02] Other research at Stanford shows the ability to image and track moving non-line-of-sight objects using a single optical path (shining a laser through a keyhole). [22:05] And researchers at MIT, Nvidia, and Technion demonstrate that a neural network can identify the number and activity of people in a room, solely by examining a blank wall in the room. [26:33] Nils Thuerey's research group publishes Physics-Based Deep Learning, introducing physical models into deep learning to reconcile data-centered viewpoints with physical simulations. [30:34] Ori Cohen compiles the Machine and Deep Learning Compendium, an open resource (GitBook) on over 500 topics with summaries, links, and articles. [32:21] The Allen Institute for AI releases a web tool that converts PDF papers into HTML for more rapid web publishing of scientific papers. [33:20] And the Museum of Wild and Newfangled Art: This Show is Curated by a Machine invites viewers to ponder why they think an AI chose the works within. [34:43] | |||
24 Sep 2021 | AI Today Podcast: Interview with Andy Ilachinski and David Broyles, hosts of the AI with AI podcast | 01:00:30 | |
Andy and Dave were recently interviewed on the AI Today podcast.
| |||
14 Jan 2022 | Three Amecas! | 00:39:59 | |
Andy and Dave discuss the latest in AI news and research, including the signing of the 2022 National Defense Authorization Act, which contains a number of provisions related to AI and emerging technology [0:57]. The Federal Trade Commission wants to tackle data privacy concerns and algorithmic discrimination and is considering a wide range of options to do so, including new rules and guidelines [4:50]. The European Commission proposes a set of measures to regulate digital labor platforms in the EU. Engineered Arts unveils Ameca, a gray-faced humanoid robot with "natural-looking" expressions and body movements [7:07]. And DARPA launches its AMIGOS project, aimed at automatically converting training manuals and videos into augmented reality environments [13:16]. In research, scientists at Bar-Ilan University in Israel upend conventional wisdom on neural responses by demonstrating that the duration of the resting time (post-excitation) can exceed 20 milliseconds, that the resting period is sensitive to the origin of the input signal (e.g., left versus right), and that the neuron has a sharp transition from the refractory period to full responsiveness, without an intermediate stutter phase [15:30]. Researchers at Victoria University use brain cells to play Pong using electric signals and demonstrate that the cells learn much faster than current neural networks, reaching the point living systems reach after 10 or 15 rallies, vice 5,000 rallies for computer-based AIs [19:37]. MIT researchers present evidence that ML is starting to look like human cognition, comparing various aspects of how neural networks and human brains accomplish their tasks [24:34]. And OpenAI creates GLIDE, a 3.5B-parameter text-to-image generation model that generates even higher-quality images than DALL-E, though it still has trouble with "highly unusual" scenarios [29:30]. The Santa Fe Institute publishes The Complex Alternative: Complexity Scientists on the COVID-19 Pandemic, 800 pages on how complexity interwove through the pandemic [33:50]. And Chris Peter creates an algorithm that generates a short movie after watching Hitchcock's Vertigo 20 times [35:22]. Please visit our website to explore the links mentioned in this episode. | |||
12 Feb 2021 | Tempus Fluit | 00:35:32 | |
In COVID-related AI news, Andy and Dave discuss research from Texas A&M, Wisconsin-Milwaukee, and SUNY Binghamton, which demonstrates an automatic system for monitoring the physical distance and face-mask wearing of construction workers, showing how surveillance is rapidly becoming a widely available commodity technology. In regular news, the National Security Commission on AI releases its draft final report, which makes sweeping recommendations on AI as a constellation of technologies. The nominee for Deputy Secretary of Defense, Kathleen Hicks, mentions AI and the JAIC at several points during her testimony. The Information Technology & Innovation Foundation releases a report on "Who Is Winning the AI Race," using 30 different metrics to assess nations' progress in AI. Amnesty International launches a campaign against facial recognition, dubbed "Ban the Scan." And Scatter Lab pulls its Korean chatbot Lee Luda, after it started responding with racist and sexist comments to user inputs. In three "quick" research items, researchers at Massachusetts General Hospital and Harvard Medical School show that single neurons can encode information about others' beliefs. Researchers at MIT and the Institute of Science and Technology Austria introduce a new class of time-continuous recurrent neural network models, which they dub liquid time-constant networks; the approach reduces the size of networks by nearly two orders of magnitude for some tasks (the governing equation is sketched after this entry). And researchers at the University of Toronto, Microsoft Research, and Cornell University show that Maia, a custom version of AlphaZero, can learn to predict human actions, rather than the most likely winning move. The report of the week looks at The Immigration Preferences of Top AI Researchers. And the book of the week contains almost 40 chapters and 60 authors on a variety of special operations-related topics, in Strategic Latency Unleashed. | |||
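For the liquid time-constant item, the neuron's governing ODE as we understand it from the underlying work by Hasani and colleagues (our transcription, hedged): the effective time constant depends on the input, which is what makes the dynamics "liquid."

```latex
% Liquid time-constant neuron (reconstructed from Hasani et al.):
% x(t) is the hidden state, I(t) the input, \tau a fixed time constant,
% f a learned nonlinearity with parameters \theta, and A a learned bias.
\frac{dx(t)}{dt} = -\left[\frac{1}{\tau} + f\bigl(x(t), I(t), t, \theta\bigr)\right] x(t)
                 + f\bigl(x(t), I(t), t, \theta\bigr)\, A
```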
24 Jul 2020 | Life Is Like a Box of Matrices | 00:36:44 | |
Andy and Dave start with COVID-related AI news, and efforts from the Roche Data Science Coalition for UNCOVER (the United Network for COVID-19 Data Exploration and Research), which includes a curated collection of over 200 publicly available COVID-19-related datasets; efforts from Akai Kaeru are included. The Biomedical Engineering Society publishes an overview of emerging technologies to combat COVID-19. Zetane Systems uses machine learning to search the DrugVirus database and information from the National Center for Biotechnology Information to identify existing drugs that might be effective against COVID. And researchers at the Walter Reed Army Institute of Research are using machine learning to narrow down a space of 41 million compounds to identify candidates for further testing. And the IEEE hosted a conference on 9 July, "Does your COVID-19 tracing app follow you forever?" In non-COVID-related AI news, MIT takes offline the TinyImages dataset, due to its inclusion of derogatory terms and images. The second (actually first) wrongful arrest from facial recognition technology (again by the Detroit Police Department) comes to light. Appen Limited releases its annual "State of AI and ML" report, with a look at how businesses are (or aren't) considering AI technologies. Anaconda releases its 2020 State of Data Science survey results. And the International Baccalaureate Educational Foundation turns to machine learning algorithms to predict student grades, due to COVID-related cancelations of actual testing, much to the frustration of numerous students and parents. Research from the Vector Institute and the University of Toronto tackles analogy and Raven's Progressive Matrices with an ensemble of three neural networks for objects, attributes, and relationships. Researchers at the University of Sydney and the Imperial College London have established CompEngine, a collection of time-series data (over 24,000 initially) from a variety of fields, and have placed them into a common feature space; CompEngine then self-organizes the information based on empirical properties (a toy version of such a feature space follows this entry). Garfinkel, Shevtsov, and Guo make Modeling Life available for free. Meanwhile, Russell and Norvig release the not-so-free 4th edition of AI: A Modern Approach. Lex Fridman interviews Norvig in a video podcast. And Elias Henriksen creates the Computer Prophet, which generates metaphors from a database of collected sayings. Click here to visit our website and explore the links mentioned in the episode. | |||
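A toy version of CompEngine's common feature space (the real system computes thousands of features over each series; the three below are stand-ins we chose for illustration):

```python
# Toy time-series feature space: map any series to a few summary numbers so
# that series from different fields become comparable points. The actual
# CompEngine uses thousands of features; these three are illustrative.
import numpy as np

def features(ts: np.ndarray) -> np.ndarray:
    ts = (ts - ts.mean()) / ts.std()                  # normalize scale
    lag1 = float(np.corrcoef(ts[:-1], ts[1:])[0, 1])  # lag-1 autocorrelation
    spread = float(ts.max() - ts.min())               # normalized range
    rough = float(np.abs(np.diff(ts)).mean())         # average "roughness"
    return np.array([lag1, spread, rough])

sine = np.sin(np.linspace(0, 20, 500))
noise = np.random.default_rng(0).normal(size=500)
print(features(sine))    # smooth and highly autocorrelated
print(features(noise))   # rough, with near-zero autocorrelation
```

Nearby points in such a space correspond to empirically similar dynamics, regardless of the scientific field the data came from.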
18 Jun 2021 | No Time to AI | 00:36:33 | |
Andy and Dave discuss the latest in AI news, starting with the US Consumer Product Safety Commission report on AI and ML. The Deputy Secretary of Defense outlines Responsible AI Tenets, along with mandating the JAIC to start work on four activities for developing a responsible AI ecosystem. The Director of the US Chamber of Commerce's Center for Global Regulatory Cooperation outlines concerns with the European Commission's newly drafted rules on regulating AI. Amnesty International crowd-sources an effort to identify surveillance cameras that the New York City Police Department has in use, resulting in a map of over 15,000 camera locations. The Royal Navy uses AI for the first time at sea against live supersonic missiles. And the Ghost Fleet Overlord unmanned surface vessel program completes its second autonomous transit from the Gulf Coast, through the Panama Canal, and to the West Coast. Finally, CNA Russia Program team members Sam Bendett and Jeff Edmonds join Andy and Dave for a discussion on their latest report, which takes a comprehensive look at the ecosystem of AI in Russia, including its policies, resourcing, infrastructure, and activities. | |||
25 Feb 2022 | Short Circuit RACER | 00:43:49 | |
Andy and Dave discuss the latest in AI news and research, starting with the Aircrew Labor In-Cockpit Automation System (ALIAS) program from DARPA, which flew a UH-60A Black Hawk autonomously and without pilots on board, to include autonomous (simulated) obstacle avoidance [1:05]. Another DARPA program, Robotic Autonomy in Complex Environments with Resiliency (RACER), entered its first phase, focused on high-speed autonomous driving in unstructured environments, such as off-road terrain [2:39]. The National Science Board releases its State of U.S. Science and Engineering 2022 report, which shows the U.S. continues to lose its leadership position in global science and engineering [4:30]. The Undersecretary of Defense for Research and Engineering, Heidi Shyu, formally releases her office's technology priorities, 14 areas grouped into three categories: seed areas, effective adoption areas, and defense-specific areas [6:31]. In research, OpenAI creates InstructGPT in an attempt to better align language models to follow human instructions, resulting in a model with 100x fewer parameters than GPT-3 that provided a user-favored output 70% of the time, though it still suffers from toxic output [9:37]. DeepMind releases AlphaCode, which has succeeded in programming competitions with an average ranking in the top 54% across 10 contests with more than 5,000 participants each, though it tackles the problem with more of a brute-force, generate-and-filter approach (sketched after this entry) [14:42]. DeepMind and EPFL's Swiss Plasma Center also announce they have used reinforcement learning algorithms to control nuclear fusion, commanding the full set of magnetic control coils of a tokamak. Venture City publishes Timelapse of AI (2028 – 3000+), imagining how the next 1,000 years will play out for AI and the human race [18:25]. And finally, with the Russia-Ukraine conflict continuing to evolve, CNA's Russia Program experts Sam Bendett and Jeff Edmonds return to discuss what Russia has in its inventory when it comes to autonomy and how they might use it in this conflict, wrapping up insights from their recent paper on Russian Military Autonomy in a Ukraine Conflict [22:52]. Listener Note: The interview with Sam Bendett and Jeff Edmonds was recorded on Tuesday, February 22 at 1 pm. At the time of recording, Russia had not yet launched a full-scale invasion of Ukraine. https://www.cna.org/news/AI-Podcast
| |||
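At its core, AlphaCode’s brute-force element is a generate-and-filter loop: sample a very large number of candidate programs from a language model, discard any that fail the problem’s example tests, and cluster the survivors before submitting. A minimal sketch of that loop follows; `sample_program` is a hypothetical placeholder for the model’s sampler, not AlphaCode’s actual API:

```python
import subprocess

def passes_examples(source: str, examples) -> bool:
    # Run a candidate program on each example input; keep it only if all outputs match
    for stdin, expected in examples:
        try:
            result = subprocess.run(["python", "-c", source], input=stdin,
                                    capture_output=True, text=True, timeout=2)
        except subprocess.TimeoutExpired:
            return False
        if result.stdout.strip() != expected.strip():
            return False
    return True

def generate_and_filter(sample_program, examples, n_samples=1000):
    # Sample many candidates and filter by the example tests; AlphaCode additionally
    # clusters survivors by behavior on generated inputs before choosing submissions
    survivors = []
    for _ in range(n_samples):
        candidate = sample_program()  # hypothetical: draw one program from the model
        if passes_examples(candidate, examples):
            survivors.append(candidate)
    return survivors
```

The filtering step does most of the work: AlphaCode reportedly sampled on the order of a million candidates per problem, and the example tests prune all but a tiny fraction.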
15 Jul 2022 | AI-chemy 2: This Time It's Personal | 00:23:50 | |
Andy and Dave discuss the latest in AI news and research, including an update from DARPA on its Machine Common Sense program, with demonstrations of robots rapidly adapting to changing terrain, carrying dynamic loads, and understanding how to grasp objects [0:55]. The Israeli military fields new tech from Camero-Tech that allows operators to ‘see through walls,’ using pulse-based ultra-wideband micro-power radar in combination with an AI-based algorithm for tracking live targets [5:01]. In autonomous shipping [8:13], the Suzaka, a cargo ship powered by Orca AI, makes a nearly 500-mile voyage “without human intervention” for 99% of the trip; the Prism Courage sails from the Gulf of Mexico to South Korea “controlled mostly” by HiNAS 2.0, a system by Avikus, a subsidiary of Hyundai; and Promare’s and IBM’s Mayflower Autonomous Ship travels from the UK to Nova Scotia. In large language models [10:09], a Chinese research team unveils a 174-trillion-parameter model, Bagualu (‘alchemist pot’), and claims it runs an AI model as sophisticated as a human brain (not quite, though); Meta releases the largest open-source AI language model to date, OPT-66B, a 66-billion-parameter model; and Russia’s Yandex opens its 100-billion-parameter YaLM to public access. Researchers from the University of Chicago publish a model that can predict future crimes “one week in advance with about 90% accuracy” (referring to general crime levels, not specific people and exact locations), and also demonstrate the potential effects of bias in police response and enforcement [13:32]. In a similar vein, researchers from Berkeley, MIT, and Oxford publish attempts to forecast future world events using the neural network system Autocast, and show that forecasting performance still comes in far below a human-expert baseline [16:37]. Angelo Cangelosi and Minoru Asada provide the (graduate) book of the week, with Cognitive Robotics. | |||
26 Mar 2021 | Diplomachine | 00:33:29 | |
Andy and Dave discuss the latest in AI news, including the release of the U.S. Navy and Marine Corps Unmanned Campaign Framework, which describes the desired approach to developing and deploying unmanned systems. Google employees demand stronger laws to protect AI researchers, in the wake of the firings of Gebru and Mitchell. Hour One debuts technology that creates fully digital and photorealistic AI personas for content creation, such as a welcome receptionist or an information desk. Pennsylvania state law now allows autonomous delivery robots to use sidewalks and operate on roads. The U.S. Army announces the availability of a training set for facial recognition that also includes thermal camera images, which it will make available for “valid scientific research.” In research, Facebook AI demonstrates an algorithm capable of human-level performance in Diplomacy (no-press), using an equilibrium search to reason about what the other players are reasoning; the algorithm achieved a rank of 23 out of 1,128 human players. Researchers in Helsinki and Germany explore the effects of the Uncanny Valley, suggesting that a robot’s appearance changes how humans judge its decisions. The Resource of the Week comes via Pete Skomoroch, who pointed out that Wikipedia contains a massive list of datasets for machine learning research (along with useful summary details about each dataset). The Book of the Week is Telling Stories, with authors from around the globe bringing culturally different perspectives on tales of AI. And the Videos of the Week come from MIT, which has published its Introduction to Deep Learning course online, with free access.
Click here to visit our website and explore the links mentioned in the episode. | |||
04 Dec 2020 | Underbyte | 00:33:02 | |
In COVID-related AI news, Andy and Dave discuss research from MIT, IBM, and Harvard Medical School, which uses machine learning on Reddit posts to track the pandemic’s impact on mental health. And the UK is planning to use AI to spot dangerous side effects in COVID vaccinations. In non-COVID AI news, Andy and Dave take a look at how the AI-based poll predictions fared in the 2020 US election. The White House issues guidance for federal agencies on AI applications. The University of Copenhagen makes Carbontracker available, which estimates the energy consumption of training deep learning algorithms (a usage sketch appears below). DARPA selects five teams to head to the next phase of its Air Combat Evolution competition. And the 34th Neural Information Processing Systems (NeurIPS) conference plans for virtual proceedings in early December. In research, 40 authors from Google publish findings on the challenges of deploying an AI system into the real world, such as unexpectedly poor behavior, which they attribute to underspecification. The Marine Corps University Press releases the second volume of Destination Unknown. Andy’s “vintage magazine of the week” is the April 1985 issue of Byte, which covered Artificial Intelligence. And Matt Stone and Trey Parker introduce Sassy Justice, a parody comedy that warns of the dangers of deepfakes by itself being a series of deepfakes (including President Trump, Facebook CEO Mark Zuckerberg, former Vice President Al Gore, and many others). Click here to visit our website and explore the links mentioned in the episode. | |||
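For readers who want to try Carbontracker, it ships as a pip-installable Python package; a minimal usage sketch, assuming the API as described in the project’s documentation (the training function is a hypothetical placeholder):

```python
from carbontracker.tracker import CarbonTracker

max_epochs = 10
tracker = CarbonTracker(epochs=max_epochs)  # monitors early epochs, then predicts the total

for epoch in range(max_epochs):
    tracker.epoch_start()
    train_one_epoch()  # hypothetical placeholder for your training loop
    tracker.epoch_end()

tracker.stop()  # ends tracking and reports estimated energy use and CO2-equivalent
```

After the first monitored epoch, the tool reports both actual consumption so far and a forecast for the full training run.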
07 Aug 2020 | Bots Behaving Badly | 00:37:54 | |
In COVID-related AI news, Tencent AI Labs publishes a “machine learning” model that can predict the risk of a coronavirus patient developing severe illness. Unsupervised machine learning on data from the U.K.’s COVID Symptom Tracker, which has more than 4 million users, suggests patients cluster into roughly 6 different symptom types. Amazon Web Services releases its version of a scientific literature search on COVID-19. Aminer.org offers an open access knowledge graph of COVID-19. And “Digital Contact Tracing for Pandemic Response” takes a look at global approaches and results with implementing contact tracing. In regular AI news, the National Security Commission on AI releases its latest quarterly report, with 35 recommendations. The latest Congressional Research Service report covers emerging military technologies, including AI and LAWS. Facebook rolls out a “bot army” to simulate “bad behavior” on a parallel version of its platform, in an effort to understand and combat online abuse. In research, DeepMind publishes findings on reinforcement learning, with a meta-learning approach that discovers an update rule that includes “what to predict” as well as “how to learn from it.” Research from Berkeley, DeepMind, and MIT explores exploration, comparing how children and reinforcement learning agents learn in a unified environment. Military Review publishes an article by Courtney Crosby, which describes a framework for operationalizing AI for algorithmic warfare. DeepMind and University College London examine deep reinforcement learning and its implications for neuroscience. And MIT makes available online a full lecture series by Marvin Minsky on “The Society of Mind.” Click here to visit our website and explore the links mentioned in the episode. | |||
23 Apr 2021 | Donkey Pong | 00:39:14 | |
Andy and Dave discuss the latest in AI news, including the National Intelligence Council’s 7th edition Global Trends 2040 report, which sprinkles the importance of AI and ML throughout future trends. A BuzzFeed report claims that the NYPD has misled the public about its use of the facial recognition tool Clearview AI, having run over 5,100 searches with the tool. European activist groups ask the European Commission to ban facial recognition completely, with calls to protect “fundamental rights” in Europe. A report in Digital Medicine examines the diagnostic accuracy of deep learning in medical imaging studies and calls for an immediate need to develop AI guidelines. Neuralink shows the latest on its brain-computer interface device, with a demonstration of a monkey playing Pong with its brain. And the Director of the JAIC, Lt Gen Groen, and the co-chair of the NSCAI, Bob Work, speak for about an hour on the use and direction of AI in the Department of Defense. In research, Andrew Jones examines how different parameters scale with board games, identifying the scaling of scaling laws. Researchers from AIST, Tokyo Institute of Technology, and Tokyo Denki University demonstrate that they can pre-train a CNN using no natural images, instead using digital images generated from fractals. In the paper of the week, Ben Goertzel provides his general theory of general intelligence. And the fun site of the week features the 1996 game “Creatures,” with a look into the AI that made them come alive. Listeners Survey: https://bit.ly/3bqyiHk Click here to visit our website and explore the links mentioned in the episode. | |||
02 Jul 2021 | Rebroadcast: Xen and the Art of Motorcell Maintenance | 00:40:25 | |
Andy and Dave discuss the latest in AI news, including the European Commission’s proposal for the regulation of AI. A report in Nature Medicine examines the limitations of the evaluation process for medical devices using AI that the FDA approves. Researchers at MIT translate spider webs into sounds to explore how spiders might sense their world, and they use machine learning to classify sounds by spider activities. An NIH panel releases its preliminary ethics rules on making brain-like structures such as neural organoids and neural transplants, and finds little evidence that these structures experience humanlike consciousness or pain. And Andy and Dave spend some time with xenobot researchers Sam Kriegman and Doug Blackiston, who discuss the motivations and findings behind their latest generation of xenobots, synthetic living machines that they have been designing and building in their labs. | |||
20 Nov 2020 | A.I. in the Sky | 00:35:56 | |
Andy and Dave welcome Arthur Holland Michel to the podcast for a discussion on predictability and understandability in military AI. Arthur is an Associate Researcher at the United Nations Institute for Disarmament Research, a Senior Fellow at the Carnegie Council for Ethics in International Affairs, and author of the book Eyes in the Sky: The Secret Rise of Gorgon Stare and How It Will Watch Us All. Arthur recently published The Black Box, Unlocked: Predictability and Understandability in Military AI, and the three discuss the inherent challenges of artificial intelligence and of creating definitions that enable meaningful global discussion on AI. Click here to visit our website and explore the links mentioned in the episode. | |||
19 Mar 2021 | Datalore SemaFor | 00:32:46 | |
Andy and Dave discuss the latest in AI news, including an announcement from Facebook AI that it achieved state-of-the-art computer vision performance with its SEER model, by learning from one billion (with a ‘b’) random, unlabeled, and uncurated public Instagram images, reaching 84% top-1 accuracy on 13k images from ImageNet. DARPA launches a new Perceptually-enabled Task Guidance (PTG) program to help humans perform complex tasks (such as through augmented reality); the effort will include both fundamental research and integrated demonstrations. DARPA also announces the research teams for its Semantic Forensics (SemaFor) effort at probing media manipulations. Chris Ume, a Belgian visual effects artist, releases four deepfake videos of Tom Cruise, using two NVIDIA GPUs, two months of training time, and further days of processing and tweaking for each clip. Researchers at the University of Washington, Berkeley, and Google Research use the StyleGAN2 framework to create “time-travel photography,” which peels away the limitations of early cameras to reveal restored images of the original photos; the effort also involves the creation of a modern “sibling,” which then gets merged with the original. OpenAI publishes the discovery that neurons in its CLIP network respond to the same concept whether it appears literally, symbolically (e.g., a sketch), or conceptually (e.g., text); they also discover an absurdly simple attack, which involves placing a handwritten label (such as the word “iPod”) onto an item (a sketch for reproducing the effect appears below). The report of the week from UNICEF looks at Adolescent Perspectives on AI, with insights from 245 adolescents from five countries. Montreal.AI provides a 33-page “cheat sheet” with condensed information and links on AI topics. The book of the week from E-IR examines Remote Warfare: Interdisciplinary Perspectives. And the fun site of the week, MyHeritage, lets users animate photos, or “re-animate your dead loved ones.” Listeners Survey: https://bit.ly/3bqyiHk Click here to visit our website and explore the links mentioned in the episode. | |||
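OpenAI released CLIP itself as an installable package, so the typographic attack is easy to reproduce in spirit. A hedged sketch of CLIP zero-shot classification, assuming the openai/CLIP package; the image files are placeholders for a photo of an apple with and without a handwritten “iPod” label:

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

labels = ["an apple", "an iPod"]
text = clip.tokenize(labels).to(device)

for path in ["apple.jpg", "apple_with_ipod_label.jpg"]:  # placeholder image files
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        logits_per_image, _ = model(image, text)  # similarity of the image to each caption
        probs = logits_per_image.softmax(dim=-1).squeeze().tolist()
    print(path, dict(zip(labels, probs)))
```

In OpenAI’s demonstration, the handwritten label alone was enough to flip the classification from apple to iPod, because the same “multimodal neurons” fire for text about a concept as for images of it.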
19 Feb 2021 | D.E.R.Y.L. | 00:34:55 | |
In news, Andy and Dave discuss a machine learning algorithm from Synergies Intelligent System and Universität Hamburg that can identify people in a moving crowd who are most likely asymptomatic carriers of COVID-19. US lawmakers have introduced the Public Health Emergency Privacy Act, to boost privacy protections for COVID-19 technology such as tracing apps and vaccine scheduling apps. A team led by researchers from Oxford has introduced new reporting guidelines, dubbed DECIDE-AI, to bridge the gap between development and implementation when using clinical AI technologies. Over 30 authors from a wide swath of organizations have proposed a “living benchmark” to evaluate progress in natural language generation, which they call GEM (Generation, Evaluation, and Metrics). And in the combination we saw coming, research from Queen Mary University demonstrates a deep learning framework for detecting emotion using wireless signals. Researchers at the University of Virginia claim to detect physiological responses to racial bias with 76.1% accuracy, though the work focuses more on exploring links between mental associations and skin color. In research, Stanford researchers explore how learning and evolution occur in complex environments and how they affect the diversity of morphological forms, with DERL (Deep Evolutionary Reinforcement Learning). Researchers from the University of Illinois Urbana-Champaign introduce GAN-based editing of images via their latent space, which provides greater control over editing (e.g., editing a mouth without re-generating the entire face). And in the video of the week, a 12-minute video provides a short history of DARPA, with highlights on many of its military robot programs. | |||
16 Oct 2020 | The Robohattan Project | 00:35:31 | |
The bipartisan Future of Defense Task Force releases its 2020 report, which includes the suggestion of using the Manhattan Project as a model to develop AI technologies. Facebook AI releases Dynabench as a way to dynamically benchmark the performance of machine learning algorithms. Amsterdam and Helsinki launch AI registers that explain how they use algorithms, in an effort to increase transparency. In research, the Allen Institute for AI, the University of Washington, and the University of North Carolina publish research on X-LXMERT (learning cross-modality encoder representations from transformers), which trains a transformer on both text and images so that it can generate images from scratch given descriptions (e.g., a large clock tower in the middle of a town). Researchers at Swarthmore College and Los Alamos National Laboratory demonstrate the challenges that neural networks of various sizes have in learning Conway’s Game of Life (the reference update rule appears below). Maria Jeansson, Claudio Sanna, and Antoine Cully create a stunning visual infographic on “automated futures” technologies. And Joshua Epstein, a longtime expert in agent-based modeling, delivers the European Social Simulation Association’s award keynote speech. Click here to visit our website and explore the links mentioned in the episode. | |||
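For context on why the Game of Life is an appealing minimal learning target: the full update rule is a tiny, exactly specified function of each cell’s eight-neighbor count, so a network’s failure to learn it is easy to diagnose. The ground-truth rule fits in a few lines of numpy:

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    # One Game of Life update on a 2D 0/1 array with wraparound edges
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # Alive next step: exactly 3 neighbors, or 2 neighbors and currently alive
    return ((neighbors == 3) | ((neighbors == 2) & (grid == 1))).astype(grid.dtype)

# Example: a glider, which translates itself diagonally every four steps
grid = np.zeros((8, 8), dtype=np.uint8)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
print(life_step(grid))
```

The study’s finding was that networks just large enough in principle to represent this rule rarely learn it from data, and that substantial overparameterization is usually needed.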
28 May 2021 | Rebroadcast: A.I. in the Sky | 00:36:08 | |
Andy and Dave welcome Arthur Holland Michel to the podcast for a discussion on predictability and understandability in military AI. Arthur is an Associate Researcher at the United Nations Institute for Disarmament Research, a Senior Fellow at the Carnegie Council for Ethics in International Affairs, and author of the book Eyes in the Sky: The Secret Rise of Gorgon Stare and How It Will Watch Us All. Arthur recently published The Black Box, Unlocked: Predictability and Understandability in Military AI, and the three discuss the inherent challenges of artificial intelligence and of creating definitions that enable meaningful global discussion on AI. | |||
31 Dec 2021 | Rebroadcast: AI Today, Tomorrow, & Forever | 00:42:57 | |
Andy and Dave welcome the hosts of the weekly podcast AI Today, Kathleen Walch and Ronald Schmelzer. On AI Today, Kathleen and Ron discuss topics related to how AI is making impacts around the globe, with a focus on having discussions with industry and business leaders to get their thoughts and perspectives on AI technologies, applications, and implementation challenges. Ron and Kathleen also co-founded Cognilytica, an AI research, education, and advisory firm. The four podcast hosts discuss a variety of topics, including the origins of the AI Today podcast, AI trends in industry and business, AI winters, and the importance of education. | |||
16 Dec 2022 | The Kwicker Man | 00:32:18 | |
Andy and Dave discuss the latest in AI news and research, including the release of the US National Defense Authorization Act for FY2023, which includes over 200 mentions of “AI” and many more requirements for the Department of Defense. DoD has also awarded its cloud-computing contracts, not to one company, but four: Amazon, Google, Microsoft, and Oracle. At the end of November, the San Francisco Board voted to allow the police force to use robots to administer deadly force; however, after a nearly immediate response from a “No Killer Robots” campaign, the board passed a revised version of the policy in early December that prohibits police from using robots to kill people. Israeli company Elbit unveils its LANIUS drone, a “drone-based loitering munition” that can carry lethal or non-lethal payloads and appears to have many functions similar to the ‘slaughterbots,’ except for autonomous targeting. Neuralink shows the latest updates on its research for putting a brain-chip interface into humans, with demonstrations of a monkey manipulating a mouse cursor with its thoughts; the company also faces a federal investigation into possible animal-welfare violations. DeepMind publishes AlphaCode in Science, a story that we covered back in February. DeepMind also introduces DeepNash, an autonomous agent that can play Stratego. OpenAI unleashes ChatGPT, a spin-off of GPT-3 optimized for answering questions through back-and-forth dialogue. Meanwhile, Stack Overflow, a website for programmers, temporarily bans users from sharing responses generated by ChatGPT, because the output of the algorithm might look good but has “a high rate of being incorrect.” Researchers at the Weizmann Institute of Science demonstrate that, with a simple neural network, it is possible to reconstruct a “large portion” of the actual training samples. NOMIC provides an interactive map to explore over 6M images from Stable Diffusion. Steve Coulson creates “AI-assisted comics” using Midjourney. Stay tuned for AI Debate 3 on 23 December 2022. And the video of the week from Ricard Sole at the Santa Fe Institute explores mapping the cognition space of liquid and solid brains. | |||
09 Oct 2020 | Tell-Tale Heart | 00:25:47 | |
In COVID-related AI news, Youyang Gu provides world- and county-level COVID-19 predictions using machine learning, along with a rolling examination of accuracy. In regular AI news, a military coalition of 13 countries meets to discuss the use and ethics of AI. Orcan Intelligence provides a deeper look into Europeans’ concerns about AI technologies. Ben Lee and the Library of Congress unveil the full open version of the Newspaper Navigator, which provides access to 1.56 million photographs from newspapers. Research from Intel and Binghamton University uses the pulse of the beating heart to identify deepfake videos with 97% accuracy (a toy sketch of the underlying idea appears below). And Arthur Holland Michel publishes The Black Box, Unlocked: Predictability and Understandability in Military AI. Click here to visit our website and explore the links mentioned in the episode. | |||
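The heartbeat result rests on remote photoplethysmography: skin color fluctuates almost imperceptibly with each pulse, and deepfake synthesis tends to weaken or scramble that signal. A toy illustration of the general principle, numpy only; this is not the Intel/Binghamton pipeline, which extracts signals from multiple facial regions and trains a classifier on them:

```python
import numpy as np

def pulse_spectrum(frames: np.ndarray, fps: float):
    """frames: (T, H, W, 3) uint8 video crop of facial skin.
    Returns candidate pulse frequencies (Hz) and their spectral power."""
    green = frames[..., 1].reshape(len(frames), -1).mean(axis=1)  # mean green value per frame
    green = green - green.mean()                                  # drop the DC component
    power = np.abs(np.fft.rfft(green)) ** 2
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
    band = (freqs > 0.7) & (freqs < 4.0)  # plausible human pulse: roughly 42-240 bpm
    return freqs[band], power[band]
```

A real face shows a strong, spatially consistent peak in that band; a synthesized face generally does not, which is the cue the detector learns.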
13 Aug 2021 | XLand, Simulation of Sweet Adventures | 00:32:10 | |
Andy and Dave discuss the latest in AI news, including a story from MIT Technology Review (which echoes observations made previously on AI with AI) that “hundreds of AI tools have been built to catch COVID. None of them helped.” DeepMind has used its AlphaFold program to identify the structure of 98.5 percent of roughly 20,000 human proteins and will make the information publicly available. The Pentagon makes use of machine learning algorithms to create decision space in the latest of its Global Information Dominance Experiments. An Australian court rules that AI systems can be “inventors” under patent law (but not “owners”), and South Africa issues the world’s first patent to an “AI system.” The United States Special Operations Command put 300 of its personnel through a unique six-week crash course in AI, with guest leaders including former Google CEO Eric Schmidt and former Defense Secretary Ash Carter. And President Biden nominates Stanford professor Ramin Toloui, who has experience with AI technologies and impacts, as an Assistant Secretary of State for business. In research, DeepMind develops agents capable of “open-ended learning” in XLand, an environment with diverse tasks and challenges. A survey in the Journal of AI Research finds that AI researchers have varying amounts of trust in different organizations, companies, and governments. The Journal of Strategic Studies dedicates an issue to emerging technologies, with free access. Mine Cetinkaya-Rundel and Johanna Hardin make Introduction to Modern Statistics available open access (with proceeds going to OpenIntro, a US-based nonprofit). And Iyad Rahwan curates a collection of evil AI cartoons. Follow the link below to visit our website and explore the links mentioned in the episode. | |||
26 Aug 2022 | EPIC BLOOM | 00:35:19 | |
Andy and Dave discuss the latest in AI and autonomy news and research, including an announcement that the Federal Trade Commission is exploring rules for cracking down on harmful commercial surveillance and lax data security, with the public having an opportunity to share input during a virtual public forum on 8 September 2022. The Electronic Privacy Information Center (EPIC), with help from Caroline Kraczon, releases The State of State AI Policy, a catalog of AI-related bills that states and local governments have passed, introduced, or failed during the 2021-2022 legislative season. In robotics, Xiaomi introduces CyberOne, a 5-foot-9-inch robot that can identify “85 types of environmental sounds and 45 classifications of human emotions.” Meanwhile, at a recent Russian arms fair, Army-2022, a developer showed off a robot dog with a rocket-propelled grenade strapped to its back. NIST updates its AI Risk Management Framework to the second draft, making it available for review and comment. DARPA launches the SocialCyber project, a hybrid-AI project aimed at helping to protect the integrity of open-source code. BigScience launches BLOOM (BigScience Large Open-science Open-access Multilingual Language Model), a “bigger than GPT-3” multilanguage (46) model created by a group of over 1,000 AI researchers, which anyone can download and tinker with for free (a minimal loading sketch appears below). Researchers at MIT develop artificial synapses that shuttle protons, resulting in synapses 10,000 times faster than biological ones. China’s Comprehensive National Science Center claims that it has developed “mind-reading AI” capable of measuring loyalty to the Chinese Communist Party. Researchers at the University of Sydney demonstrate that signals in the human brain can identify deepfakes better than people’s explicit judgments, by examining results directly from neural activity. Researchers at the University of Glasgow combine AI with human vision to see around corners, reconstructing 16x16-pixel images of simple objects that the observer could not directly see. GoogleAI publishes research on Minerva, using language models to solve quantitative reasoning problems and dramatically improving the state of the art. Researchers from MIT, Columbia, Harvard, and Waterloo publish work on a neural network that solves, explains, and generates university math problems “at a human level.” CSET makes available the Country Activity Tracker for AI, an interactive tool on tech competitiveness and collaboration. And a group of researchers at UC Merced’s Cognitive and Information Sciences Program make available Neural Networks in Cognitive Science. | |||
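“Download and tinker” is meant literally: BLOOM’s weights are hosted on the Hugging Face Hub. A minimal generation sketch with the transformers library, using one of the small BLOOM variants (the full 176-billion-parameter model needs hundreds of gigabytes of memory):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "bigscience/bloom" is the full model; the 560M-parameter sibling runs on a laptop
name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("The BigScience workshop trained BLOOM to", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same two calls work across the checkpoint sizes in the BLOOM family, which is what makes the release genuinely tinker-friendly.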
30 Oct 2020 | Lone Hacker and Child | 00:31:38 | |
In COVID-related AI news, Andy and Dave discuss the COVID-19 Grand Challenge from C3.ai. In non-COVID AI news, the Department of Defense releases its Data Strategy. The Defense Science Board publishes a report on Counter Autonomy. The National Security Commission on AI releases its 3rd Quarter interim report and recommendations. The Center for Security and Emerging Technology releases a report on Building Trust through Testing. And the US Patent and Trademark Office publishes the responses to its initial queries, in Public Views on AI and Intellectual Property Policy. Researchers from MIT and Berkeley explore the idea that children’s learning is analogous to hacking: making code better through an open-ended set of goals and activities. Nathan Benaich and Ian Hogarth release the State of AI Report 2020, which examines the latest developments in AI research across a variety of areas (observing, for example, that only 15% of papers publish their code). And Taylor and Dorin publish Rise of the Self-Replicators: Early Visions of Machines, AI and Robots that Reproduce and Evolve. Click here to visit our website and explore the links mentioned in the episode. | |||
18 Sep 2020 | [Abstraction Intensifies] | 00:36:40 | |
In COVID-related AI news, a report from Cambridge University and the University of Manchester examines recent studies on using chest x-rays and CT scans to detect and diagnose COVID, and finds that only 29 of 168 studies had reproducible results; the report further found that all of the studies had high or unclear risk of bias, such that none of the studies had value for use in clinics. CSET provides an overview of how China has used AI in its COVID-19 response. In non-COVID AI news, a GAO report finds systemic problems with facial recognition technology at U.S. airports. The University College London provides an overview of AI’s use in crime, with deepfakes ranked as the most concerning. Researchers at the University of Warwick and the Alan Turing Institute develop a machine learning algorithm to identify potential planets from astronomy data. And NASA uses an algorithm to predict more accurately when hurricanes will rapidly intensify. In research, MIT, the MIT-IBM Watson AI Lab, and Columbia University present a machine learning model to abstract relations in videos of everyday actions. Researchers in the Netherlands demonstrate that (large!) adversarial patches can work against surveillance imagery of military assets on the ground (a schematic of how such patches are trained appears below). The UN Interregional Crime and Justice Research Institute releases a Special Collection on AI. Researchers in Germany and Korea provide a view of continual and open-world learning. And Georgia Tech provides the People Map as a way to discover research expertise at an institution. Click here to visit our website and explore the links mentioned in the episode. | |||
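For a sense of how such patches are made: a standard recipe treats the patch pixels as the trainable parameters and ascends the victim model’s loss. A generic PyTorch schematic, assuming a classifier for simplicity; the Dutch team targeted object detectors in overhead imagery, and real attacks also randomize patch placement, scale, and lighting:

```python
import torch
import torch.nn.functional as F

def train_patch(model, loader, patch_size=64, steps=500, lr=0.05, device="cpu"):
    # Optimize a square patch so that pasting it onto images maximizes classifier loss
    patch = torch.rand(3, patch_size, patch_size, device=device, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    model.eval()
    for _, (images, labels) in zip(range(steps), loader):
        x = images.to(device).clone()
        x[:, :, :patch_size, :patch_size] = patch.clamp(0, 1)  # paste at a fixed corner
        loss = -F.cross_entropy(model(x), labels.to(device))   # negated: gradient ascent
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach()
```

The overhead-imagery result matters because a successful patch can be physically printed and placed near an asset, rather than requiring digital access to the image pipeline.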
28 Jan 2022 | Xenadu | 00:41:55 | |
Andy and Dave discuss the latest in AI news and research, including an update from the DARPA OFFSET (OFFensive Swarm-Enabled Tactics) program, which demonstrated the use of swarms in a field exercise, to include one event that used 130 physical drone platforms along with 30 simulated ones [0:33]. DARPA’s GARD (Guaranteeing AI Robustness against Deception) program has released a toolkit to help AI developers test their models against attacks. Undersecretary of Defense for Research and Engineering Heidi Shyu announced DoD’s technical priorities, including AI and autonomy, hypersonics, quantum, and others; Shyu expressed a focus on easy-to-use human/machine interfaces [3:35]. The White House AI Initiative Office opened an AI Public Researchers Portal to help connect AI researchers with various federal resources and grant-funding programs [8:44]. A Tesla driver faces felony charges (likely a first) for a fatal crash in which Autopilot was in use, though the criminal charges do not mention the technology [12:23]. In research, MIT’s CSAIL publishes (worrisome) research on convolutional neural networks that maintain high classification accuracy even in the absence of “semantically salient features” (such as when most of the image is grayed out); the research also contains a useful list of known image classifier model flaws [18:29]. David Ha and Yujin Tang, at Google Brain in Tokyo, publish a white paper surveying recent developments in collective intelligence for deep learning [19:46]. Roman Garnett makes available a graduate-level book on Bayesian optimization. And Doug Blackiston returns to chat about the latest discoveries with the xenobots research and kinematic self-replication [21:54]. | |||
12 Aug 2022 | Searching for Robot Pincher | 00:45:49 | |
Andy and Dave discuss the latest in AI news and research, including an announcement from DeepMind that it is freely providing a database of 200+ million protein structures as predicted by AlphaFold. Researchers at the Max Planck Institute for Intelligent Systems demonstrate how a robot dog can learn to walk in about one hour using a Bayesian optimization algorithm (a schematic sketch appears below). A chess-playing robot breaks the finger of a seven-year-old boy during a chess match in Moscow. A bill before the Senate Armed Services Committee would require the Department of Defense to accelerate the fielding of new technology to defeat drone swarms. The Chief of Naval Operations Navigation Plan 2022 aims to add 150 uncrewed vessels by 2045. The text-to-image transformer DALL-E is now available in beta. Researchers at Columbia University use an algorithm to identify possible state variables from observations of systems (such as a double pendulum) and discover “alternate physics”; the algorithm discovers the intrinsic dimension of the observed dynamics and identifies a candidate set of state variables, but in most cases the scientists found it difficult (if not impossible) to map those variables to known phenomena. Wolfram Media and Etienne Bernard make the Mathematica-based Introduction to Machine Learning available for free. And Jeff Edmonds and Sam Bendett join for a discussion on their latest report, Russian Military Autonomy in Ukraine: Four Months In, a closer look at the use of unmanned systems by both Russia and Ukraine. https://www.cna.org/our-media/podcasts/ai-with-ai | |||
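The robot-dog result is mostly about sample efficiency: Bayesian optimization fits a probabilistic surrogate of “gait parameters → walking performance” and picks each next hardware trial to balance exploration against exploitation, so a few dozen real-world trials can suffice. A hedged sketch with scikit-optimize; the gait parameters, bounds, and trial harness below are hypothetical placeholders, not the Max Planck controller:

```python
from skopt import gp_minimize  # pip install scikit-optimize

def walking_cost(params):
    # Hypothetical: run one real-robot trial with these gait parameters and
    # return negative forward speed, so that lower cost means faster walking
    step_freq, stride_len, body_height = params
    return -run_robot_trial(step_freq, stride_len, body_height)  # assumed harness

result = gp_minimize(
    walking_cost,
    dimensions=[(0.5, 3.0), (0.05, 0.30), (0.15, 0.35)],  # illustrative bounds
    n_calls=30,        # ~30 hardware trials of a couple of minutes each, about an hour
    random_state=0,
)
print("best gait parameters found:", result.x)
```

Each call to `walking_cost` is a real trial on the robot, which is why the trial budget, not compute, dominates the wall-clock hour.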
25 Nov 2022 | The AI Who Loved Me | 00:30:44 | |
Andy and Dave once again welcome Sam Bendett, research analyst with CNA’s Russia Studies Program, to the podcast to discuss the latest unmanned and autonomous systems news from the Russia-Ukraine conflict. The group discusses the use and role of commercial quadcopters, the recent Black Sea incident involving unmanned systems, and the supply of Iranian systems to Russia. They also discuss the Wagner Group’s research and development center and its potential role in the conflict. Links mentioned in the episode: Will Ukraine deploy lethal autonomous drones against Russia?; the PMC Wagner Center; Russia’s Lancet; the coordinated drone attack at Sevastopol; the Iranian supply of drones to Russia; and Russia’s “brain drain” problem. | |||
15 Jan 2021 | Always Look on the Bright Side of Life | 00:35:53 | |
In COVID-related news, Andy and Dave discuss a commercial AI model from Biocogniv that predicts COVID-19 infection using only blood tests, with a 95% sensitivity and a 49% specificity. In a story that highlights the general challenge with algorithms, Stanford reported challenges in using a rules-based algorithm to determine the priority of vaccine distribution, when it omitted front-line doctors from the initial distribution. In non-COVID AI news, Vincent Boucher and Gary Marcus organize a second “AI Debate” on the topic of Moving AI Forward: An Interdisciplinary Approach, which included Daniel Kahneman, Christof Koch, Judea Pearl, Fei-Fei Li, Margaret Mitchell, and many others. Reuters reports that Google’s PR, policy, and legal teams have been editing AI research papers in order to give them a more positive tone and to reduce discussions of the potential drawbacks of the technology. And Microsoft patents a “chat bot technology” that would seek to reincarnate deceased people. In research, Google announces MuZero, which masters chess, Go, shogi, and Atari games by planning with a learned model (and no information on the rules). Jeff Heaton provides the book of the week, with Applications of Deep Neural Networks. A survey paper from four universities looks at Data Security for Machine Learning. Another survey paper examines how researchers develop and use datasets for machine learning research. And the ConwayLife.com community celebrates the 50th anniversary of the Game of Life, to include an online simulator called the Exploratorium. Click here to visit our website and explore the links mentioned in the episode. | |||
26 Feb 2021 | The Low-Res Valley | 00:37:05 | |
In AI news, researchers from the University of Copenhagen develop a machine learning model that estimates the risk of death due to COVID at various stages of a hospital stay, including 80 percent accuracy in predicting whether a patient with COVID will require a respirator. The Joint AI Center makes a double announcement: the Tradewind Initiative, which seeks to develop an acquisition ecosystem to speed the delivery of AI capabilities, and Blanket Purchase Agreements for AI testing and evaluation services. Kaggle publishes a survey on the 2020 State of Data Science and ML, which examines information from ~2,000 data scientists about their jobs and their experiences. PeopleTec releases an “Overhead MNIST,” a dataset containing benchmark satellite imagery for 10 categories (parking lots, cars, planes, storage tanks, and others). Epic’s Unreal Engine introduces the MetaHuman Creator for release later this year, which purports to create ultra-realistic visuals for virtual human characters; Andy uses the moment to describe the “Uncanny Valley,” which the Epic tech might manage to leap out of. And researchers from Carnegie Mellon and George Washington show that, like language transformers, image representations contain human-like biases. In research, researchers at the Israel Institute of Technology create a Ramanujan Machine, which can generate conjectures about mathematical constants without proof (a toy version of the idea appears below). Researchers demonstrate initial steps toward reconstructing video from brain activity. The report of the week examines U.S. public opinion on AI, with views on declining support for development and divided views on facial recognition. DeepMind London approaches the topic of common sense from the viewpoint of animals. And the book of the week comes from the author of the aforementioned paper, Murray Shanahan, and his 2010 book Embodiment and the Inner Life. Listeners Survey: https://bit.ly/3bqyiHk | |||
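The Ramanujan Machine’s core move is numerical conjecture generation: enumerate parameterized families of continued fractions, evaluate each to high precision, and flag any that match a known constant; each match becomes a conjecture awaiting proof. A toy brute-force version of the idea, with a linear-coefficient family far smaller than the project’s real search spaces:

```python
from fractions import Fraction
import math

def eval_cf(a, b, depth=60):
    # Evaluate a(0) + b(1)/(a(1) + b(2)/(a(2) + ...)) truncated at 'depth' terms
    value = Fraction(a(depth))
    for n in range(depth - 1, -1, -1):
        value = a(n) + Fraction(b(n + 1)) / value
    return float(value)

targets = {"e": math.e, "pi": math.pi, "phi": (1 + 5 ** 0.5) / 2}

# Enumerate small integer-coefficient families a(n) = p*n + q, b(n) = r*n + s
for p in range(3):
    for q in range(1, 4):
        for r in range(-2, 3):
            for s in range(-2, 3):
                if r == 0 and s == 0:
                    continue  # b would be identically zero
                try:
                    x = eval_cf(lambda n: p * n + q, lambda n: r * n + s)
                except (ZeroDivisionError, OverflowError):
                    continue
                for name, t in targets.items():
                    if abs(x - t) < 1e-9:
                        print(f"conjecture: a(n)={p}*n{q:+d}, b(n)={r}*n{s:+d} matches {name}")
```

Even this tiny search rediscovers the golden ratio’s classic continued fraction and a published Ramanujan Machine-style formula for e, while finding nothing for pi, which is itself informative. The real system searches vastly larger families with meet-in-the-middle and gradient-descent variants, and several of its machine-generated conjectures were later proved by hand.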
05 Nov 2021 | The Ode to Decoy | 00:37:33 | |
Andy and Dave discuss the latest in AI news and research, including: NATO releases its first AI strategy, which includes the announcement of a one billion euro “NATO innovation fund.” [0:52] Military research labs in the US and UK collaborate on autonomy and AI in a combined demonstration, integrating algorithms and automated workflows into military operations. [2:58] A report from CSET and MITRE identifies that the Department of Defense already has a number of AI and related experts, but that the current system hides this talent. [6:45] The National AI Research Resource Task Force partners with Stanford’s Human-Centered AI and the Stanford Law School to publish Building a National AI Research Resource: A Blueprint for the National Research Cloud. [6:45] And in a trio of “AI fails,” a traffic camera in the UK mistakes a woman for a car and issues a fine to the vehicle’s owner; [9:10] the Allen Institute for AI introduces Delphi as a step toward developing AI systems that behave ethically (though it sometimes thinks that it’s OK to murder everybody if it creates jobs); [10:07] and a WSJ report reveals that Facebook’s automated moderation tools were falling far short on accurate identification of hate speech and videos of violence and incitement. [12:22] Ahmed Elgammal from Rutgers teams up with Playform to compose two movements for Beethoven’s Tenth Symphony, for which the composer left only sketches before he died. And finally, Andy and Dave welcome Dr. Heather Wolters and Dr. Megan McBride to discuss their latest research on the psychology of (dis)information, presented in a pair of publications: The Psychology of (Dis)information: A Primer, on key psychological mechanisms, and The Psychology of (Dis)information: Case Studies, examining case studies and their implications. Follow the link below to visit our website and explore the links mentioned in the episode. | |||
23 Oct 2020 | PROGRESS Out of the Blue | 00:35:32 | |
Andy and Dave have a chat with Chad Jenkins, Professor of Computer Science and Engineering at the University of Michigan, Director of the Laboratory for Perception, RObotics, and Grounded REasoning SystemS (PROGRESS), and newest member of CNA's Board of Trustees. They discuss Chad's background and his current research at Michigan, which includes interactive robot systems and human-robot interaction. And then they discuss a variety of topics ranging from movement primitives, neural networks and fat tails, the issue of reinvention, students' experiences with AI research and the role of historical research, the culture of research in AI, and much more. Click here to visit our website and explore the links mentioned in the episode. | |||
09 Jul 2021 | Journey to the Cause of Reason | 00:36:31 | |
Andy and Dave discuss the latest in AI news, including research from the UC San Diego School of Medicine, which used an AI algorithm to analyze terabytes of gene expression data in response to viral infections, identifying 20 genes that predict the severity of a patient’s response (across many different viruses). Deputy Secretary of Defense Kathleen Hicks announces a new AI and Data Acceleration initiative, which includes operational data teams and flyaway technical experts. China says it has AI fighter-jet pilots that can beat human pilots in simulated dogfights. A study from Stanford estimates the density of CCTV cameras in large cities around the globe (by using computer vision algorithms on street-view image data). NIST holds a workshop on AI Measurement and Evaluation, with an interesting 22-page read-ahead document. Appen updates its State of AI and Machine Learning report, examining various business-related views and metrics on AI and showing a general maturing of the AI market. Researchers from Tübingen and Max Planck show that the behavioral difference between human and machine vision is narrowing, but still has room for improvement (particularly with out-of-distribution data). Researchers from Stanford, University College London, and MIT develop a counterfactual simulation model to provide quantitative predictions on how people think about causation, possibly serving as a bridge between psychology and AI. Adam Wagner uses a reinforcement learning approach to search for examples that would disprove conjectures in graph theory, and finds examples that disprove five such conjectures (a schematic of the search appears below). Justin Solomon’s Numerical Algorithms provides the core methods for machine learning. And Budiansky publishes a look at the life of Kurt Gödel, in Journey to the Edge of Reason. Follow the link below to visit our website and explore the links mentioned in the episode. | |||
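Wagner’s method is, at heart, a derivative-free search: sample candidate graphs from a parameterized distribution, score each by how close it comes to violating the conjectured inequality, keep the elite fraction, and shift the distribution toward them. A simplified schematic of that cross-entropy loop, updating edge probabilities directly rather than through Wagner’s neural-network policy; the score function is a placeholder, not one of his actual conjectures:

```python
import numpy as np

def conjecture_score(adj: np.ndarray) -> float:
    # Placeholder objective: positive only if the graph violates the conjectured
    # inequality (Wagner scores quantities like largest eigenvalue + matching number)
    return -abs(float(adj.sum()) - 12.0)  # dummy stand-in so the sketch runs end to end

def cem_search(n=8, iters=100, batch=256, elite_frac=0.1, lr=0.3, seed=0):
    rng = np.random.default_rng(seed)
    iu = np.triu_indices(n, k=1)
    p = np.full(len(iu[0]), 0.5)            # one inclusion probability per possible edge
    for _ in range(iters):
        bits = rng.random((batch, len(p))) < p       # sample a batch of graphs
        graphs, scores = [], []
        for row in bits:
            adj = np.zeros((n, n))
            adj[iu] = row
            adj += adj.T
            graphs.append(adj)
            scores.append(conjecture_score(adj))
        order = np.argsort(scores)
        if scores[order[-1]] > 0:
            return graphs[order[-1]]                 # counterexample found
        elite = bits[order[-int(batch * elite_frac):]]
        p = (1 - lr) * p + lr * elite.mean(axis=0)   # shift sampling toward high scorers
    return None
```

The appeal is that the same loop works for any conjecture that can be phrased as “no graph makes this quantity positive”: finding a single positive-scoring graph settles the question.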
03 Jul 2020 | Dust in the Mind | 00:31:47 | |
For COVID-related AI news, Andy and Dave discuss the Stanford Social Innovation Review report on the problems with COVID-19 AI solutions (e.g., data gaps, inconsistency, etc.), and how to fix them. The National Endowment for Science Technology and the Arts (NESTA) provides a thorough report on AI and COVID-19, whose findings generally suggest that barriers might exist for the employment of AI in tackling COVID-19. In regular AI news, the US has its first known case of an erroneous arrest due to facial recognition technology, with the arrest of Robert Williams in Detroit in January 2020 (disclosed on 24 June). The European Commission white paper on AI gets two more responses, from Facebook and from the Center for Data Innovation. Sergei Ivanov provides a breakdown of contributors for the upcoming International Conference on Machine Learning. Researchers have identified a new threat vector against neural networks, one that increases energy consumption and latency. And a follow-up with the PULSE upsampling tool shows a bias toward producing white faces, likely inherited from the dataset used to train its underlying StyleGAN model. In research, Denny Britz examines replicability issues in AI research, and how academic incentive systems are driving the AI research community toward certain types of research. The Marine Corps University Journal becomes the Journal of Advanced Military Studies, and its first issue focuses on innovation and future war. The Combat Studies Institute Press publishes On Strategy: A Primer, including a chapter on future war by Mick Ryan. And Major Nicholas Narbutovskih pens Dust, a story about two warring factions with different approaches to autonomous systems. Click here to visit our website and explore the links mentioned in the episode. | |||
24 Feb 2023 | All Good Things | 00:28:29 | |
For the final (for now?) episode of AI with AI, Andy and Dave discuss the latest in AI news and research, including a political declaration from the US Department of State on the responsible military use of AI and autonomy. NATO begins work on an AI certification standard. The IEEE introduces a new program that provides free access to AI ethics and governance standards. Reported in February but performed in December, a joint Department of Defense team conducted 12 flight tests (over 17 hours) in which AI agents piloted Lockheed Martin’s X-62A VISTA, an F-16 variant. Andy provides a run-down of a large number of recent ChatGPT-related stories. Wolfram “explains” how ChatGPT works. Paul Scharre publishes Four Battlegrounds: Power in the Age of AI. And to come full circle: we began this podcast six years ago with the story of AlphaGo beating the world champion, so we close the podcast with news that a non-professional Go player, Kellin Pelrine, beat a top AI system 14 games to one, having discovered a “not super-difficult” method for humans to beat the machines. A heartfelt thanks to you all for listening over the years!