This week on the podcast, we discuss how the conversation about AI risks seems to be shifting away from the catastrophic, existential, wiping-out-of-humanity type of scenarios.
While X-risk proponents are still out there, media coverage, regulators, and the public at large seem to be homing in on more immediate and tangible AI concerns like discrimination, privacy violations, and misinformation, to name a few.
We explore the reasons for this shift, which include the fact that many people now have first-hand experience of AI products – and their limitations.
Credits:
Article about over-focus on existential risk (Scientific American) https://www.scientificamerican.com/article/we-need-to-focus-on-ais-real-harms-not-imaginary-existential-risks/
Article about ASIC trial (Crikey) https://www.crikey.com.au/2024/09/03/ai-worse-summarising-information-humans-government-trial/
Article about California AI safety bill SB 1047 (Politico) https://www.politico.com/news/2024/09/29/gavin-veto-ai-safety-bill-00181583
Article about Australia’s Voluntary AI Safety Standard (elevenM) https://elevenm.com.au/blog/breaking-down-the-voluntary-ai-safety-standard/
Transcript
This is an automatically generated transcript. We make our best efforts to check that it is an accurate reflection of the episode, but it may contain some errors and unedited content.