AI Truth Ethics Podcast

Alex Tsakiris
AI truth or dare... let's uncover the hidden potential and risks of AI Ethics. www.aitruthethics.com

Available episodes

5 of 7
  • How AI is Humanizing Work |07|
    Forget about AI taking your job; instead, imagine AI making your work as fulfilling and exciting as you always hoped it would be. Dan Turchin, CEO of PeopleReign, sat down with Alex Tsakiris of the AI Truth Ethics podcast to discuss the real-world impact of AI in the workplace. Their conversation offers a grounded perspective on AI's role in enhancing human potential rather than replacing it.
    1. AI as a Tool for Human Enhancement and Work Satisfaction
    Turchin paints a compelling vision of how AI can transform our work lives: "I believe that the true celebration of humanness at work is if all the friction was gone. And you look at your calendar and it's like all things that you derive energy from, like the things that you were hired to do, that you love doing, that make you do your best work. Like, what if, just crazy thought experiment, what if that was all that work consisted of?"
    This perspective shifts the narrative from fear of replacement to the exciting possibility of AI removing mundane tasks, allowing us to focus on work that truly fulfills us. Turchin further emphasizes: "It truly is complementary, and I think both of us will be doing a service to humanity if we can allay fears that the bots are coming for you... It couldn't be further from the truth."
    2. The Importance of Transparency in AI
    Alex Tsakiris introduces a compelling concept: "Transparency is all you need... I don't need your truth, I don't need Gemini's truth, just like I don't need Perplexity's truth. What I really want to find is my truth, but you can assist me."
    This highlights the need for AI systems to be transparent about their sources and reasoning, empowering users to make informed decisions rather than passively accepting AI-generated information, misinformation included.
    3. Ethical Considerations in Enterprise AI Implementation
    Turchin reveals the careful approach his company takes to ensure responsible AI use: "We require them to have a human review everything, every task, every capability AI has, because we believe that in addition to us being responsible for what that AI agent can do, the employer has an obligation to protect the health and safety of the employee."
    This level of caution and human oversight is crucial as AI becomes more integrated into workplace processes, especially in sensitive areas like HR.
    4. The AI Truth Case: A New Frontier
    Tsakiris proposes an intriguing future direction for AI development: "What I'm pushing towards is really trying to understand what I'm calling the AI truth case... what would it mean if we had an AI-enhanced way of determining the truth?"
    This concept suggests a potential role for AI in helping us navigate the complex information landscape, not by providing absolute truths, but by offering tools to better assess and understand information.
    What do you think? This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aitruthethics.com
    --------  
    57:01
  • Christof Koch, Damn White Crows! |06|
    Artificial General Intelligence (AGI) is sidestepping the consciousness elephant that isn't in the room, the brain, or anywhere else. As we push the boundaries of machine intelligence, we will inevitably come back to the most fundamental questions about our own experience. And as AGI inches closer to reality, these questions become not just philosophical musings but practical imperatives.
    This interview with neuroscience heavyweight Christof Koch brings this tension into sharp focus. While Koch's work on the neural correlates of consciousness has been groundbreaking, his stance on consciousness research outside his immediate field raises critical questions about the nature of consciousness, questions that AGI developers can't afford to ignore.
    Four key takeaways from this conversation:
    1. The Burden of Proof in Consciousness Studies
    Koch argues for a high standard of evidence when it comes to claims about consciousness existing independently of the brain. However, this stance raises questions about scientific objectivity: "Extraordinary claims require extraordinary evidence... I haven't seen any [white crows], so far all the data I've looked at, I've looked at a lot of data. I've never seen a white crow."
    Key Question: Does the demand for "extraordinary evidence" have a place in unbiased scientific inquiry, especially with regard to published peer-reviewed work?
    2. The Challenge of Interdisciplinary Expertise
    Despite Koch's eminence in neuroscience, the interview reveals potential gaps in his knowledge of near-death experience (NDE) research: "I work with humans, I work with animals. I know what it is. EEG, I know the SNR, right? So I, I know all these issues."
    Key Question: How do we balance respect for expertise in one field with the need for deep thinking about contradictory data sets? Should Koch have degraded gracefully?
    3. The Limitations of "Agree to Disagree" in Scientific Discourse
    When faced with contradictory evidence, Koch resorts to a diplomatic but potentially unscientific stance: "I guess we just have to disagree."
    Key Question: "Agreeing to disagree" doesn't carry much weight in scientific debates, so why did my AI assistant go there?
    4. The "White Crow" Dilemma in Consciousness Research
    The interview touches on William James' famous "white crow" metaphor, highlighting the tension between individual cases and cumulative evidence: "One instance of it would violate it. One, two instances of, yeah, I totally agree. But we, I haven't seen any..."
    Key Question: Can AI outperform humans in dealing with contradictory evidence?
    Thoughts?
    --------  
    1:04:22
  • Ben Byford, Machine Ethics Podcast |05|
    Another week in AI, and more droning on about how superintelligence is just around the corner and human morals and ethical values are out the window. Maybe not. In this episode, Alex Tsakiris of Skeptiko/AI Truth Ethics and Ben Byford of the Machine Ethics podcast engage in a thought-provoking dialogue that challenges our assumptions about AI's role in discerning truth, the possibility of machine consciousness, and the future of human agency in an increasingly automated world. Their discussion offers a timely counterpoint to the AGI hype cycle.
    Key Points:
    * AI as an Arbiter of Truth: Promise or Peril? Alex posits that AI can serve as an unbiased arbiter of truth, while Ben cautions against potential dogmatism.
    Alex: "AI does not b******t their way out of stuff. AI gives you the logical flow of how the pieces fit together."
    Implication for AGI: If AI can indeed serve as a reliable truth arbiter, it could revolutionize decision-making processes in fields from science to governance. However, the risk of encoded biases being amplified at AGI scale is significant.
    * The Consciousness Conundrum: A Barrier to True AGI? The debate touches on whether machine consciousness is possible or fundamentally beyond computational reach.
    Alex: "The best evidence suggests that AI will not be sentient because consciousness in some way we don't understand is outside of time-space, and we can prove that experimentally."
    AGI Ramification: If consciousness is indeed non-computational, it could represent a hard limit to AGI capabilities, challenging the notion of superintelligence as commonly conceived.
    * Universal Ethics vs. Cultural Relativism in AI Systems. They clash over the existence of universal ethical principles and their implementability in AI.
    Alex: "There is an underlying moral imperative."
    Ben: "I don't think there needs to be…"
    Superintelligence Consideration: The resolution of this debate has profound implications for how we might align a superintelligent AI with human values: is there a universal ethical framework we can encode, or are we limited to culturally relative implementations?
    * AI's Societal Role: Tool for Progress or Potential Hindrance? The discussion explores how AI should be deployed and its potential impacts on human agency and societal evolution.
    Ben: "These are the sorts of things we don't want AI running, because we actually want to change and evolve."
    Future of AGI: This point raises critical questions about the balance between leveraging AGI capabilities and preserving human autonomy in shaping our collective future.
    --------  
    1:35:44
  • Nathan Labenz from the Cognitive Revolution podcast |04|
    In the clamor surrounding AI ethics and safety, are we missing a crucial piece of the puzzle: the role of AI in uncovering and disseminating truth? That's the question I posed to Nathan Labenz from the Cognitive Revolution podcast.
    Key points:
    The AI Truth Revolution
    Alex Tsakiris argues that AI has the potential to become a powerful tool for uncovering truth, especially in controversial areas: "To me, that's what AI is about... there's an opportunity for an arbiter of truth, ultimately an arbiter of truth, when it has the authority to say no. Their denial of this does not hold up to careful scrutiny."
    This perspective suggests that AI could challenge established narratives in ways that humans, with our biases and vested interests, often fail to do.
    The Tension in AI Development
    Nathan Labenz highlights the complex trade-offs involved in developing AI systems: "I think there's just a lot of tensions in the development of these AI systems... Over and over again, we find these trade-offs where we can push one good thing farther, but it comes with the cost of another good thing."
    This tension is particularly evident when it comes to truth-seeking versus other priorities like safety or user engagement.
    The Transparency Problem
    Both discussants express concern about the lack of transparency in major AI systems. Alex points out: "Google shadow banning, which has been going on for 10 years, indeed, demonetization, you can wake up tomorrow and have one of your videos... demonetized, and you have no recourse."
    This lack of transparency raises serious questions about the role of AI in shaping public discourse and access to information.
    The Consciousness Conundrum
    The conversation takes a philosophical turn when discussing AI consciousness and its implications for ethics. Alex posits: "If consciousness is outside of time-space, I think that kind of tees up... maybe we are really talking about something completely different."
    This perspective challenges conventional notions of AI capabilities and the ethical frameworks we use to approach AI development.
    The Stakes Are High
    Nathan encapsulates the potential risks associated with advanced AI systems: "I don't find any law of nature out there that says that we can't, like, blow ourselves up with AI. I don't think it's definitely gonna happen, but I do think it could happen."
    While this quote acknowledges the safety concerns that dominate AI ethics discussions, the broader conversation suggests that the more immediate disruption might come from AI's potential to challenge our understanding of truth and transparency.
    --------  
    1:41:50
  • Shadow Banning and AI: When Transparency Goes Dark |03|
    Last time, we saw a demonstration of AI misinformation and deception, but this is worse. Shadow banning has long been suspected, but it's hard to prove. Is that nobody malcontent really being shadow banned, or does he deserve to be on page four of a Google search for his name? This might be another instance of the AI silver lining effect: LLMs seem to have no problem spotting these shenanigans.
    Key Points:
    1. The Personal Touch of Shadow Banning
    "Hey, Gemini, about six months ago I was not being shadow banned by Google slash Gemini, and even though I'm certainly not a big deal or a high-profile person, I was able to get a reasonable bio about myself from Gemini..."
    Our host, Alex Tsakiris, found himself in the crosshairs of shadow banning.
    2. The Demonstration
    "You said you didn't know anything about this person. And then, when I pasted in the bio, you verified every point of the biography, and even added some new ones..."
    Gemini is pretty loose with the "truthful and transparent."
    3. More Hidden
    "This hidden alignment problem is even worse than the misinformation and deception we saw last time… this is harder to spot."
    The implications of shadow banning within AI dialogues not only push the boundaries of AI ethics but may also present legal problems for those engaged in the practice.
    --------  
    13:27


About AI Truth Ethics Podcast

AI truth or dare... let's uncover the hidden potential and risks of AI Ethics. www.aitruthethics.com
Podcast website
