80,000 Hours Podcast

Rob, Luisa, and the 80,000 Hours team
Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin and Luisa Rodriguez.

Available episodes

5 of 271
  • If digital minds could suffer, how would we ever know? (Article)
    “I want everyone to understand that I am, in fact, a person.” Those words were produced by the AI model LaMDA as a reply to Blake Lemoine in 2022. Based on the Google engineer’s interactions with the model as it was under development, Lemoine became convinced it was sentient and worthy of moral consideration — and decided to tell the world.
    Few experts in machine learning, philosophy of mind, or other relevant fields have agreed. And for our part at 80,000 Hours, we don’t think it’s very likely that large language models like LaMDA are sentient — that is, we don’t think they can have good or bad experiences — in a significant way.
    But we think you can’t dismiss the issue of the moral status of digital minds, regardless of your beliefs about the question. There are major errors we could make in at least two directions:
    We may create many, many AI systems in the future. If these systems are sentient, or otherwise have moral status, it would be important for humanity to consider their welfare and interests.
    It’s possible the AI systems we will create can’t or won’t have moral status. Then it could be a huge mistake to worry about the welfare of digital minds, and doing so might contribute to an AI-related catastrophe.
    And we’re currently unprepared to face this challenge. We don’t have good methods for assessing the moral status of AI systems. We don’t know what to do if millions of people or more believe, like Lemoine, that the chatbots they talk to have internal experiences and feelings of their own. We don’t know whether efforts to control AI may lead to extreme suffering.
    We believe this is a pressing world problem. It’s hard to know what to do about it or how good the opportunities to work on it are likely to be. But there are some promising approaches. We propose building a field of research to understand digital minds, so we’ll be better able to navigate these potentially massive issues if and when they arise.
    This article narration by the author (Cody Fenwick) explains in more detail why we think this is a pressing problem, what we think can be done about it, and how you might pursue this work in your career. We also discuss a series of possible objections to thinking this is a pressing world problem.
    You can read the full article, Understanding the moral status of digital minds, on the 80,000 Hours website.
    Chapters:
    Introduction (00:00:00)
    Understanding the moral status of digital minds (00:00:58)
    Summary (00:03:31)
    Our overall view (00:04:22)
    Why might understanding the moral status of digital minds be an especially pressing problem? (00:05:59)
    Clearing up common misconceptions (00:12:16)
    Creating digital minds could go very badly - or very well (00:14:13)
    Dangers for digital minds (00:14:41)
    Dangers for humans (00:16:13)
    Other dangers (00:17:42)
    Things could also go well (00:18:32)
    We don't know how to assess the moral status of AI systems (00:19:49)
    There are many possible characteristics that give rise to moral status: consciousness, sentience, agency, and personhood (00:21:39)
    Many plausible theories of consciousness could include digital minds (00:24:16)
    The strongest case for the possibility of sentient digital minds: whole brain emulation (00:28:55)
    We can't rely on what AI systems tell us about themselves: behavioural tests, theory-based analysis, animal analogue comparisons, brain-AI interfacing (00:32:00)
    The scale of this issue might be enormous (00:36:08)
    Work on this problem is neglected but seems tractable: impact-guided research, technical approaches, and policy approaches (00:43:35)
    Summing up so far (00:52:22)
    Arguments against the moral status of digital minds as a pressing problem (00:53:25)
    Two key cruxes (00:53:31)
    Maybe this problem is intractable (00:54:16)
    Maybe this issue will be solved by default (00:58:19)
    Isn't risk from AI more important than the risks to AIs? (01:00:45)
    Maybe current AI progress will stall (01:02:36)
    Isn't this just too crazy? (01:03:54)
    What can you do to help? (01:05:10)
    Important considerations if you work on this problem (01:13:00)
    --------  
    1:14:30
  • #132 Classic episode – Nova DasSarma on why information security may be critical to the safe development of AI systems
    If a business has spent $100 million developing a product, it’s a fair bet that they don’t want it stolen in two seconds and uploaded to the web where anyone can use it for free.
    This problem exists in extreme form for AI companies. These days, the electricity and equipment required to train cutting-edge machine learning models that generate uncanny human text and images can cost tens or hundreds of millions of dollars. But once trained, such models may be only a few gigabytes in size and run just fine on ordinary laptops.
    Today’s guest, the computer scientist and polymath Nova DasSarma, works on the security team at the AI company Anthropic, focusing on computer and information security. One of her jobs is to stop hackers from exfiltrating Anthropic’s incredibly expensive intellectual property, as recently happened to Nvidia.
    Rebroadcast: this episode was originally released in June 2022.
    Links to learn more, highlights, and full transcript.
    As she explains, given models’ small size, the need to store such models on internet-connected servers, and the poor state of computer security in general, this is a serious challenge.
    The worries aren’t purely commercial though. This problem looms especially large for the growing number of people who expect that in coming decades we’ll develop so-called artificial ‘general’ intelligence systems that can learn and apply a wide range of skills all at once, and thereby have a transformative effect on society.
    If aligned with the goals of their owners, such general AI models could operate like a team of super-skilled assistants, going out and doing whatever wonderful (or malicious) things are asked of them. This might represent a huge leap forward for humanity, though the transition to a very different new economy and power structure would have to be handled delicately.
    If unaligned with the goals of their owners or humanity as a whole, such broadly capable models would naturally ‘go rogue,’ breaking their way into additional computer systems to grab more computing power — all the better to pursue their goals and make sure they can’t be shut off.
    As Nova explains, in either case, we don’t want such models disseminated all over the world before we’ve confirmed they are deeply safe and law-abiding, and have figured out how to integrate them peacefully into society. In the first scenario, premature mass deployment would be risky and destabilising. In the second scenario, it could be catastrophic — perhaps even leading to human extinction if such general AI systems turn out to be able to self-improve rapidly rather than slowly, something we can only speculate on at this point.
    If highly capable general AI systems are coming in the next 10 or 20 years, Nova may be flying below the radar with one of the most important jobs in the world.
    We’ll soon need the ability to ‘sandbox’ (i.e. contain) models with a wide range of superhuman capabilities, including the ability to learn new skills, for a period of careful testing and limited deployment — preventing the model from breaking out, and criminals from breaking in. Nova and her colleagues are trying to figure out how to do this, but as this episode reveals, even the state of the art is nowhere near good enough.
    Chapters:
    Cold open (00:00:00)
    Rob's intro (00:00:52)
    The interview begins (00:02:44)
    Why computer security matters for AI safety (00:07:39)
    State of the art in information security (00:17:21)
    The hack of Nvidia (00:26:50)
    The most secure systems that exist (00:36:27)
    Formal verification (00:48:03)
    How organisations can protect against hacks (00:54:18)
    Is ML making security better or worse? (00:58:11)
    Motivated 14-year-old hackers (01:01:08)
    Disincentivising actors from attacking in the first place (01:05:48)
    Hofvarpnir Studios (01:12:40)
    Capabilities vs safety (01:19:47)
    Interesting design choices with big ML models (01:28:44)
    Nova’s work and how she got into it (01:45:21)
    Anthropic and career advice (02:05:52)
    $600M Ethereum hack (02:18:37)
    Personal computer security advice (02:23:06)
    LastPass (02:31:04)
    Stuxnet (02:38:07)
    Rob's outro (02:40:18)
    Producer: Keiran Harris
    Audio mastering: Ben Cordell and Beppe Rådvik
    Transcriptions: Katy Moore
    --------  
    2:41:11
  • #138 Classic episode – Sharon Hewitt Rawlette on why pleasure and pain are the only things that intrinsically matter
    What in the world is intrinsically good — good in itself even if it has no other effects? Over the millennia, people have offered many answers: joy, justice, equality, accomplishment, loving god, wisdom, and plenty more.
    The question is a classic that makes for great dorm-room philosophy discussion. But it’s hardly just of academic interest. The issue of what (if anything) is intrinsically valuable bears on every action we take, whether we’re looking to improve our own lives, or to help others. The wrong answer might lead us to the wrong project and render our efforts to improve the world entirely ineffective.
    Today’s guest, Sharon Hewitt Rawlette — philosopher and author of The Feeling of Value: Moral Realism Grounded in Phenomenal Consciousness — wants to resuscitate an answer to this question that is as old as philosophy itself.
    Rebroadcast: this episode was originally released in September 2022.
    Links to learn more, highlights, and full transcript.
    That idea, in a nutshell, is that there is only one thing of true intrinsic value: positive feelings and sensations. And similarly, there is only one thing that is intrinsically of negative value: suffering, pain, and other unpleasant sensations.
    Lots of other things are valuable too: friendship, fairness, loyalty, integrity, wealth, patience, houses, and so on. But they are only instrumentally valuable — that is to say, they’re valuable as means to the end of ensuring that all conscious beings experience more pleasure and other positive sensations, and less suffering.
    As Sharon notes, from Athens in 400 BC to Britain in 1850, the idea that only subjective experiences can be good or bad in themselves — a position known as ‘philosophical hedonism’ — has been one of the most enduringly popular ideas in ethics.
    And few will be taken aback by the notion that, all else equal, more pleasure is good and less suffering is bad. But can they really be the only intrinsically valuable things?
    Over the 20th century, philosophical hedonism became increasingly controversial in the face of some seemingly very counterintuitive implications. For this reason the famous philosopher of mind Thomas Nagel called The Feeling of Value “a radical and important philosophical contribution.”
    So what convinces Sharon that philosophical hedonism deserves another go? In today’s interview with host Rob Wiblin, Sharon explains the case for a theory of value grounded in subjective experiences, and why she believes these counterarguments are misguided. A philosophical hedonist shouldn’t get in an experience machine, nor override an individual’s autonomy, except in situations so different from the classic thought experiments that it no longer seems strange they would do so.
    Chapters:
    Cold open (00:00:00)
    Rob’s intro (00:00:41)
    The interview begins (00:04:27)
    Metaethics (00:05:58)
    Anti-realism (00:12:21)
    Sharon's theory of moral realism (00:17:59)
    The history of hedonism (00:24:53)
    Intrinsic value vs instrumental value (00:30:31)
    Egoistic hedonism (00:38:12)
    Single axis of value (00:44:01)
    Key objections to Sharon’s brand of hedonism (00:58:00)
    The experience machine (01:07:50)
    Robot spouses (01:24:11)
    Most common misunderstanding of Sharon’s view (01:28:52)
    How might a hedonist actually live (01:39:28)
    The organ transplant case (01:55:16)
    Counterintuitive implications of hedonistic utilitarianism (02:05:22)
    How could we discover moral facts? (02:19:47)
    Rob’s outro (02:24:44)
    Producer: Keiran Harris
    Audio mastering: Ryan Kessler
    Transcriptions: Katy Moore
    --------  
    2:25:43
  • #134 Classic episode – Ian Morris on what big-picture history teaches us
    Wind back 1,000 years and the moral landscape looks very different to today. Most farming societies thought slavery was natural and unobjectionable, premarital sex was an abomination, women should obey their husbands, and commoners should obey their monarchs.
    Wind back 10,000 years and things look very different again. Most hunter-gatherer groups thought men who got too big for their britches needed to be put in their place rather than obeyed, and lifelong monogamy could hardly be expected of men or women.
    Why such big systematic changes — and why these changes specifically?
    That's the question bestselling historian Ian Morris takes up in his book, Foragers, Farmers, and Fossil Fuels: How Human Values Evolve. Ian has spent his academic life studying long-term history, trying to explain the big-picture changes that play out over hundreds or thousands of years.
    Rebroadcast: this episode was originally released in July 2022.
    Links to learn more, highlights, and full transcript.
    There are a number of possible explanations one could offer for the wide-ranging shifts in opinion on the 'right' way to live. Maybe the natural sciences progressed and people realised their previous ideas were mistaken? Perhaps a few persuasive advocates turned the course of history with their revolutionary arguments? Maybe everyone just got nicer?
    In Foragers, Farmers and Fossil Fuels Ian presents a provocative alternative: human culture gradually evolves towards whatever system of organisation allows a society to harvest the most energy, and we then conclude that system is the most virtuous one. Egalitarian values helped hunter-gatherers hunt and gather effectively. Once farming was developed, hierarchy proved to be the social structure that produced the most grain (and best repelled nomadic raiders). And in the modern era, democracy and individuality have proven to be more productive ways to collect and exploit fossil fuels.
    On this theory, it's technology that drives moral values much more than moral philosophy. Individuals can try to persist with deeply held values that limit economic growth, but they risk being rendered irrelevant as more productive peers in their own society accrue wealth and power. And societies that fail to move with the times risk being conquered by more pragmatic neighbours that adapt to new technologies and grow in population and military strength.
    There are many objections one could raise to this theory, many of which we put to Ian in this interview. But the question is a highly consequential one: if we want to guess what goals our descendants will pursue hundreds of years from now, it would be helpful to have a theory for why our ancestors mostly thought one thing, while we mostly think another.
    Big though it is, the driver of human values is only one of several major questions Ian has tackled through his career. In this classic episode, we discuss all of Ian's major books.
    Chapters:
    Rob's intro (00:00:53)
    The interview begins (00:02:30)
    Geography is Destiny (00:03:38)
    Why the West Rules—For Now (00:12:04)
    War! What is it Good For? (00:28:19)
    Expectations for the future (00:40:22)
    Foragers, Farmers, and Fossil Fuels (00:53:53)
    Historical methodology (01:03:14)
    Falsifiable alternative theories (01:15:59)
    Archaeology (01:22:56)
    Energy extraction technology as a key driver of human values (01:37:43)
    Allowing people to debate about values (02:00:16)
    Can productive wars still occur? (02:13:28)
    Where is history contingent and where isn’t it? (02:30:23)
    How Ian thinks about the future (03:13:33)
    Macrohistory myths (03:29:51)
    Ian’s favourite archaeology memory (03:33:19)
    The most unfair criticism Ian’s ever received (03:35:17)
    Rob's outro (03:39:55)
    Producer: Keiran Harris
    Audio mastering: Ben Cordell
    Transcriptions: Katy Moore
    --------  
    3:40:53
  • #140 Classic episode – Bear Braumoeller on the case that war isn’t in decline
    Is war in long-term decline? Steven Pinker's The Better Angels of Our Nature brought this previously obscure academic question to the centre of public debate, and pointed to rates of death in war to argue energetically that war is on the way out.
    But that idea divides war scholars and statisticians, and so Better Angels has prompted a spirited debate, with datasets and statistical analyses exchanged back and forth year after year. The lack of consensus has left a somewhat bewildered public (including host Rob Wiblin) unsure quite what to believe.
    Today's guest, professor of political science Bear Braumoeller, is one of the scholars who believes we lack convincing evidence that warlikeness is in long-term decline. He collected the analysis that led him to that conclusion in his 2019 book, Only the Dead: The Persistence of War in the Modern Age.
    Rebroadcast: this episode was originally released in November 2022.
    Links to learn more, highlights, and full transcript.
    The question is of great practical importance. The US and PRC are entering a period of renewed great power competition, with Taiwan as a potential trigger for war, and Russia is once more invading and attempting to annex the territory of its neighbours.
    If war has been going out of fashion since the start of the Enlightenment, we might console ourselves that however nerve-wracking these present circumstances may feel, modern culture will throw up powerful barriers to another world war. But if we're as war-prone as we ever have been, one need only inspect the record of the 20th century to recoil in horror at what might await us in the 21st.
    Bear argues that the second reaction is the appropriate one. The world has gone up in flames many times through history, with roughly 0.5% of the population dying in the Napoleonic Wars, 1% in World War I, 3% in World War II, and perhaps 10% during the Mongol conquests. And with no reason to think similar catastrophes are any less likely today, complacency could lead us to sleepwalk into disaster.
    He gets to this conclusion primarily by analysing the datasets of the decades-old Correlates of War project, which aspires to track all interstate conflicts and battlefield deaths since 1815. In Only the Dead, he chops up and inspects this data dozens of different ways, to test whether there are any shifts over time which seem larger than what could be explained by chance variation alone (a simple sketch of this kind of test follows the show notes below).
    In a nutshell, Bear simply finds no general trend in either direction from 1815 through today. It seems like, as philosopher George Santayana lamented in 1922, "only the dead have seen the end of war."
    In today's conversation, Bear and Rob discuss all of the above in more detail than even a usual 80,000 Hours podcast episode, as well as:
    Why haven't modern ideas about the immorality of violence led to the decline of war, when it's such a natural thing to expect?
    What would Bear's critics say in response to all this?
    What do the optimists get right?
    How does one do proper statistical tests for events that are clumped together, like war deaths?
    Why are deaths in war so concentrated in a handful of the most extreme events?
    Did the ideas of the Enlightenment promote nonviolence, on balance?
    Were early states more or less violent than groups of hunter-gatherers?
    If Bear is right, what can be done?
    How did the 'Concert of Europe' or 'Bismarckian system' maintain peace in the 19th century?
    Which wars are remarkable but largely unknown?
    Chapters:
    Cold open (00:00:00)
    Rob's intro (00:01:01)
    The interview begins (00:05:37)
    Only the Dead (00:08:33)
    The Enlightenment (00:18:50)
    Democratic peace theory (00:28:26)
    Is religion a key driver of war? (00:31:32)
    International orders (00:35:14)
    The Concert of Europe (00:44:21)
    The Bismarckian system (00:55:49)
    The current international order (01:00:22)
    The Better Angels of Our Nature (01:19:36)
    War datasets (01:34:09)
    Seeing patterns in data where none exist (01:47:38)
    Change-point analysis (01:51:39)
    Rates of violent death throughout history (01:56:39)
    War initiation (02:05:02)
    Escalation (02:20:03)
    Getting massively different results from the same data (02:30:45)
    How worried we should be (02:36:13)
    Most likely ways Only the Dead is wrong (02:38:31)
    Astonishing smaller wars (02:42:45)
    Rob’s outro (02:47:13)
    Producer: Keiran Harris
    Audio mastering: Ryan Kessler
    Transcriptions: Katy Moore
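    To make "shifts over time larger than what could be explained by chance variation alone" concrete, here is a minimal sketch of one simple way such a question can be tested: a permutation test on yearly battle-death rates around a candidate break year. This is illustrative only. It is not Braumoeller's actual procedure (his book uses the Correlates of War data and more sophisticated change-point methods), the function name permutation_test is ours, and the yearly rates below are invented numbers.

```python
import random

def permutation_test(rates, break_index, n_permutations=10_000, seed=0):
    """P-value for the observed before/after difference in mean yearly rates.

    Shuffling the years many times shows how often a difference at least as
    large as the observed one arises purely by chance.
    """
    rng = random.Random(seed)
    before, after = rates[:break_index], rates[break_index:]
    observed = abs(sum(after) / len(after) - sum(before) / len(before))
    at_least_as_large = 0
    for _ in range(n_permutations):
        shuffled = rates[:]
        rng.shuffle(shuffled)
        b, a = shuffled[:break_index], shuffled[break_index:]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            at_least_as_large += 1
    return at_least_as_large / n_permutations

# Hypothetical yearly battle deaths per 100,000 world population (not real data).
rates = [0.2, 1.5, 0.0, 0.3, 12.0, 0.1, 0.0, 4.2, 0.0, 0.5, 0.1, 30.0, 0.2, 0.0, 0.4, 0.1]
p = permutation_test(rates, break_index=8)
print(f"p-value for a shift at the midpoint: {p:.3f}")  # a large p means the shift is indistinguishable from chance
```

    Because war deaths are heavily clumped in a few extreme years, a large apparent drop in the mean can still come back with a large p-value, which is the flavour of result the episode discusses.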
    --------  
    2:48:03

About 80,000 Hours Podcast

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin and Luisa Rodriguez.