Available episodes
5 of 175
Dan Hendrycks on Why Evolution Favors AIs over Humans
Dan Hendrycks joins the podcast to discuss evolutionary dynamics in AI development and how we could develop AI safely. You can read more about Dan's work at https://www.safe.ai
Timestamps:
00:00 Corporate AI race
06:28 Evolutionary dynamics in AI
25:26 Why evolution applies to AI
50:58 Deceptive AI
1:06:04 Competition erodes safety
1:17:40 Evolutionary fitness: humans versus AI
1:26:32 Different paradigms of AI risk
1:42:57 Interpreting AI systems
1:58:03 Honest AI and uncertain AI
2:06:52 Empirical and conceptual work
2:12:16 Losing touch with reality
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
8 June 2023
2:26:37
Roman Yampolskiy on Objections to AI Safety
Roman Yampolskiy joins the podcast to discuss various objections to AI safety, impossibility results for AI, and how much risk civilization should accept from emerging technologies. You can read more about Roman's work at http://cecs.louisville.edu/ry/
Timestamps:
00:00 Objections to AI safety
15:06 Will robots make AI risks salient?
27:51 Was early AI safety research useful?
37:28 Impossibility results for AI
47:25 How much risk should we accept?
1:01:21 Exponential or S-curve?
1:12:27 Will AI accidents increase?
1:23:56 Will we know who was right about AI?
1:33:33 Difference between AI output and AI model
26 May 2023
1:42:13
Nathan Labenz on How AI Will Transform the Economy
Nathan Labenz joins the podcast to discuss the economic effects of AI on growth, productivity, and employment. We also talk about whether AI might have catastrophic effects on the world. You can read more about Nathan's work at https://www.cognitiverevolution.ai
Timestamps:
00:00 Economic transformation from AI
11:15 Productivity increases from technology
17:44 AI effects on employment
28:43 Life without jobs
38:42 Losing contact with reality
42:31 Catastrophic risks from AI
53:52 Scaling AI training runs
1:02:39 Stable opinions on AI?
11 May 2023
1:06:54
Nathan Labenz on the Cognitive Revolution, Red Teaming GPT-4, and Potential Dangers of AI
Nathan Labenz joins the podcast to discuss the cognitive revolution, his experience red teaming GPT-4, and the potential near-term dangers of AI. You can read more about Nathan's work at https://www.cognitiverevolution.ai
Timestamps:
00:00 The cognitive revolution
07:47 Red teaming GPT-4
24:00 Coming to believe in transformative AI
30:14 Is AI depth or breadth most impressive?
42:52 Potential near-term dangers from AI
4 May 2023
59:43
Maryanna Saenko on Venture Capital, Philanthropy, and Ethical Technology
Maryanna Saenko joins the podcast to discuss how venture capital works, how to fund innovation, and what the fields of investing and philanthropy could learn from each other. You can read more about Maryanna's work at https://future.ventures
Timestamps:
00:00 How does venture capital work?
09:01 Failure and success for startups
13:22 Is overconfidence necessary?
19:20 Repeat entrepreneurs
24:38 Long-term investing
30:36 Feedback loops from investments
35:05 Timing investments
38:35 The hardware-software dichotomy
42:19 Innovation prizes
45:43 VC lessons for philanthropy
51:03 Creating new markets
54:01 Investing versus philanthropy
56:14 Technology preying on human frailty
1:00:55 Are good ideas getting harder to find?
1:06:17 Artificial intelligence
1:12:41 Funding ethics research
1:14:25 Is philosophy useful?
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons and climate change.
The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions.
FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.