Showing episodes and shows of Pablo Stafforini

Shows

Libros completos, Emilia Escaris Pazos
1- Ensayo sobre el gobierno civil by John Locke. Translation, selection and notes: Claudio Amor and Pablo Stafforini. Prometeo / Universidad Nacional de Quilmes, Buenos Aires, Argentina. Presentation; Chapter I; Chapter II, "Of the State of Nature" (from section 1 to part of section 6).
2023-12-15 · 14 min

Future Matters
#8: Bing Chat, AI labs on safety, and pausing Future Matters
Future Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter. Future Matters is also available in Spanish.
00:00 Welcome to Future Matters. 00:44 A message to our readers. 01:09 All things Bing. 05:27 Summaries. 14:20 News. 16:10 Opportunities. 17:19 Audio & video. 18:16 Newsletters. 18:50 Conversation with Tom Davidson. 19:13 The importance of understanding and forecasting AI takeoff dynamics. 21:55 Start and end points of...
2023-03-21 · 41 min

Future Matters Reader
Holden Karnofsky — Success without dignity: a nearcasting story of avoiding catastrophe by luck
Success without dignity: a nearcasting story of avoiding catastrophe by luck, by Holden Karnofsky.
https://forum.effectivealtruism.org/posts/75CtdFj79sZrGpGiX/success-without-dignity-a-nearcasting-story-of-avoiding
Note: Footnotes in the original article have been omitted.
2023-03-20 · 19 min

Future Matters Reader
Larks — A Windfall Clause for CEO could worsen AI race dynamics
In this post, Larks argues that the proposal to make AI firms promise to donate a large fraction of profits if they become extremely profitable would primarily benefit the management of those firms, giving managers an incentive to move fast, aggravating race dynamics and in turn increasing existential risk.
https://forum.effectivealtruism.org/posts/ewroS7tsqhTsstJ44/a-windfall-clause-for-ceo-could-worsen-ai-race-dynamics
2023-03-20 · 14 min

Future Matters Reader
Otto Barten — Paper summary: The effectiveness of AI existential risk communication to the American and Dutch public
This is Otto Barten's summary of 'The effectiveness of AI existential risk communication to the American and Dutch public' by Alexia Georgiadis. In this paper, Alexia measures changes in participants' awareness of AGI risks after they consume various media interventions.
Summary: https://forum.effectivealtruism.org/posts/fqXLT7NHZGsLmjH4o/paper-summary-the-effectiveness-of-ai-existential-risk
Original paper: https://existentialriskobservatory.org/papers_and_reports/The_Effectiveness_of_AI_Existential_Risk_Communication_to_the_American_and_Dutch_Public.pdf
Note: Some tables in the summary have been omitted in this audio version.
2023-03-20 · 07 min

Future Matters Reader
Shulman & Thornley — How much should governments pay to prevent catastrophes? Longtermism's limited role
Carl Shulman & Elliott Thornley argue that the goal of longtermists should be to get governments to adopt global catastrophic risk policies based on standard cost-benefit analysis rather than on arguments that stress the overwhelming importance of the future.
https://philpapers.org/archive/SHUHMS.pdf
Note: Tables, notes and references in the original article have been omitted.
2023-03-20 · 57 min

Future Matters Reader
Elika Somani — Advice on communicating in and around the biosecurity policy community
"The field of biosecurity is more complicated, sensitive and nuanced, especially in the policy space, than what impressions you might get based on publicly available information. As a result, say / write / do things with caution (especially if you are a non-technical person or more junior, or talking to a new (non-EA) expert). This might help make more headway on safer biosecurity policy."
https://forum.effectivealtruism.org/posts/HCuoMQj4Y5iAZpWGH/advice-on-communicating-in-and-around-the-biosecurity-policy
Note: Some footnotes in the original article have been omitted.
2023-03-14 · 13 min

Future Matters Reader
Riley Harris — Summary of 'Are we living at the hinge of history?' by William MacAskill
The Global Priorities Institute has published a new paper summary: 'Are we living at the hinge of history?' by William MacAskill.
https://globalprioritiesinstitute.org/summary-summary-longtermist-institutional-reform/
Note: Footnotes and references in the original article have been omitted.
2023-03-14 · 07 min

Future Matters Reader
Riley Harris — Summary of 'Longtermist institutional reform' by Tyler M. John and William MacAskill
The Global Priorities Institute has published a new paper summary: 'Longtermist institutional reform' by Tyler John & William MacAskill.
https://globalprioritiesinstitute.org/summary-summary-longtermist-institutional-reform/
Note: Footnotes and references in the original article have been omitted.
2023-03-14 · 05 min

Future Matters Reader
Hayden Wilkinson — Global priorities research: Why, how, and what have we learned?
The Global Priorities Institute has released Hayden Wilkinson's presentation on global priorities research. (The talk was given in mid-September last year but remained unlisted until now.)
https://globalprioritiesinstitute.org/hayden-wilkinson-global-priorities-research-why-how-and-what-have-we-learned/
2023-03-13 · 44 min

Future Matters Reader
Piper — What should be kept off-limits in a virology lab?
New rules around gain-of-function research make progress in striking a balance between reward and catastrophic risk.
https://www.vox.com/future-perfect/2023/2/1/23580528/gain-of-function-virology-covid-monkeypox-catastrophic-risk-pandemic-lab-accident
2023-03-13 · 07 min

Future Matters Reader
Ezra Klein — This changes everything
"One of two things must happen. Humanity needs to accelerate its adaptation to these technologies or a collective, enforceable decision must be made to slow the development of these technologies. Even doing both may not be enough."
https://www.nytimes.com/2023/03/12/opinion/chatbots-artificial-intelligence-future-weirdness.html
2023-03-13 · 10 min

Future Matters
#7: AI timelines, AI skepticism, and lock-in
Future Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter. Future Matters is also available in Spanish.
00:00 Welcome to Future Matters. 00:57 Davidson — What a compute-centric framework says about AI takeoff speeds. 02:19 Chow, Halperin & Mazlish — AGI and the EMH. 02:58 Hatfield-Dodds — Concrete reasons for hope about AI. 03:37 Karnofsky — Transformative AI issues (not just misalignment). 04:08 Vaintrob — Beware saf...
2023-02-03 · 00 min

Red Lider
WBT150 - Panel - Sofia Stafforini, German Lenzi and Pablo Violaz
Qualified distributors and a coordinator... all going through stages of the business and of life, overcoming them with training and action, and connected to the team!
2023-01-13 · 1h 01

Future Matters
#6: FTX collapse, value lock-in, and counterarguments to AI x-risk
Future Matters is a newsletter about longtermism by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter. Future Matters is also available in Spanish.
00:00 Welcome to Future Matters. 01:05 A message to our readers. 01:54 Finnveden, Riedel & Shulman — Artificial general intelligence and lock-in. 02:33 Grace — Counterarguments to the basic AI x-risk case. 03:17 Grace — Let's think about slowing down AI. 04:18 Piper — Review of What We Owe the Future. 05:04 Clare & Ma...
2022-12-30 · 37 min

Future Matters
#5: supervolcanoes, AI takeover, and What We Owe the Future
Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter.
00:00 Welcome to Future Matters. 01:08 MacAskill — What We Owe the Future. 01:34 Lifland — Samotsvety's AI risk forecasts. 02:11 Halstead — Climate Change and Longtermism. 02:43 Good Judgment — Long-term risks and climate change. 02:54 Thorstad — Existential risk pessimism and the time of perils. 03:32 Hamilton — Space and existential risk. 04:07 Cassidy & Mani — Huge...
2022-09-13 · 31 min

Future Matters
#4: AI timelines, AGI risk, and existential risk from climate change
Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter.
00:00 Welcome to Future Matters 01:11 Steinhardt — AI forecasting: one year in 01:52 Davidson — Social returns to productivity growth 02:26 Brundage — Why AGI timeline research/discourse might be overrated 03:03 Cotra — Two-year update on my personal AI timelines 03:50 Grace — What do ML researchers think about AI in 2022? 04:43 Leike — On...
2022-08-08 · 31 min

Future Matters
#3: digital sentience, AGI ruin, and forecasting track records
Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. You can also subscribe on Substack, read on the EA Forum and follow on Twitter.
00:00 Welcome to Future Matters 01:11 Long — Lots of links on LaMDA 01:48 Lovely — Do we need a better understanding of 'progress'? 02:11 Base — Things usually end slowly 02:47 Yudkowsky — AGI ruin: a list of lethalities 03:38 Christiano — Where I agree and disagree with Eliezer 04...
2022-07-04 · 00 min

Future Matters
#2: Clueless skepticism, 'longtermist' as an identity, and nanotechnology strategy research
Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. You can also subscribe on Substack, read on the EA Forum and follow on Twitter.
00:01 Welcome to Future Matters 01:25 Schubert — Against cluelessness 02:23 Carlsmith — Presentation on existential risk from power-seeking AI 03:45 Vaintrob — Against "longtermist" as an identity 04:30 Bostrom & Shulman — Propositions concerning digital minds and society 05:02 MacAskill — EA and the current funding situation 05:51 Beckstead — Some clarifications on the Future Fund's appro...
2022-05-28 · 23 min

Future Matters
#1: AI takeoff, longtermism vs. existential risk, and probability discounting
> The remedies for all our diseases will be discovered long after we are dead; and the world will be made a fit place to live in. It is to be hoped that those who live in those days will look back with sympathy to their known and unknown benefactors.
> — John Stuart Mill
Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. You can also subscribe on Su...
2022-04-24 · 29 min

Future Matters
#0: Space governance, future-proof ethics, and the launch of the Future Fund
> We think our civilization near its meridian, but we are yet only at the cock-crowing and the morning star.
> — Ralph Waldo Emerson
Welcome to Future Matters, a newsletter about longtermism brought to you by Matthew van der Merwe & Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. Future Matters is crossposted to the Effective Altruism Forum and available as a podcast.
Research: We are typically confident that some things are conscious (humans), and that some things are not (rocks); other things we'...
2022-03-22 · 00 min