Joscha Bach and Anthony Aguirre on Digital Physics and Moving Towards Beneficial Futures

Jaan Tallinn on Avoiding Civilizational Pitfalls and Surviving the 21st Century

Jaan Tallinn, investor, programmer, and co-founder of the Future of Life Institute, joins us to discuss his perspective on AI, synthetic biology, unknown unknowns, and what's needed for mitigating existential risk in the 21st century. Topics discussed in this episode include: -Int ...

Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI

Roman Yampolskiy, Professor of Computer Science at the University of Louisville, joins us to discuss whether we can control, comprehend, and explain AI systems, and how this constrains the project of AI safety. Topics discussed in this episode include: -Roman’s results on the un ...

Stuart Russell and Zachary Kallenborn on Drone Swarms and the Riskiest Aspects of Autonomous Weapons

Stuart Russell, Professor of Computer Science at UC Berkeley, and Zachary Kallenborn, WMD and drone swarms expert, join us to discuss the highest risk and most destabilizing aspects of lethal autonomous weapons. Topics discussed in this episode include: -The current state of the ...

John Prendergast on Non-dual Awareness and Wisdom for the 21st Century

John Prendergast, former adjunct professor of psychology at the California Institute of Integral Studies, joins Lucas Perry for a discussion about the experience and effects of ego-identification, how to shift to new levels of identity, the nature of non-dual awareness, and the p ...

Beatrice Fihn on the Total Elimination of Nuclear Weapons

Beatrice Fihn, executive director of the International Campaign to Abolish Nuclear Weapons (ICAN) and Nobel Peace Prize recipient, joins us to discuss the current risks of nuclear war, policies that can reduce the risks of nuclear conflict, and how to move towards a nuclear weapo ...

Max Tegmark and the FLI Team on 2020 and Existential Risk Reduction in the New Year

Max Tegmark and members of the FLI core team come together to discuss favorite projects from 2020, what we've learned from the past year, and what we think is needed for existential risk reduction in 2021. Topics discussed in this episode include: -FLI's perspectives on 2020 and ...

Future of Life Award 2020: Saving 200,000,000 Lives by Eradicating Smallpox

The recipients of the 2020 Future of Life Award, William Foege, Michael Burkinsky, and Victor Zhdanov Jr., join us on this episode of the FLI Podcast to recount the story of smallpox eradication, William Foege's and Victor Zhdanov Sr.'s involvement in the eradication, and their p ...

Sean Carroll on Consciousness, Physicalism, and the History of Intellectual Progress

Sean Carroll, theoretical physicist at Caltech, joins us on this episode of the FLI Podcast to comb through the history of human thought, the strengths and weaknesses of various intellectual movements, and how we are to situate ourselves in the 21st century given progress thus fa ...

Mohamed Abdalla on Big Tech, Ethics-washing, and the Threat on Academic Integrity

Mohamed Abdalla, PhD student at the University of Toronto, joins us to discuss how Big Tobacco and Big Tech work to manipulate public opinion and academic institutions in order to maximize profits and avoid regulation. Topics discussed in this episode include: -How Big Tobacco us ...

Maria Arpa on the Power of Nonviolent Communication

Maria Arpa, Executive Director of the Center for Nonviolent Communication, joins the FLI Podcast to share the ins and outs of the powerful needs-based framework of nonviolent communication. Topics discussed in this episode include: -What nonviolent communication (NVC) consists of ...

Stephen Batchelor on Awakening, Embracing Existential Risk, and Secular Buddhism

Stephen Batchelor, a Secular Buddhist teacher and former monk, joins the FLI Podcast to discuss the project of awakening, the facets of human nature which contribute to extinction risk, and how we might better embrace existential threats. Topics discussed in this episode include ...

Kelly Wanser on Climate Change as a Possible Existential Threat

Kelly Wanser from SilverLining joins us to discuss techniques for climate intervention to mitigate the impacts of human-induced climate change. Topics discussed in this episode include: - The risks of climate change in the short term - Tipping points and tipping cascades - Clima ...

Andrew Critch on AI Research Considerations for Human Existential Safety

In this episode of the AI Alignment Podcast, Andrew Critch joins us to discuss a recent paper he co-authored with David Krueger titled AI Research Considerations for Human Existential Safety. We explore a wide range of issues, from how the mainstream computer science community vi ...

Iason Gabriel on Foundational Philosophical Questions in AI Alignment

In the contemporary practice of many scientific disciplines, questions of values, norms, and political thought rarely explicitly enter the picture. In the realm of AI alignment, however, the normative and technical come together in an important and inseparable way. How do we deci ...

Peter Railton on Moral Learning and Metaethics in AI Systems

From a young age, humans are capable of developing moral competency and autonomy through experience. We begin life by constructing sophisticated moral representations of the world that allow us to successfully navigate our way through complex social situations with sensitivit ...

Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI

It's well-established in the AI alignment literature what happens when an AI system learns or is given an objective that doesn't fully capture what we want. Human preferences and values are inevitably left out and the AI, likely being a powerful optimizer, will take advantage of ...

Barker - Hedonic Recalibration (Mix)

This is a mix by Barker, a Berlin-based music producer, that was featured on our last podcast: Sam Barker and David Pearce on Art, Paradise Engineering, and Existential Hope (With Guest Mix). We hope that you'll find inspiration and well-being in this soundscape. You can find the p ...

Sam Barker and David Pearce on Art, Paradise Engineering, and Existential Hope (With Guest Mix)

Sam Barker, a Berlin-based music producer, and David Pearce, philosopher and author of The Hedonistic Imperative, join us on a special episode of the FLI Podcast to spread some existential hope. Sam is the author of euphoric sound landscapes inspired by the writings of David Pear ...

Steven Pinker and Stuart Russell on the Foundations, Benefits, and Possible Existential Threat of AI

Over the past several centuries, the human condition has been profoundly changed by the agricultural and industrial revolutions. With the creation and continued development of AI, we stand in the midst of an ongoing intelligence revolution that may prove far more transformative t ...

Sam Harris on Global Priorities, Existential Risk, and What Matters Most

Human civilization increasingly has the potential both to improve the lives of everyone and to completely destroy everything. The proliferation of emerging technologies calls our attention to this never-before-seen power and the need to cultivate the wisdom with which to steer ...

FLI Podcast: On the Future of Computation, Synthetic Biology, and Life with George Church

Progress in synthetic biology and genetic engineering promises to bring advancements in human health sciences by curing disease, augmenting human capabilities, and even reversing aging. At the same time, such technology could be used to unleash novel diseases and biological agents ...

FLI Podcast: On Superforecasting with Robert de Neufville

Essential to our assessment of risk and ability to plan for the future is our understanding of the probability of certain events occurring. If we can estimate the likelihood of risks, then we can evaluate their relative importance and apply our risk mitigation resources effective ...

AIAP: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah

Just a year ago we released a two-part episode titled An Overview of Technical AI Alignment with Rohin Shah. That conversation provided details on the views of central AI alignment research organizations and many of the ongoing research efforts for designing safe and aligned syst ...

FLI Podcast: Lessons from COVID-19 with Emilia Javorsky and Anthony Aguirre

The global spread of COVID-19 has put tremendous stress on humanity’s social, political, and economic systems. The breakdowns triggered by this sudden stress indicate areas where national and global systems are fragile, and where preventative and preparedness measures may be insu ...

FLI Podcast: The Precipice: Existential Risk and the Future of Humanity with Toby Ord

Toby Ord’s “The Precipice: Existential Risk and the Future of Humanity” has emerged as a new cornerstone text in the field of existential risk. The book presents the foundations and recent developments of this budding field from an accessible vantage point, providing an overview ...

AIAP: On Lethal Autonomous Weapons with Paul Scharre

Lethal autonomous weapons represent the novel miniaturization and integration of modern AI and robotics technologies for military use. This emerging technology thus represents a potentially critical inflection point in the development of AI governance. Whether we allow AI to make ...

FLI Podcast: Distributing the Benefits of AI via the Windfall Clause with Cullen O'Keefe

As with the agricultural and industrial revolutions before it, the intelligence revolution currently underway will unlock new degrees and kinds of abundance. Powerful forms of AI will likely generate never-before-seen levels of wealth, raising critical questions about its benefic ...

AIAP: On the Long-term Importance of Current AI Policy with Nicolas Moës and Jared Brown

From Max Tegmark's Life 3.0 to Stuart Russell's Human Compatible and Nick Bostrom's Superintelligence, much has been written and said about the long-term risks of powerful AI systems. When considering concrete actions one can take to help mitigate these risks, governance and poli ...

FLI Podcast: Identity, Information & the Nature of Reality with Anthony Aguirre

Our perceptions of reality are based on the physics of interactions ranging from millimeters to miles in scale. But when it comes to the very small and the very massive, our intuitions often fail us. Given the extent to which modern physics challenges our understanding of the wor ...

AIAP: Identity and the AI Revolution with David Pearce and Andrés Gómez Emilsson

In the 1984 book Reasons and Persons, philosopher Derek Parfit asks the reader to consider the following scenario: You step into a teleportation machine that scans your complete atomic structure, annihilates you, and then relays that atomic information to Mars at the speed of lig ...
