Can machines become conscious? And if they do, what kind of moral relationship should we have with them? Would conscious agents usher in a whole new level of noetic and ethical experience?
In this second installment on the AI Alignment Problem, Nick Baguley and I delve into the philosophy and science of machine consciousness. We explore whether AI systems could possess a subjective inner life—and if so, whether alignment should be reimagined as moral resonance rather than mere goal matching. We discuss how mindfulness, memory, embodiment, and suffering shape our understanding of what it means to be sentient—and how we might test for and monitor such capacities in artificial systems. We even propose a fractional factorial designed experiment that could begin at Boston Dynamics and OpenAI.
You’ll leave this episode with a deeper understanding of consciousness, how it might arise, and what it might mean to extend moral standing to synthetic minds.