We intend to do to consciousness research what Chemistry did to Alchemy. This involves working through the confusion in the field, identifying key threads to pull, and building a formal foundation for speaking about phenomenology in precise and useful ways.
Roughly speaking, this involves:
- First-principles philosophy: clarifying what the goal of consciousness research is and the nature of knowledge in this field, identifying phenomenological natural kinds, building a naturalized theory of epistemology, and working through issues of identity (key works: Principia Qualia, The Pseudo-Time Arrow, Against Functionalism, Qualia Formalism in the Water Supply, Why do contemplative practitioners make so many metaphysical claims?);
- Improving neuroscience: applying considerations from our theoretical work to extend and unify existing neuroscience models (key works: A Future for Neuroscience, The Hyperbolic Geometry of DMT Experiences, Seed Ontologies, The Neuroscience of Meditation);
- Building better neurotech: applying insights from our neuroscience work to identify better ways to measure what’s going on in the mind, and better intervention points for managing it (key works: Quantifying Bliss, several unannounced projects).
In short: better philosophy of mind should lead to better neuroscience, and better neuroscience should lead to better neurotech.
Much of our research on consciousness focuses on emotional valence (the pain/pleasure axis), for two reasons: first, it’s of particular moral and practical importance; second, we believe it is plausibly the most tractable place to start reverse-engineering phenomenology. Our core outputs here are the Symmetry Theory of Valence (STV) and an empirical paradigm for applying STV to the brain (CDNS).
One way to quickly get a sense of our research is to skim our intellectual lineages, the existing threads of research we’ve woven together to create our unique approach. Another way would be to watch the videos of our talks from TSC2018. The most thorough way would be to read through the work on our research page, starting with Principia Qualia and Quantifying Bliss.
Why is QRI’s work important?
We believe understanding consciousness is one of the most important and pressing tasks for humanity, an enabling condition for fixing many problems of the present and for building good futures.
On a personal scale, there’s a great deal of suffering in the world, and understanding what suffering is seems crucial for reducing it. Moreover, there’s a saying in business that ‘what you can measure, you can manage’ — and so progress on understanding the structure of phenomenology and the nature of suffering should lead to better interventions for mental health. This has been a key research focus for QRI, and we’re proud to have the world’s first first-principles approach for estimating emotional valence from fMRI data (see Quantifying Bliss and A Future for Neuroscience). Aside from this, we’re also working on several unannounced neurotech projects.
It’s also important to note that if we have good first-principles metrics for mood in humans, we can apply them to non-human animals as well, putting animal welfare on a much more quantitative basis.
On a civilizational scale, we seem to be gripped by nihilism, confusion about what futures are worth working toward, and trouble coordinating society toward any goal at all. These problems become all the more pressing as developments in artificial intelligence, genetic engineering, and neurotechnology allow deeper forms of self-authorship. We believe understanding the structure of phenomenology, and the nature of phenomenological value, could help us navigate this by:
- Shedding light on what futures are even possible to build, and which of these futures are desirable;
- Uncovering new, powerful Schelling points for positive-sum social and political coordination;
- Offering an antidote to nihilism and emotional hopelessness (which might be crucial for decreasing existential risk from bad actors with advanced technology);
- Enabling clarity (‘deconfusion’) about what it is that AI safety is trying to do, and identifying which research directions are promising and which are dead ends.
How can I get involved?
One of the most important bottlenecks we have right now is funding. If you’re in a position to give, this is one of the most effective force-multipliers for us, funding travel, conferences, internships, and visiting scholars. Currently you can donate via:
We’ll have a link for tax-deductible donations soon.
We also really appreciate anything that contributes to community-building. We’re often focused on research, and it’s easiest to work with self-starters who can essentially create their own role and their own way of contributing. A core goal of QRI’s work is to build a community and research ecosystem around these ideas, and we need your help. If you feel like writing about what we’re doing, making videos about our work, helping extend this FAQ, holding meetups or discussion groups, or doing anything else that seems valuable and doesn’t exist yet, we encourage you to do so!
Finally, join our mailing list: