What is the Qualia Research Institute doing?

The one-line pitch is that we’re working to do for consciousness what chemistry did for alchemy. This involves working through the confusion in the field, identifying key threads to pull, and building a formal foundation for speaking about phenomenology in precise and useful ways.

Much of our research on consciousness focuses on emotional valence (pain/pleasure), for two reasons: first, it’s of particular moral and practical importance; second, we believe it’s plausibly the most tractable place to start reverse-engineering phenomenology. Our core outputs here are the Symmetry Theory of Valence (STV) and an empirical paradigm for applying STV to the brain (CDNS, the Consonance-Dissonance-Noise Signature).
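To give a concrete flavor of the CDNS idea, here’s a minimal toy sketch of the kind of computation involved: scoring how consonant or dissonant a set of oscillatory components is. It uses Sethares’ well-known approximation to the Plomp-Levelt dissonance curve; the frequencies, amplitudes, and helper names below are illustrative assumptions, not QRI’s actual pipeline (see Quantifying Bliss for the real thing).

```python
import math

def pairwise_dissonance(f1, f2, a1, a2):
    """Sethares' approximation to the Plomp-Levelt dissonance curve for
    two pure tones (frequencies f1, f2 in Hz; amplitudes a1, a2)."""
    b1, b2 = 3.5, 5.75                    # empirical decay constants
    d_star, s1, s2 = 0.24, 0.0207, 18.96  # curve-scaling constants
    f_lo = min(f1, f2)
    s = d_star / (s1 * f_lo + s2)         # scale curve to the lower frequency
    x = abs(f1 - f2)
    return a1 * a2 * (math.exp(-b1 * s * x) - math.exp(-b2 * s * x))

def total_dissonance(freqs, amps):
    """Sum pairwise dissonance over all component pairs, amplitude-weighted."""
    total = 0.0
    for i in range(len(freqs)):
        for j in range(i + 1, len(freqs)):
            total += pairwise_dissonance(freqs[i], freqs[j], amps[i], amps[j])
    return total

# Toy comparison: a harmonic spectrum vs. an inharmonic one.
harmonic = [220.0, 440.0, 660.0, 880.0]    # integer multiples of 220 Hz
inharmonic = [220.0, 317.0, 523.0, 691.0]  # clashing, non-integer partials
amps = [1.0, 0.8, 0.6, 0.4]

print(total_dissonance(harmonic, amps))    # lower score = more consonant
print(total_dissonance(inharmonic, amps))  # higher score = more dissonant
```

In the CDNS picture, roughly speaking, one would run a computation like this over the amplitudes of a brain’s harmonic modes rather than over audio partials, treating higher consonance as a signature of more positive valence.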

One way to quickly get a sense of our research is to skim our intellectual lineages, the existing threads of research we’ve woven together to create our unique approach. Another way would be to watch the videos of our talks from TSC2018. The most thorough way would be to read through the work on our research page, starting with Principia Qualia and Quantifying Bliss.

Why is QRI’s work important?

We believe understanding consciousness is one of the most important and pressing tasks for humanity, an enabling condition for fixing many problems of the present and for building good futures.

On a personal scale: first, there’s a great deal of suffering in the world, and understanding what suffering is seems crucial for reducing it. Second, there’s a saying in business that ‘what you can measure, you can manage’, and so progress on understanding the structure of phenomenology and the nature of suffering should lead to better interventions for mental health. This has been a key research focus for QRI, and we’re proud to have the world’s first first-principles approach for estimating emotional valence from fMRI data (see Quantifying Bliss and A Future for Neuroscience). In short, if you have a good theory of consciousness, you should be able to make predictions that nobody else can and build things that no one else can. Aside from this fMRI work, we’re also working on several unannounced projects in this space.

It’s also important to note that if we have good first-principles metrics for mood in humans, we can apply them to non-human animals as well, putting animal welfare on a much more quantitative basis.

On a civilizational scale, we seem to be gripped by nihilism, confusion about what futures are worth working toward, and trouble coordinating society toward any goal at all. These questions become all the more pressing as developments in artificial intelligence, genetic engineering, and neurotechnology allow deeper forms of self-authorship. 

We believe understanding the structure of phenomenology, and the nature of phenomenological value, could help:

  1. Shed light on what futures are even possible to build, and which of these futures are desirable;
  2. Uncover new, powerful Schelling points for positive-sum social and political coordination;
  3. Offer an antidote to nihilism and emotional hopelessness (which might be crucial for decreasing existential risk from bad actors with advanced technology);
  4. Enable clarity (‘deconfusion’) about what it is that AI safety is trying to do, and identify which research directions are promising and which are dead ends.

On this last point, Nick Bostrom has famously noted that advanced AI optimization could produce a society with incredible economic activity but no conscious beings able to enjoy its fruits:

We could thus imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today – a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland with no children.

Max Tegmark has made a similar case:

[T]o wisely decide what to do about AI development, we humans need to confront not only traditional computational challenges, but also some of the most obdurate questions in philosophy. To program a self-driving car, we need to solve the trolley problem of whom to hit during an accident. To program a friendly AI, we need to capture the meaning of life. What is “meaning”? What is “life”? What is the ultimate ethical imperative, i.e., how should we strive to shape the future of our Universe? If we cede control to a superintelligence before answering these questions rigorously, the answer it comes up with is unlikely to involve us.

Many in the AI safety community are working hard on the ‘paperclip problem’, making sure strong AI optimization processes don’t run amok (i.e., that there will be a Disneyland). QRI’s work seems like the only live research thread addressing how to ensure there will be conscious beings to enjoy it.

How can I get involved?

The easiest way is to read our stuff, watch our videos, and follow our blogs (Mike, Andrés, Romeo, QRI).

One of the most important bottlenecks we have right now is funding. If you’re in a position to give, this is one of the most effective force-multipliers for us: donations fund travel, conferences, internships, and visiting scholars. Currently you can donate via:

Bitcoin: bc1qy34c6t04g8pl723s84xk80q3j42p654euejpln

Ether: 0xb18B27Ec64fB08447e0D20184dfFFB383D6604c4

We’ll have a link for tax-deductible donations soon.

Finally, we really appreciate anything that contributes to community-building. We’re often focused on research, and it’s easiest to work with self-starters who can essentially create their own role and their own way of contributing. A core goal of QRI’s work is to build a community and research ecosystem around these ideas, and we need your help. If you feel like writing about what we’re doing, making videos about our work, helping extend this FAQ, holding meetups, or doing anything else that seems valuable and doesn’t exist yet, we encourage you to do so!

If you have a technical background and find some of our ideas intriguing but a little underdeveloped, we strongly encourage you to see if you can develop them yourself, and to let us know. Consciousness research is a young field; there’s lots of stuff we just haven’t had the opportunity to do yet, and some of it could be super valuable.