Factored Cognition

In this project, we explore whether we can solve difficult problems by composing small and mostly context-free contributions from individual agents who don't know the big picture.

What is factored cognition?

Imagine the set of questions you can answer in 15 minutes using a computer. There's a lot you can do. You can look up facts, do calculations, weigh considerations, and thus answer many questions that require a bit of research and deliberation. But there's also a lot you can't do. If a question is about some field of physics you've never heard of, say "What does a field theory look like in which supersymmetry is spontaneously broken?", you probably won't have time to learn enough to give a good answer.

Now, consider a thought experiment: Imagine that during your 15 minutes you can delegate up to 100 tasks to fast copies of yourself. That is, you can hand tasks to assistants who are just as capable and motivated as you are, who also have only 15 minutes of subjective time, but who run much faster, so that when you delegate a task you immediately observe their answer. Clearly, you can do a lot more with the help of your 100 assistants than you could on your own. We'll call this a one-step amplification of yourself.

What if we iterated this process, so that each of your assistants in turn had access to 100 assistants, and so on? What capabilities could we implement through iterated amplification, and what tasks would stay out of reach, if any?

Factored cognition refers to mechanisms like this, where sophisticated learning and reasoning are broken down (or factored) into many small and mostly independent tasks.
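The recursive structure can be made concrete with a small sketch. This is a toy illustration, not any of our actual systems; all function names (`solve`, `decompose`, `combine`, and the summing stand-ins) are hypothetical. Each "agent" either answers a question directly or splits it into subquestions that fresh agents handle without seeing the big picture:

```python
# Toy sketch of iterated amplification (hypothetical names, not a real
# implementation). An agent answers a question either directly or by
# delegating subquestions to fresh copies of itself, each of which sees
# only its own small subtask.

def solve(question, depth, max_depth=3):
    """Answer a question by recursive decomposition."""
    if depth >= max_depth or is_atomic(question):
        return answer_directly(question)
    # Split the question into smaller, mostly independent subquestions.
    subquestions = decompose(question)
    # Each subquestion goes to a fresh agent that never sees the whole problem.
    subanswers = [solve(q, depth + 1, max_depth) for q in subquestions]
    return combine(subanswers)

# Minimal stand-ins so the sketch runs: "answering" a question here means
# summing a list of numbers by splitting it in half until pieces are atomic.
def is_atomic(q):
    return len(q) <= 1

def answer_directly(q):
    return q[0] if q else 0

def decompose(q):
    mid = len(q) // 2
    return [q[:mid], q[mid:]]

def combine(subanswers):
    return sum(subanswers)

print(solve([1, 2, 3, 4, 5, 6, 7, 8], depth=0))  # → 36
```

The point of the sketch is the shape of the computation, not the arithmetic: no single call ever holds the full problem, yet the composition of many bounded calls recovers the global answer.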

Why does it matter?

Our mission is to find scalable ways to leverage machine learning for deliberation. This requires viewing thinking as a task that generates data we can train ML systems on. The only concrete way we know of to do that is to record how people deliberate using explicit actions in narrow contexts, i.e. to break deliberation into many small tasks. We can then use ML to imitate what people do and scale up by repeatedly applying the learned policy. This is a central component of Iterated Distillation and Amplification, Paul Christiano's approach to AI alignment.


We'd like to build general-purpose question-answering systems that produce increasingly helpful solutions as they get access to more human work-hours and better ML algorithms.
The space of features that such systems could use includes recursion, pointers to data and to agents, edits, persistent workspaces, structured content, reflection, meta-execution, and different approaches to interaction with the outside world.
To evaluate whether a system might be scalable, we propose a set of trial tasks: answering questions about books, fact checking, math textbook exercises, cost-benefit analysis, and to-do prioritization.
Our best guess for how to build a scalable question-answering system is to use recursion, pointers, edits, persistence, and (later) interaction via indirection and reflection.
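Of the features above, pointers are perhaps the least familiar. The idea is that an agent never needs to hold a large object (a book, a long proof) in its limited context; it sees an opaque handle and spends an explicit action to expand it. The following is a minimal illustrative sketch, with hypothetical names (`Pointer`, `expand`), not the design of any of the apps listed below:

```python
# Toy illustration of pointers (hypothetical names, not a real app).
# An agent sees an opaque handle like "#0" instead of a large value,
# and must take an explicit action to expand it. This keeps every
# individual context small, no matter how big the underlying data is.

class Pointer:
    _store = {}    # shared table mapping ids to underlying values
    _next_id = 0

    def __init__(self, value):
        self.id = Pointer._next_id
        Pointer._next_id += 1
        Pointer._store[self.id] = value

    def __repr__(self):
        # What the agent sees in its workspace: just an opaque handle.
        return f"#{self.id}"

    def expand(self):
        # One explicit action reveals the underlying value.
        return Pointer._store[self.id]

book = Pointer("...the full text of a long book...")
print(book)           # prints an opaque handle such as #0
print(book.expand())  # prints the underlying content, revealed on demand
```

Because handles are cheap to pass between workspaces, agents can route large objects to whichever subtask actually needs to inspect them, while every other context stays within budget.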

Our plan

Concretely, here is what we are working on:

  1. We implement networks of workspaces as a web app.
  2. We try this app with real users on each of the evaluation tasks.
    • Our first goal is to figure out whether piecewise thinking can match the performance of a single person working over an extended period of time.
    • If that's the case, we want to understand how scalable this sort of scheme is as we increase the number of work-hours that go into it.
  3. We observe what happens—probably failure to solve some or all of the tasks—and improve the app. We may only need to make small tweaks to the mechanism, but it may also turn out that we need to relax some of our simplifying assumptions, or that we need to reconsider and rebuild on a more fundamental level.
  4. Eventually, we either have a mechanism that is plausibly scalable, or we have a better understanding of the obstructions.

If this sounds exciting, join us. We're hiring!


  • Factored Cognition (May 2018)
    A presentation we gave at CHAI and at a DeepMind/FHI seminar, motivating the project from an AI alignment angle and reporting on the state of work in May 2018.
  • Mosaic
    A prototype for a web app that supports recursive decomposition of questions with pointers.
  • Patchwork
    An open-source command-line app for recursive decomposition of questions, built to be the foundation for a web app with multiple users and automation.
  • Affable
    The Haskell-based successor to Patchwork.
  • Relay Programming
    A collaborative programming experiment designed to explore factored cognition in the context of solving Project Euler-style programming puzzles.

Thanks to Paul Christiano, Rohin Shah, Daniel Dewey, Owain Evans, Andrew Critch, William Saunders, Ozzie Gooen, Ryan Carey, Jeff Wu, and Jan Leike for feedback on the chapters.