Our Approach

Ought is somewhere between a research lab and a startup. We do conceptual and empirical research on supporting deliberation using machine learning, and share our findings openly. This research is guided by a concrete vision for tools that we hope will eventually help millions of people think through the questions and choices they face every day. We're incorporated as a non-profit, so we can afford to take the long view.

Conceptual research

The initial goal for our conceptual (or theoretical) research is to better understand the space of mechanisms for organizing cognitive work.

Right now, there are only a few proposals for organizing cognitive piecework in ways that lead to globally desirable outcomes. Meta-execution and dialog markets are probably the most notable instances, though there may be others. The two proposals differ in emphasis: taken literally, dialog markets seem better suited to human input, whereas meta-execution provides a better basis for automation.

We want to better understand the space of such proposals, including how they relate to existing systems like Reddit and StackOverflow, build a taxonomy, and discover potentially useful variations. This will inform what kinds of systems we build in our empirical work.

Empirical research

The goal of our empirical research is to test mechanisms for automating and distributing deliberation (mechanisms that may or may not turn out to be useful) and to understand particular aspects of those mechanisms. Here are two examples:

  • To what extent can we decompose big, complicated questions into smaller pieces that can be solved relatively independently? And to what extent do individual thinkers need to know the big picture to be helpful? (Read more about our project on Factored Cognition.)
  • Can we use supervised learning and collaborative filtering to predict how people judge particular cognitive contributions (such as comments on a web forum) from cheap, noisy signals (such as upvotes and downvotes)? How much data about high-quality judgments do we need? (Read more about our project on Predicting Slow Judgments.)
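To make the second question concrete, here is a minimal sketch of one standard collaborative-filtering baseline (alternating least squares on a low-rank model) applied to simulated data. Everything here is hypothetical for illustration: the "cheap signals" are simulated up/down votes, and the "slow judgments" are simulated latent scores; this is not Ought's actual method or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: rows are raters, columns are comments.
# Cheap signal: sparse up/down votes (+1/-1, 0 = unobserved).
# Target: each rater's slow, careful judgment score.
n_raters, n_comments, k = 20, 30, 3

# Simulate low-rank latent structure (illustration only).
U = rng.normal(size=(n_raters, k))
V = rng.normal(size=(n_comments, k))
true_scores = U @ V.T                                # the "slow judgments"
observed = rng.random((n_raters, n_comments)) < 0.3  # ~30% of votes observed
votes = np.where(observed, np.sign(true_scores), 0.0)

# Alternating least squares over the observed entries: a textbook
# collaborative-filtering baseline for this kind of matrix completion.
P = rng.normal(scale=0.1, size=(n_raters, k))
Q = rng.normal(scale=0.1, size=(n_comments, k))
lam = 0.1  # ridge regularizer keeps each solve well-posed
for _ in range(20):
    for i in range(n_raters):
        idx = observed[i]
        if idx.any():
            A = Q[idx].T @ Q[idx] + lam * np.eye(k)
            P[i] = np.linalg.solve(A, Q[idx].T @ votes[i, idx])
    for j in range(n_comments):
        idx = observed[:, j]
        if idx.any():
            A = P[idx].T @ P[idx] + lam * np.eye(k)
            Q[j] = np.linalg.solve(A, P[idx].T @ votes[idx, j])

pred = P @ Q.T  # predicted judgments for every (rater, comment) pair
corr = np.corrcoef(pred.ravel(), true_scores.ravel())[0, 1]
```

The research question then becomes empirical: how does `corr` degrade as votes get sparser and noisier, and how many expensive "slow judgment" labels would it take to calibrate predictions like these against what people actually conclude on reflection?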


In the long run, we hope that our research provides the basis for an end-user app that helps millions of people think through the questions they care about far more deeply than they can today, and thus make better decisions. In the short term, we hope that applied work will create pressure on our research to stay relevant to the real world.

As a first step towards a useful application, we'll build a "pipeline" that funnels a constant stream of people asking questions to a rudimentary web app. This app will help people think through their questions using dialog. We'll limit the app to a few topics, and it will mostly just connect users with human experts, with little automation or other sophisticated support structures in place.

Over time, we'll incorporate advances from our research into this app. We'll carefully consider how to make this system scalable so that better ML algorithms can be plugged in as modular components and improve the system's performance. We believe that having an application that is used by real people and that employs state-of-the-art machine learning will keep us honest and our research relevant. In the medium term, we hope to generate substantial benefits for our users.
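One way to picture "better ML algorithms plugged in as modular components" is an app that depends only on a narrow interface, so that a human-expert router and a learned model are interchangeable. The sketch below is a hypothetical illustration of that design choice, not Ought's actual architecture; the names `AnswerModel`, `ExpertRouter`, and `App` are invented for this example.

```python
from typing import List, Protocol


class AnswerModel(Protocol):
    """Interface a pluggable component might satisfy (hypothetical)."""

    def suggest(self, question: str, context: List[str]) -> List[str]: ...


class ExpertRouter:
    """Baseline component: no ML, just hands the question to a human expert."""

    def suggest(self, question: str, context: List[str]) -> List[str]:
        return [f"Forwarding to a human expert: {question!r}"]


class App:
    """The app depends only on the interface, so a stronger learned
    model can later replace ExpertRouter without changing App."""

    def __init__(self, model: AnswerModel):
        self.model = model

    def respond(self, question: str) -> List[str]:
        return self.model.suggest(question, context=[])


app = App(ExpertRouter())
reply = app.respond("Should I change careers?")
```

Under this kind of seam, "incorporating advances from our research" reduces to implementing the same `suggest` interface with a better model and swapping it in.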

Eventually, this app might turn into a platform that incentivizes other researchers and companies to work on algorithms that help people think. We'll focus on understanding and operationalizing what it means for actions on this platform to be aligned with a person's interests, so that we can build a market that rewards participants who build bots that generate aligned actions in the context of helping people solve problems and make decisions. (Read more about dialog markets.)