I’ve been thinking about “how to evaluate an early stage technology company.” As a potential employee or as an investor, it’s very hard to know whether the bet you’re about to take is worth it. On rare occasions, you might be an incredible expert on the founders, the technology, the market, and the potential future forces driving the market. But more often than not, you’re taking a bet under some amount of ambiguity and time constraint.
One tactic for answering very complicated and unknowable questions (e.g. “Will this early stage company become the next $70B platform?”) is to substitute an easier question for a hard question. Instead of bashing your brains out trying to evaluate the unknowable, ask yourself something that you can answer with reasonably obtainable data.
Some examples:
Instead of asking “will these founders scale to become public company CEOs?” you can ask “do these people learn quickly, and do they have a relevant, ‘disproportionately good’ talent?”
Instead of wondering “how will this market evolve?” you can ask, “is this market at least growing, even if small today?”
Instead of wondering “is this consumer product really going to monetize well?” you can ask “what are the current usage patterns like, and how does it slice by demographic?”
And of course there’s the classic: “will some big foundation model corp demolish this company?” For that one, I’d ask something like “what value are they delivering to users today, and how good are they at maintaining the direct interface with those users?”
Of course, this is very imperfect. Sometimes the easy questions are poor substitutes for the harder questions. As an example, investors notoriously over-rotate on founder pedigree in AI investing. They do this because knowledge of how to build models seems like a cornered and scarce resource. Lacking information on how the landscape will evolve, they make the bet that AI domain expertise will help a team react to changes more effectively. Many still believe this! But in reality, while having AI researchers is great, the interface and application layers are also hugely important, given that open source models are very good and that domain-specific data is still a useful asset for model performance.
The success rate of “substituting the easy question for the hard question” will never be 100% – you’re intentionally minimizing complexity so that you can make a decision with information that is easier to obtain.
However, you can increase the success rate by doing the following:
Can you phrase the original question more precisely to get at what you really want to know? Instead of asking a question like “How much would you contribute to save an endangered species?” (Daniel Kahneman’s question!), you could ask a question like, “How much would you donate to a Seed Bank of all of the endangered species of plants in the greater NY area?” Of course, a more specific question is easier to answer than a more general one!
Make the “easy question” a question about the first step, instead of thinking about the full set of steps. Instead of asking “can this team of founders take the company public?”, which might be very difficult to know and anticipate at the Series A (when you’re thinking of joining), you could ask the question that will be relevant within four years’ time. Something like “can this team take the company to $50M in revenue/a good Series C?” might work. You can look at real metrics and think about that question very logically.
I often get very bogged down in the intricacies of a hard problem. Most people who love solving complex puzzles and modeling are the same way. But even the best models are wrong. There are diminishing returns on time spent trying to model an increasingly improbable chain of dependencies. We usually don’t have the time (offer deadline! investment deadline!) to spend! Substituting the easier question helps save time.
I often use this tactic with teams I work with when they seem bogged down. I might say something like, “Ok, you’re stuck. What’s the easy question you can answer now that feels related to the hard question?” When they tell me the question, I might probe into its pitfalls (“Where does the easy question minimize complexity? What does it flatten?”) and then move on to either refining the question (“I think we can change the question [this way] to make it more faithful to the original, and we’ll check it at [this milestone].”) or starting to answer it (“Ok, so let’s talk about all the data we need to answer that question.”)
I’ve found that substituting the easier question for the hard question is the best way to get going in our uncertain industry. I use it as a weapon (carefully!) when I’m in need.
How to identify “hard questions” – along with lots of other great content on this topic – is in Daniel Kahneman’s Thinking, Fast and Slow, a rare excellent pop-psych book (yes, a portion of it was discredited by the social science replication crisis, but that’s just the part on priming). These are the “System 2” questions that require deep thought instead of gut reactions.