OpenAI and containers for work
How does the structure of the organization affect its incentives?
Well, I was going to publish an AI landscape take today – turns out, it’s an eventful day for the AI landscape, with OpenAI’s cofounders leaving the company due to board action.
Instead, I want to talk about “containers for work” – something I’ve long thought about, now with the very timely catalyst of OpenAI’s unique corporate structure and board control.
The dominant structure for work in Silicon Valley is the corporation – the Delaware C Corp, to be precise. Work comes with a commercial mission. The common advice is to “build something people want,” but the unspoken follow-up is “and ensure it’s something that someone will pay for, because that’s the truest sign of their desire.”
We’ve seen a number of startups fail because their economic model didn’t match their product model, or because their work didn’t fit into the shape of a “product.” The smallest companies are deeply affected by a short-term need to commercialize very quickly: I’ve been at firms that had to pivot the product, time and time again, not because of usage numbers, but because of financials. This was especially relevant at a consumer healthcare startup I was working on in 2013 — patients wanted our product, and we were the number one app in the App Store, but the financial model just didn’t work because patients couldn’t pay.
The only freedom from this approach can be found in the largest of companies – Google, ironically, can shelter people from an immediate demand for commercialization via its creation of the best business model of all time. It means you can send internet balloons into Peru as an experiment. It means you can offer services for free in the hopes that they’re subsidized via advertising. It also means that you can fund something like AI research, perhaps even better than all universities can.
It’s that last clause – “better than all universities can” – that makes all of this so challenging. If technology companies were only one of many lucrative career options for people interested in progress, then the push to software commercialization wouldn’t really be a concern. People would simply choose the right container for the work they want to do. If they wanted to commercialize a product, they’d work at a fast-growing software company. If they wanted to make something that is important, subject to complex incentive-alignment issues, and not ready for productization, they’d perhaps work in government. If they wanted to do academic research, they’d work at a lab or a university.
When I was living in the UK, I went to a DeepMind friend’s Christmas party. The party was an odd mix of well-off researchers at DeepMind and their more threadbare counterparts who chose to remain academics at UK universities. One lovely Welsh professor seemed very optimistic that his threadbare status wouldn’t last much longer. “I’m joining [for-profit AI company] in San Francisco at the beginning of the year!” he said excitedly. “I’m so excited to actually have a compute budget. My research has been so limited until now.” I think he was discouraged by the academic rigmarole of tenure and prestige, disliked his limited budget for work, and felt like his smartest peers were leaving for industry. Academia is far from perfect, and I expect he did even better research in private industry. But just as his research in academia was shaped by local incentives and available grant funding, I expect that the incentives of the research environment in private enterprise shaped what he worked on in surprising ways.
I know that Sam Altman attempted to raise philanthropic funding for OpenAI’s necessary compute for years. I also know that it didn’t work: AGI needed a lot of compute, and in many ways Microsoft was the lesser of evils. Nevertheless, the incentives of a for-profit organization do change the very nature of the work. I don’t have all of the information necessary to evaluate the relative merits of their corporate governance structure, but I do know that its level of complexity reflects how difficult balancing incentives can be. The researchers who work on AI aren’t ignorant of the fact that it could have far-reaching national security, economic equality, and safety implications. To deny this is ignorant.
In another era, this would have been a big government project. I am curious why there was no government interest in spinning up a national AI laboratory like Los Alamos, or an AI agency like NASA. The anti-China push alone should have motivated some effort here. We funded cancer moonshots. We should have funded an AI laboratory that researched wealth redistribution mechanisms like UBI as well as AI advances. Sam Altman agrees with me, given that’s exactly what YC was up to at the time.
In 2021, U.S. government agencies, aside from the Department of Defense, allocated $1.5 billion for academic funding for AI research, sharded across all eligible universities. That’s the same amount Google spent on a single AI research project (DeepMind alone, not even Bard!) in a single year (2019). According to this paper in Science, “roughly 70% of individuals with a PhD in artificial intelligence get jobs in private industry today, compared with 20% two decades ago.” This isn’t true in molecular biology and other pharma feeder industries, where researchers stay in academia a little over half the time, and we see a roughly even split in discoveries between industry and academia.
Pharma creates a nice precedent for the split between basic science research, which can happen in both private and academic labs, and commercialization, which is clearly a separate discipline. Putting them together is rarely useful (Xerox PARC led to a very small number of inventions for Xerox itself) and can sometimes lead to really tortured resource allocation decisions — some of the big incentive and “who gets the compute” questions at OpenAI make that obvious. Who makes the call on what’s more important, especially given that it’d mean throttling research breakthroughs for the whole industry?
Commercial organizations have their benefits. I love the pace and the urgency. I love the feeling of being in the thick of things, trying to genuinely change the world, instead of writing a paper that maybe nobody will ever read. There’s a rush from the impact of your tools, and there’s a thrill that comes from winning the deal. As a bloodthirsty, competitive person, it’s my second favorite part of the job. But more than anything, the benefit of the commercial organization over the academic research lab is the money. If I want a relatively petit bourgeois life – to have a house in San Francisco or NYC, raise three children (like my own family), help my parents out, send children to good schools and music classes, pay for medical costs, and have a reasonable social life (kids’ birthday parties require gifts!) – it’s going to cost about $300–700k per year. I need tech money to make that happen. It’s a scarcity mindset borne from growing up with not that much money, but it means that the social mobility and purchasing power that tech affords are especially meaningful to me. As a 16-year-old, I picked a free university because it was what we could afford. But maybe a tech income could mean that my kids can go anywhere they want to.
I hope that tech explores more creative structures for work: more nonprofits, more research collectives, more tinkering studios. And I hope that it also explores more funding models: fewer VC-backed companies hell-bent on hypergrowth and more indie-hacker businesses that are relatively solvent from the get-go. But more than anything, I hope that we don’t doom public goods and inherently non-commercial efforts to commercial containers just because the alternatives of government, nonprofits, and academia seem terrible. We’ve tortured organizational structures enough in our effort to re-invent the classics. Let corporations be corporations.