This month, out of the blue, the Biden administration sanctioned China’s semiconductor industry. It banned the sale of advanced semiconductors and chipmaking equipment to China, and barred US citizens from working with Chinese chip companies.

Gregory Allen at the Center for Strategic and International Studies says the export controls have a specific purpose: they’re designed to curtail China’s access to artificial intelligence (AI). AI relies on specific types of computer chips, called GPUs, that can be harnessed together by the thousand. These, says Allen, are the chips Biden specifically targeted:

“The chip must be both a very powerful parallel processor (300 tera operations per second or higher) and have a very fast interconnect speed (600 gigabytes per second or higher). By only targeting chips with very high interconnect speeds, the White House is attempting to limit the controls to chips that are designed to be networked together in the data centers or supercomputing facilities that train and run large AI models.”
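To make that two-part test concrete, here’s a minimal sketch in Python. The thresholds come from Allen’s description above; the helper function and the example chip figures are purely illustrative, not official specifications.

```python
# Sketch of the two-part test described above. Thresholds come from Allen's
# description of the controls; the example chip figures are illustrative only.

PROCESSING_THRESHOLD_TOPS = 300    # tera operations per second
INTERCONNECT_THRESHOLD_GBS = 600   # gigabytes per second

def is_controlled(processing_tops: float, interconnect_gbs: float) -> bool:
    """A chip is caught only if it clears BOTH thresholds."""
    return (processing_tops >= PROCESSING_THRESHOLD_TOPS
            and interconnect_gbs >= INTERCONNECT_THRESHOLD_GBS)

# A data-centre AI accelerator: huge throughput and a fast interconnect.
print(is_controlled(processing_tops=1000, interconnect_gbs=900))   # True

# A consumer graphics card: plenty of compute, but a slow interconnect.
print(is_controlled(processing_tops=320, interconnect_gbs=60))     # False
```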

Why is the US doing this all of a sudden? While our attention was on the war in Ukraine this year, something dramatic was happening in the world of AI. In the space of a few months, there were multiple breakthroughs that had been thought to be years away.

The breakthroughs are one thing, but just as surprising are the teams that made them. Until now, AI has been dominated by giants like Alphabet and Meta. Those companies spent billions on the best AI researchers and cutting-edge hardware, and trained their models on the world’s richest data sets.

But what happened this summer is that a relatively small organisation, OpenAI, made big breakthroughs. It was quickly followed by other small AI organisations, Midjourney and Stability AI (the maker of Stable Diffusion). The old idea, that billions of dollars and proprietary datasets are needed to build an AI, has been disproven.

What are the breakthroughs? The first was in language processing. OpenAI’s GPT-3 is one example. It can be used for all sorts of language tasks. Here’s a worrying snippet of a conversation with it (GPT-3’s answers are in green).
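For a sense of how developers actually use it, here’s a minimal sketch of querying GPT-3 through OpenAI’s Python library as it worked at the time; the API key, prompt and model name are placeholders rather than anything I ran for this piece.

```python
# Minimal sketch of querying GPT-3 via OpenAI's Python library
# (the Completion API available in 2022). Prompt and model are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: set your own key

response = openai.Completion.create(
    model="text-davinci-002",    # one of the GPT-3 models offered at the time
    prompt="Explain, in one sentence, why GPUs matter for training AI models.",
    max_tokens=60,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```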

Next, AI researchers started applying the architecture behind these language models to other tasks. They learned that models built this way had a useful feature: they didn’t require perfectly organised, pristine datasets in order to work. They could be trained on messy data. That is how start-ups like Stability AI and OpenAI, without access to proprietary datasets, were able to compete.

The next big breakthrough was in AI-generated images. The new models can come up with whatever image you ask them for. I asked OpenAI’s Dall-E 2 for a picture of a robotic hand drawing another robotic hand. Here’s what it gave me:
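For those who want to try it themselves, here’s a rough sketch of making a similar request through OpenAI’s Python library; the prompt mirrors the one above, while the image size and response handling are assumptions about the API rather than the exact steps I took.

```python
# Rough sketch of requesting an image from DALL-E via OpenAI's Python library.
# The prompt mirrors the one above; size and response handling are assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: set your own key

response = openai.Image.create(
    prompt="a robotic hand drawing another robotic hand",
    n=1,
    size="1024x1024",
)

# The API returns a temporary URL for each generated image.
print(response["data"][0]["url"])
```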

And, unbeknown to the judges, an AI-generated artwork won first prize at the Colorado State Fair this year:

Limits of non-profits

OpenAI is one of the AI start-ups leading the way. Sam Altman is its co-founder and CEO. After selling his first company for $43 million at the age of 25, he made his bones as President of the famous start-up accelerator Y Combinator.

He founded OpenAI as a non-profit in 2015. Here’s how he introduced it:

“Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.”

In 2019, Altman and OpenAI moved away from the pure non-profit model, creating a capped-profit subsidiary and inviting in private investors (albeit with returns limited to 100x their investment). All value above that cap is retained by the OpenAI non-profit: an investor who put in $10 million, for example, could receive at most $1 billion, with anything beyond that flowing back to the non-profit. Here’s what Altman said at the time:

“We’ve experienced firsthand that the most dramatic AI systems use the most computational power in addition to algorithmic innovations, and decided to scale much faster than we’d planned when starting OpenAI. We’ll need to invest billions of dollars in upcoming years into large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers…”

Non-profits are good and noble, but I think the story of OpenAI shows their limits. 

Non-profits are a good way of coordinating a group of people who are inspired by a mission. Profits, once introduced into the equation, have a way of dominating all other considerations. Removing profits lets the organisation focus solely on its mission. 

But the number of people genuinely inspired by any given mission is necessarily going to be pretty small. That means non-profits are a good way of achieving relatively simple goals that a small team can deliver, or goals that are purely humanitarian.

But non-profits don’t work so well for complex goals. One reason is that complex goals require input from lots of people, and whatever a non-profit’s mission, it’s not likely to inspire large numbers of them. Money is a helpful way to get lots of people on board. Take OpenAI: it needs well-paid AI researchers, along with access to billions of dollars’ worth of computing power.

Another reason is that non-profits are harder to manage. A bottom line is a useful tool for organising big groups of people. It is a simple, easily communicable goal: “do whatever makes that number bigger”. In the non-profits I’ve seen, there is a lot of debate and internal politicking over exactly how the goal should be defined, and further debate over how to achieve it.

My point is not to trash non-profits, but to say that they don’t scale well. Above a certain size, and a certain complexity, it’s difficult to manage a big organisation without the clarity, and lowest-common-denominator incentive, that money provides. 

Private companies aren’t just a tool for making shareholders rich. As OpenAI shows, they’re a tool for coordinating large numbers of strangers to do hard things on behalf of society. 

P.S. There are other models for doing hard things. The Manhattan Project, the Space Race and the Five-Year Plans show it’s possible for governments to get things done without the need for private investors and profits. But I would contend that wartime projects which gobble up a big slice of a country’s GDP are not good comparisons.