OpenAI’s Boardroom Drama Could Mess Up Your Future

In June I had a conversation with chief scientist Ilya Sutskever at OpenAI’s headquarters, as I reported WIRED’s October cover story. Among the topics we discussed was the unusual structure of the company.

OpenAI began as a nonprofit research lab whose mission was to develop artificial intelligence on par with or beyond human level—termed artificial general intelligence, or AGI—in a safe way. The company discovered a promising path in large language models that generate strikingly fluid text, but developing and implementing those models required huge amounts of computing infrastructure and mountains of cash. This led OpenAI to create a commercial entity to attract outside investors, and it netted a major partner: Microsoft. Virtually everyone in the company worked for this new for-profit arm. But limits were placed on the company’s commercial life. The profit delivered to investors was to be capped—for the first backers at 100 times what they put in—after which OpenAI would revert to a pure nonprofit. The whole shebang was governed by the original nonprofit’s board, which answered only to the goals of the original mission and maybe God.

Will Knight is a senior writer for WIRED, covering artificial intelligence. He writes the Fast Forward newsletter, which explores how advances in AI and other emerging technologies are set to change our lives. He was previously a senior editor at MIT Technology Review.