This article is part of a Fortune Special Report on Artificial Intelligence.
Last July, Satya Nadella, the CEO of Microsoft, one of the world’s most valuable companies, with a market capitalization hovering above $1 trillion, filmed a short video with Sam Altman, the 34-year-old entrepreneur best known for his stint running Silicon Valley’s preeminent startup accelerator, Y Combinator.
The setup for the three-minute segment, which Nadella posted to his LinkedIn profile, bore an uncanny resemblance to an episode of the online video series Between Two Ferns, which features comedian Zach Galifianakis conducting intentionally awkward interviews with A-list guests like Barack Obama and actress Charlize Theron. In this version, Nadella and Altman were positioned in modest-size chairs against a black background. Between them was a small coffee table with two glasses of water and a tiny plant perched on top. The 52-year-old Nadella played the role of host, posing questions to Altman while holding a notecard in his hands.
But Nadella wasn’t going for laughs. Rather, the purpose of the video was to discuss a major milestone for both executives: Microsoft’s $1 billion investment in the San Francisco–based startup Altman currently runs, OpenAI.
“So, our mission is to develop artificial general intelligence, broad A.I. systems that can do a lot of tasks at superhuman level,” Altman explained to Nadella. “I think that this will be the most important technological development in human history. When we have computers that can really think and learn, that’s going to be transformative.”
With Microsoft’s headline-grabbing investment in OpenAI, Nadella signaled his company’s commitment to that mission. And from a strategic point of view, he officially entered Microsoft into a technological arms race against Alphabet, Google’s parent company, and a handful of others competing to develop technology that could radically reshape the business world. The outcome of the race could well determine whether Microsoft, Alphabet, or someone else is the world’s most valuable company in 20 years’ time.
Nadella’s decision to invest in OpenAI was also a subtle acknowledgment that his company’s own internal efforts to stay at the bleeding edge of A.I. technology were falling short. Microsoft needed to catch up.
“This is about capturing the next great pool of wealth in technology,” says Craig Le Clair, an analyst at Forrester Research, the tech analytics firm. He compares A.I. to electricity in its potential impact. Sundar Pichai, Nadella’s rival CEO at Alphabet, has gone further, calling A.I. the most important project humanity would ever work on, “more profound than fire.”
Imagine being able to monetize the invention of fire. Now imagine missing out on the chance to monetize fire.
OpenAI was founded in 2015 by, among others, Altman and Elon Musk, the billionaire founder of Tesla. While OpenAI’s goal is to develop artificial general intelligence (AGI), the company says it is dedicated to ensuring the technology is developed in a way that “benefits all of humanity.” For that reason, OpenAI was initially set up as a nonprofit corporation. Last year, however, the company established a for-profit arm, the entity in which Microsoft invested. The terms of the deal make Microsoft the preferred partner for OpenAI to commercialize any technology it develops and chooses to license on the path to AGI.
The term “general” in the name artificial general intelligence is meant to differentiate it from more prosaic “narrow” artificial intelligence. It is narrow A.I. that in recent years has brought us breakthrough tech such as Alexa and Siri; the ability to unlock your iPhone with your face; and Facebook’s auto-tagging of your friends in the photos you upload. Narrow A.I. systems also route Amazon orders to your home and decide which agent handles your customer service call to your bank.
The same underlying breakthroughs in algorithms, data science, and computing are responsible for much of the current excitement about both types of A.I. But the two are distinct in their capabilities, and at the moment only narrow A.I. actually exists. AGI is a purely theoretical technology.
Narrow A.I. is often compared to an idiot savant—it’s good only at a specific skill, like recognizing speech or identifying faces, and today it requires many thousands or millions of examples to learn that skill well. Even so, these systems are incredibly valuable—and are only getting more so. The McKinsey Global Institute estimates that the application of narrow A.I. will add some $13 trillion to the global economy by 2030, an amount that it says would make the technology more impactful than the steam engine was in the 1800s.
But AGI would be many times more valuable still. AGI is the A.I. of Hollywood and sci-fi paperbacks. If it ever happens, it would make all the technological wonders of today’s narrow A.I. look as quaint as Stone Age ax heads. AGI promises a single piece of software capable of learning almost any task at human or superhuman level—a system that can master new skills quickly, perhaps by watching a single demonstration or just by reading, with no training at all, and maybe entirely at its own initiative.
Imagine that rather than assign a 15-person task force to decide where your company should build a new factory, you simply ask your company’s AGI. The system would immediately begin researching decision factors: proximity to suppliers and customers, transportation links, land-acquisition costs, local labor markets, tax incentives, etc. It would make a recommendation and produce a report explaining its reasoning. And it would do all this in minutes, not the weeks or months it would take the human task force. Then, if management agreed, it would instantly generate all the relevant work orders to start the process.
It is impossible to overestimate how valuable such a system would be to Microsoft or any other company that developed it. (It might also pose an existential threat to highly paid advisers, such as the McKinseys of the world.) OpenAI has capped the return its initial financial backers can earn at 100 times their investment, with the rest of the money flowing to the organization’s nonprofit. (Microsoft and OpenAI won’t disclose the exact cap Microsoft has agreed to.) Of course, as with nuclear power, such superintelligence might also be dangerous, as Musk himself has famously warned.
AGI has long been fodder for novelists, filmmakers, philosophers, and futurists. It has been the implicit, and at times explicit, goal of an entire branch of computer science, at least since the 1950s. But AGI was always a research project. It was never a business plan—until now.
Big Tech has begun spending big bucks in its quest for AGI. Microsoft and Alphabet each sponsor not one but two separate R&D entities largely dedicated to developing advanced A.I. Facebook has invested in a blue-sky A.I. lab. Chinese search giant Baidu has one too. And smaller labs exist at Uber, Salesforce, and others. Investment in AGI is forecast to reach $50 billion by 2023, according to a report from Seattle-based research firm Mind Commerce.
This investment has come despite the view of many computer scientists that AGI is still, at best, decades away. But for the world’s biggest technology companies, AGI is a race they can’t afford to lose—even if it turns out no one ever wins. “It’s about enhancing the perception of being a technology leader and innovator and being at the forefront,” says David Smith, an analyst for emerging technologies at research firm Gartner. That perception helps sell cloud-computing services and recruit engineering talent. But AGI isn’t merely about playing defense—research toward AGI feeds progress in narrow A.I. “The thing about A.I. is when you work to push forward the research, the downstream applications are incredible,” says Mark Cuban, the billionaire tech entrepreneur and owner of the Dallas Mavericks, who has invested in a handful of A.I. startups.
On a summer’s evening in 2015, Altman, who was running Y Combinator at the time, invited Musk to dinner at the Rosewood Sand Hill hotel, in the heart of Silicon Valley. The hotel, a luxurious stone ranch offering views of the foothills of the Santa Cruz Mountains, was a comfortable enough spot from which to contemplate Armageddon—and how to potentially stop it.
Musk’s views about the dangers of AGI had been informed by his involvement as an early investor in an unusual London-based startup called DeepMind. Founded in 2010, the company is led by Demis Hassabis, a former chess prodigy turned video game entrepreneur, who has an undergraduate degree in computer science and a Ph.D. in cognitive neuroscience. His intuition is that by drawing inspiration from the way the human brain works, DeepMind can achieve AGI. DeepMind’s mission statement is so audacious it borders on the ridiculous: “to solve intelligence, and then use that to solve everything else.”
It began to look less ridiculous, though, in January 2013, when DeepMind stunned computer scientists by debuting an algorithm that had taught itself to play seven different classic Atari video games, such as Pong, Space Invaders, and Breakout, achieving superhuman performance in three of them. The breakthrough resounded through Silicon Valley like the crack of a starter’s pistol: The race for AGI was on, and the Valley’s digital giants were desperate to get in on it.
In 2014, Google, which already had its own advanced A.I. research lab called Google Brain, acquired DeepMind for a reported $650 million, a massive sum for a company that didn’t have a single product or a dollar of revenue. Meanwhile, Facebook, which had also been in the hunt to buy DeepMind, established its own advanced artificial intelligence research lab headed by Yann LeCun, one of the field’s top researchers.
DeepMind’s acquisition alarmed Musk, even though he made money from it. Shortly after the deal was announced, he wrote a blog post warning, “The pace of progress in artificial intelligence (I’m not referring to narrow A.I.) is incredibly fast. Unless you have direct exposure to groups like DeepMind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most … This is not a case of crying wolf.”
Musk is friends with Google cofounder Larry Page. But he told journalists he feared Page’s company might succeed in creating superhuman intelligence, and then lose control of it. Even if that didn’t happen, Musk said he worried about a single corporation controlling such a powerful technology.
At dinner, Altman introduced Musk to a 29-year-old computer scientist named Ilya Sutskever, who was at the time working at Google Brain. Despite his age, Sutskever was already a legend among A.I. researchers: In 2012, A.I. software he helped create achieved an unprecedented score on ImageNet, a test that assesses an A.I.’s ability to identify pictures of 1,000 different types of objects. Also at the dinner was Greg Brockman, a 26-year-old coding whiz who had recently left his job as the chief technology officer at the payments processing unicorn Stripe. Together, they were hoping to secure Musk’s backing for a new kind of A.I. organization, one dedicated to open research and free from control by any single corporation.
The dinner led to the formation later that year of OpenAI. Musk joined its board and was listed as a cofounder. A set of initial donors—including, among others, Musk, Altman, Brockman, and Musk’s fellow PayPal alums billionaire Peter Thiel and LinkedIn cofounder Reid Hoffman—pledged $1 billion to support the research group. Sutskever came on board as OpenAI’s chief scientist.
Shortly after he was appointed CEO of Microsoft in 2014, Nadella moved to reposition his company around artificial intelligence. Nadella declared that all of Microsoft’s products and services would be “infused with A.I.” and called A.I. one of three fundamental technologies that will shape the future (the other two being “mixed reality” and quantum computing). The CEO saw enormous potential for A.I. across Microsoft, including in the office productivity software and cloud-computing services that, together, made up two-thirds of Microsoft’s revenues. It was not ground Nadella wanted to cede to Google or other tech rivals.
Microsoft had a long-standing research organization, with labs around the world, dedicated to state-of-the-art technologies, from virtual reality to cybersecurity, and yes, to A.I. But, as a company, Microsoft had mostly been interested in “augmenting human intelligence”—in other words, narrow A.I. Microsoft’s labs hadn’t produced the kind of flashy breakthroughs that DeepMind and Google Brain had. The company sometimes gave the impression AGI was a quixotic quest not worth pursuing.
But sitting out the AGI race presented Microsoft with a problem. The buzz around a series of breakthroughs at DeepMind and Google Brain created a perception that Alphabet was also leading the pack in narrow A.I. applications—giving Alphabet an edge in hiring the best researchers out of academia and potentially in selling cloud services too. This perception was further cemented in 2016 when DeepMind’s A.I. algorithm AlphaGo defeated one of the world’s best players in the ancient strategy game Go. Most A.I. researchers had thought it would be at least another decade before a system could conceivably equal humans at the game, which has exponentially more possible move combinations than chess. (Or Pong.) “The acquisition of DeepMind was the best marketing spend Google ever made,” says Chris Nicholson, the cofounder and CEO of Pathmind, a San Francisco company that helps businesses implement A.I.
Nadella had to do something to boost Microsoft’s A.I. bona fides. In 2016 he restructured the company’s research efforts, establishing a separate organization focused solely on A.I. research and applications of A.I. in Microsoft products like its Bing search engine and Cortana digital assistant. The CEO also began convening a weekly meeting of his top executives to discuss progress on the company’s A.I.-related projects. But those were incremental changes. Microsoft still lacked an A.I. moonshot.
Operating out of a three-story, gray-sided, pre-earthquake building with loft-like interiors in San Francisco’s Mission District, OpenAI now employs 120 researchers. In the past year, the group has made a series of announcements around “grand challenges,” designed to showcase its own progress toward AGI—and raise its public profile.
It created a team of five A.I. software bots capable of playing together in the video game Dota 2, which is frequently used in professional e-sports tournaments. OpenAI’s five bots defeated the reigning champion human team in a best-of-three demo match in San Francisco in April.
Separately, OpenAI revealed a language algorithm capable of taking a few human-written sentences and then riffing on them, generating several paragraphs of coherent prose—a significant leap forward in the field of natural-language processing.
Finally, in October, the company debuted a human-like robotic hand that could unscramble a Rubik’s cube. Roboticists have struggled for years to mimic the dexterity of a human hand. OpenAI’s hand learned the task well enough that, more often than not, it could unscramble the cube without dropping it.
The three announcements reveal a lot about how OpenAI thinks it can get to AGI—and also why the organization has become a lightning rod for criticism.
But first, some background: The current push for AGI, along with the rest of the current A.I. boom, is built on neural networks—a kind of software loosely based on the human brain. Arrayed in multiple layers, these artificial neurons convert some raw input, such as the pixels in an image, to some output, such as the label “cat.” Because of the many intermediate layers of artificial neurons these networks rely on, they are said to be “deep,” and using them to perform a task is called deep learning.
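The layered arithmetic behind deep learning can be sketched in a few lines of Python. The toy example below is an illustration only: the weights are invented for this sketch (nothing here comes from OpenAI or Microsoft), and real networks learn millions of weights from data across many more layers. It passes a four-pixel input through a hidden layer of three artificial neurons and reads the single output neuron's activation as the network's confidence that the image shows a cat:

```python
import math

def neuron(inputs, weights, bias):
    # One artificial neuron: a weighted sum of its inputs,
    # squashed into the range 0..1 by the sigmoid function.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    # A layer is just a list of neurons, each reading the same inputs.
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

pixels = [0.9, 0.1, 0.8, 0.3]  # raw input: brightness of four pixels

# Hidden layer: three neurons, each looking at all four pixels.
# These weights are made up; a trained network would have learned them.
hidden = layer(pixels, [[0.5, -0.2, 0.8, 0.1],
                        [-0.3, 0.9, 0.4, -0.5],
                        [0.2, 0.2, -0.7, 0.6]], [0.0, 0.1, -0.1])

# Output layer: one neuron whose activation we read as P("cat").
p_cat = layer(hidden, [[1.2, -0.6, 0.9]], [-0.4])[0]
print(round(p_cat, 3))
```

Stacking more hidden layers between input and output is what makes such a network “deep,” and training consists of nudging the weights until the output matches the labels in the training examples.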
Current research into AGI largely divides into two camps: those who believe deep learning alone will be sufficient to achieve AGI, and those who think it will have to be combined with something else, such as logical rules. Within the deep learning camp, there are further divisions: one tribe emphasizes algorithmic innovation, while the other puts more focus on the sheer size of the neural networks they build and the amount of data they are fed. OpenAI is firmly in the “size matters” camp.
OpenAI’s signature achievements have all involved huge models, consuming massive amounts of computing power. Each of its five Dota 2 bots, for example, was controlled by an algorithm that took in 159 million different parameters, the variables a neural network tunes as it learns. Over their 10 months of training, the bots racked up the equivalent of 45,000 human-years’ worth of Dota 2–playing experience.
Altman told Nadella during their video chat in July that “increasing the size of the largest models we can train keeps letting us solve seemingly impossible tasks.” Altman, Brockman, and Sutskever have all said building ever-larger neural networks is an important avenue to explore for making progress toward AGI.
Few researchers outside the company agree with OpenAI’s thesis, however. Gary Marcus, an emeritus professor of psychology and neuroscience at New York University who is now CEO of startup Robust AI, says there is no evidence larger neural networks will suddenly begin to exhibit human-like skills, such as commonsense reasoning or conceptual thinking. “This is to ascribe to deep learning the quality of magic,” he says. He says OpenAI has failed to show its systems can build representations of the world. “If you can’t do that, you’re not going to get to general intelligence,” he says. Ben Recht, a computer scientist at the University of California at Berkeley, is even more scathing in his assessment of OpenAI’s approach. “Have these guys never heard of the law of diminishing returns?” he says.
Another major criticism of OpenAI is that, desperate for attention, it has unreasonably hyped its accomplishments. When it announced its language algorithm, GPT-2, OpenAI told journalists it was withholding publication of the most powerful version of the software out of concern it could be abused to create fake news and disinformation campaigns. But a number of computer scientists accused OpenAI of exaggerating the risk in order to garner publicity. (Nine months later, the company did, in fact, release the full-scale model, saying it had seen little evidence that the less powerful versions it had made public had been abused.)
Zachary Lipton, a professor at Carnegie Mellon University who has become a vocal critic of OpenAI, accuses the company of doing research that is largely similar to others’ in the field but engaging in “aggressive press manipulation” in order to raise money. “They need to maintain the illusion at all times that something world historic is taking place and that they are at the center of it,” he says. This results in marketing that Lipton says is “unethical and irresponsible.”
In an emailed response, OpenAI spokesperson Ashley Pilipiszyn denied that the company engages in manipulative marketing practices and said the firm should be judged “by the impact of our work,” and “not by what we (or anyone else) says.”
There’s one thing about OpenAI’s approach that’s not in dispute: It is expensive. Bigger models require more computing power—which OpenAI has to lease from a cloud service provider. Top A.I. researchers, meanwhile, command six- and sometimes seven-figure salaries. While OpenAI hasn’t revealed its burn rate, its rival DeepMind, which now employs about 900 people, racked up $746 million in administrative expenses, including staff and computing costs, in 2018 alone, according to U.K. financial filings.
“The amount of money we needed to be successful in the mission is much more gigantic than I originally thought,” Altman told Wired magazine last year. Complicating matters, OpenAI lost one of its biggest supporters: Musk. The billionaire stepped down from OpenAI’s board in early 2018, citing the demands of running Tesla and SpaceX and conflicts of interest as Tesla increasingly moved into A.I. and sought to recruit the same researchers as OpenAI.
In need of more cash, OpenAI’s board decided to radically transform its structure: In March, Altman announced the creation of OpenAI’s for-profit arm. The new structure allows OpenAI to take on venture investment. Crucially, it also gives the group the ability to issue stock options to attract and retain top computer scientists. Reid Hoffman’s charitable foundation and Khosla Ventures, a prominent Silicon Valley venture capital firm, became the for-profit’s first investors, injecting unspecified amounts. Then, in July, Microsoft put in its $1 billion.
Some of that cash will come back to Microsoft as OpenAI buys data center time from the company’s cloud-computing arm, Azure, which it has agreed to use exclusively. What the investment does not do is give Microsoft ownership rights to AGI—if OpenAI is successful in developing it. That will remain the property of the nonprofit part of OpenAI, which has also retained voting control of its for-profit wing. (If that makes you wonder if Microsoft’s investment is really about AGI after all, you aren’t alone.)
Both Microsoft and OpenAI declined to allow their executives to be interviewed about their partnership for this story. But a look at how Google has benefited so far from Google Brain and DeepMind provides a glimpse of what Microsoft stands to gain, even if it never gets its hands on AGI.
Algorithms that Brain has developed have helped improve Google’s search engine, Google Translate, Google Maps, and its cloud-computing infrastructure. “Those kinds of things are really valuable for the company,” says Jeff Dean, the senior software engineer who helped found Google Brain and now heads all A.I. research at Google.
DeepMind, meanwhile, has an entire group called DeepMind for Google (or DMG for short), responsible for collaborating with its sister company and other parts of Alphabet. “We don’t choose product problems and then work out how to fix them,” says Koray Kavukcuoglu, DeepMind vice president of research. But if DeepMind’s research happens to be useful to a problem another Alphabet company is working on, DMG will often collaborate on a solution. In 2016, DeepMind said it had helped Google figure out a better way to manage the cooling systems in the company’s data centers, reducing its cooling bill by 40%. It later used a version of this algorithm to help extend the battery life of Android phones. In 2017, a DeepMind algorithm became the engine behind the computer-generated voice of Google’s digital assistant.
The biggest question about the corporate race for AGI is whether the large technology companies funding it actually believe—or even care—if creating human-like or superhuman intelligence is possible. “Within Silicon Valley, AGI is a kind of religious argument,” says Pathmind’s Nicholson. “You’re either a believer, or you’re not.”
Josh Tenenbaum, a professor of computational cognitive science at MIT, is a believer: He runs a lab focused on reverse-engineering human intelligence and building more human-like A.I. But, like many in the field, Tenenbaum thinks AGI is “very far away.” And he is among those who think the big corporations competing in the race for AGI are not being fully transparent about their motivations.
While there are certainly researchers at DeepMind, Google Brain, and OpenAI who genuinely want to achieve AGI, companies such as Alphabet or Microsoft, in Tenenbaum’s view, care mostly about advances in narrow A.I. They’re focused on getting better tools for building a range of narrow systems—such as algorithms to spot credit card fraud or recognize faces or parse legal documents. These narrow systems can be used internally as well as sold to cloud-computing customers.
Microsoft’s partnership with OpenAI certainly has the potential to yield such innovations. The two companies have committed to helping Azure build better supercomputing capabilities, including the development of new chips designed to make the training and running of A.I. systems more efficient.
Whatever Nadella’s true goals may be for the OpenAI deal, by making a 10-figure investment the CEO has put down a marker in the world of A.I. research. His company has joined the race for AGI. Even if Microsoft doesn’t win, it may be the best $1 billion investment he ever made.
Shall we play a game?
Games have long been used as mile markers in the evolution of A.I. because they present intellectual challenges in a simplified setting. Here are a few notable wins for the computers:
2015: Atari games
Expanding on its previous work with the game system, DeepMind demonstrates an A.I. capable of mastering 49 Atari games—from Pong to Space Invaders—many to superhuman ability with just a few hours of training.
2017: Poker
Libratus, a poker-playing A.I. created by researchers at Carnegie Mellon University, defeats four pros in no-limit Texas Hold’em.
2019: StarCraft II
DeepMind’s AlphaStar A.I. ranks above 99.8% of the world’s players in the complex, real-time strategy video game StarCraft II, showing a mastery of both long-term strategy and arcade-style tactical battles.
A version of this article appears in the February 2020 issue of Fortune with the headline “The Quest for Human-Level A.I.”