Artificial intelligence (AI) has become a household name thanks to the rise of sophisticated AI chatbots and image generators. Yet, this field has a rich history that stretches back to the very beginnings of computing. As AI is poised to fundamentally alter how we live in the years to come, understanding its roots is crucial. Here’s a journey through 12 of the most significant milestones in the history of AI.
1950 — Alan Turing’s Seminal AI Paper
Renowned British computer scientist Alan Turing published a paper titled “Computing Machinery and Intelligence,” which was among the first detailed explorations of the question “Can machines think?” Rather than attempt to define “machine” and “think,” Turing proposed a game: a human judge would hold text conversations with both a machine and a person, trying to discern which was which. If the judge couldn’t reliably pick out the machine, the machine would win. While passing the test didn’t prove that a machine was “thinking,” the Turing Test, as it came to be known, has remained an enduring benchmark for AI progress.
1956 — The Dartmouth Workshop
AI as a scientific discipline traces its origins to the Dartmouth Summer Research Project on Artificial Intelligence, held at Dartmouth College in 1956. The participants were a who’s who of influential computer scientists, including John McCarthy, Marvin Minsky, and Claude Shannon. The workshop marked the first use of the term “artificial intelligence,” and the group spent nearly two months discussing how machines could simulate learning and intelligence. The meeting sparked serious AI research and laid the foundation for many breakthroughs in the decades that followed.
1966 — The First AI Chatbot
MIT researcher Joseph Weizenbaum unveiled the first AI chatbot, known as ELIZA. The software was rudimentary, matching keywords in the user’s typed messages and replying with canned responses. However, when Weizenbaum programmed ELIZA to act as a psychotherapist, people were reportedly struck by how convincing the conversations felt. This work fueled growing interest in natural language processing, particularly from the U.S. Defense Advanced Research Projects Agency (DARPA), which provided substantial funding for early AI research.
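To give a flavor of ELIZA’s keyword-and-template approach, here is a rough sketch in Python. It is not Weizenbaum’s original program, just a loose approximation of the idea; the keywords and responses below are invented for illustration.

```python
import re

# Toy ELIZA-style rules: a keyword pattern and a canned response template.
# These rules are illustrative only, not Weizenbaum's original script.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (mother|father)\b", re.IGNORECASE), "Tell me more about your {0}."),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
]

def eliza_reply(user_input: str) -> str:
    """Return the first matching canned response, or a generic fallback."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Please, go on."

print(eliza_reply("I feel anxious about work"))   # -> "Why do you feel anxious about work?"
print(eliza_reply("My mother called yesterday"))  # -> "Tell me more about your mother."
```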
1974-1980 — The First “AI Winter”
Early enthusiasm for AI waned quickly. The 1950s and 1960s were a fertile period for the field, but in their excitement, leading experts made bold claims about the capabilities of machines in the near future. The technology’s failure to meet these expectations led to growing discontent. A highly critical report on AI by British mathematician James Lighthill prompted the U.K. government to cut almost all funding for AI research. Around this time, DARPA also drastically reduced its funding, leading to what became known as the first “AI winter.”
1980 — A Surge of “Expert Systems”
Despite disillusionment in many quarters, AI research continued. By the start of the 1980s, the technology was attracting attention from the private sector. In 1980, researchers at Carnegie Mellon University built an AI system called R1 for the Digital Equipment Corporation, where it helped configure orders for new computer systems. The program was an “expert system,” an approach to AI that researchers had been experimenting with since the 1960s. These systems used logical rules to reason over vast databases of specialized knowledge. R1 saved the company millions of dollars annually, sparking a boom in industrial deployments of expert systems.
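Stripped to its essentials, an expert system of this kind applies if-then rules to known facts until no new conclusions can be drawn, a process known as forward chaining. The Python sketch below illustrates that idea; the rules and facts are made up and are far simpler than R1’s real knowledge base, which contained thousands of rules.

```python
# A minimal forward-chaining rule engine, in the spirit of 1980s expert
# systems. The rules and facts below are invented for illustration only.

# Each rule: if all the "if" facts are known, assert the "then" fact.
RULES = [
    ({"order includes disk drive", "no disk controller specified"},
     "add a disk controller to the order"),
    ({"add a disk controller to the order", "cabinet has no free slot"},
     "add an expansion cabinet"),
]

def forward_chain(facts: set[str]) -> set[str]:
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

initial = {"order includes disk drive", "no disk controller specified",
           "cabinet has no free slot"}
print(forward_chain(initial) - initial)
# -> {'add a disk controller to the order', 'add an expansion cabinet'}
```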
1986 — The Foundations of Deep Learning
Most research up to this point had focused on “symbolic” AI, which relied on handcrafted logic and knowledge databases. However, since the inception of the field, there had been a parallel stream of research into “connectionist” approaches inspired by the brain. This research continued quietly in the background and finally came to the fore in the 1980s. Instead of programming systems by hand, these techniques involved training “artificial neural networks” to learn rules from data. In theory, this would lead to more flexible AI that wasn’t constrained by its makers’ preconceptions, but training neural networks proved challenging. In 1986, Geoffrey Hinton, later dubbed one of the “godfathers of deep learning,” co-authored a paper with David Rumelhart and Ronald Williams that popularized “backpropagation,” the training technique underpinning most AI systems today.
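At its core, backpropagation is the chain rule of calculus applied layer by layer: the network’s error is propagated backward to work out how much each weight contributed to it, and the weights are nudged accordingly. The NumPy sketch below trains a tiny two-layer network on the classic XOR problem; it is a minimal illustration of the idea, not the 1986 paper’s setup.

```python
import numpy as np

# Tiny two-layer network trained with backpropagation on XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: chain rule, layer by layer
    d_out = (out - y) * out * (1 - out)   # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error pushed back to the hidden layer

    # Gradient-descent weight updates
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3).ravel())  # approaches [0, 1, 1, 0] as training proceeds
```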
1987-1993 — The Second AI Winter
Following their experiences in the 1970s, Minsky and fellow AI researcher Roger Schank warned that AI hype had reached unsustainable levels and that the field was at risk of another setback. They coined the term “AI winter” during a panel discussion at the 1984 meeting of the American Association for Artificial Intelligence (now the Association for the Advancement of Artificial Intelligence). Their warning proved accurate. By the late 1980s, the limitations of expert systems and their specialized AI hardware became apparent. Industry spending on AI plummeted, and most fledgling AI companies went bankrupt.
1997 — Deep Blue’s Victory Over Garry Kasparov
Despite repeated booms and busts, AI research made steady progress during the 1990s, largely out of the public eye. This changed in 1997, when Deep Blue, a chess-playing system built by IBM that combined massive brute-force search with hand-coded chess knowledge, defeated chess world champion Garry Kasparov in a six-game match. Proficiency at the complex game of chess had long been regarded by AI researchers as a key marker of progress, and defeating the world’s best human player was a milestone that made headlines around the world.
2012 — AlexNet Ushers in the Deep Learning Era
Despite a wealth of academic work, neural networks were still widely deemed impractical for real-world applications. To be useful, they needed many layers of neurons, but training large networks on conventional computer hardware was painfully slow. In 2012, Alex Krizhevsky, a doctoral student of Hinton’s, won the ImageNet computer vision competition by a wide margin with a deep learning model called AlexNet. The secret was the use of specialized chips called graphics processing units (GPUs), which could train and run much deeper networks efficiently. This paved the way for the deep learning revolution that has driven most AI advances ever since.
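The point about GPUs is that deep networks boil down to enormous numbers of matrix multiplications that can be computed in parallel, which graphics chips handle far faster than general-purpose processors. The sketch below is a modern illustration of that idea using PyTorch (a library that postdates AlexNet, which was written directly in CUDA); the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

# A deliberately deep stack of layers; the sizes are arbitrary and chosen
# only to illustrate why many large matrix multiplications benefit from a GPU.
model = nn.Sequential(
    *[layer for _ in range(8) for layer in (nn.Linear(4096, 4096), nn.ReLU())],
    nn.Linear(4096, 1000),
)

# Move the model and a batch of dummy inputs to the GPU if one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
batch = torch.randn(256, 4096, device=device)

with torch.no_grad():
    logits = model(batch)   # one forward pass, massively parallel on a GPU
print(logits.shape)         # torch.Size([256, 1000])
```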
2016 — AlphaGo’s Defeat of Lee Sedol
While AI had already surpassed humans at chess, the far more complex Chinese board game Go remained a challenge. But in 2016, Google DeepMind’s AlphaGo beat Lee Sedol, one of the world’s greatest Go players, in a five-game series. Experts had expected such an achievement to be years away, so the result fueled excitement about AI’s progress. This was partly due to the general-purpose nature of the algorithms underlying AlphaGo, which relied on an approach called “reinforcement learning,” in which AI systems learn through trial and error. DeepMind later expanded and refined this approach to create AlphaZero, which taught itself to play chess and shogi as well as Go.
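Reinforcement learning can be demonstrated in miniature. The sketch below is not AlphaGo (which paired deep neural networks with Monte Carlo tree search) but a bare-bones tabular Q-learning agent that learns, by trial and error, to walk to the end of a short corridor; the environment is invented for illustration.

```python
import random

# Tabular Q-learning on a toy corridor: states 0..4, actions left/right,
# reward only for reaching the rightmost state.
N_STATES, ACTIONS = 5, (-1, +1)          # -1 = step left, +1 = step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally, otherwise exploit the best known action
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy is typically to step right from every state.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])  # [1, 1, 1, 1]
```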
2017 — The Invention of the Transformer Architecture
Despite significant progress in computer vision and game playing, deep learning was advancing more slowly on language tasks. Then, in 2017, Google researchers introduced a novel neural network architecture called the “transformer,” which used a mechanism known as “attention” to process vast amounts of data and draw connections between distant data points. This proved particularly useful for the complex task of language modeling and enabled the creation of AIs that could handle a wide variety of tasks at once, including translation, text generation, and document summarization. All of today’s leading AI models rely on this architecture, including image generators like OpenAI’s DALL-E, as well as Google DeepMind’s revolutionary protein-folding model AlphaFold 2.
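The transformer’s defining ingredient is “self-attention”: every position in a sequence computes a weighted view of every other position, which is what lets the model connect distant data points. The NumPy sketch below shows scaled dot-product attention, the core operation of the architecture, stripped of the multiple heads, stacked layers, and learned projections that a full transformer adds.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: softmax(Q K^T / sqrt(d)) V.

    Each row of the output is a weighted average of the value vectors V,
    with weights set by how well that query matches every key, including
    keys far away in the sequence.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                              # similarity of every query to every key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V, weights

# Toy example: a sequence of 4 tokens, each represented by an 8-dim vector.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
output, attn = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention: Q = K = V
print(attn.round(2))   # 4x4 matrix: how much each token attends to every other token
```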
2022 — Launch of ChatGPT
On November 30, 2022, OpenAI released a chatbot powered by its GPT-3.5 large language model. Known as “ChatGPT,” the tool became a global sensation, attracting over a million users in less than a week and an estimated 100 million within two months. It was the first time much of the public had interacted with a state-of-the-art AI model, and many were astounded. The service is credited with triggering an AI boom, drawing billions of dollars of investment into the field and spawning numerous rivals from major tech companies and startups. It has also sparked growing unease about the pace of AI development, prompting an open letter from prominent tech figures calling for a pause on training the most powerful AI systems to allow time to assess the implications of the technology.