The history of AI has been an exciting one, filled with important turning points and discoveries that have influenced modern technology. The idea of machine intelligence was first presented in Alan Turing’s seminal work in the 1950s, marking the beginning of AI history. John McCarthy’s coining of the phrase “artificial intelligence” in 1956 came next, opening the door for further research and advancement. Over the years, the field has seen both spurts of intensive funding and development and periods of decline known as AI winters.
History of AI
The idea of artificial intelligence started to gain traction in the 1950s, when computer scientist Alan Turing published the paper “Computing Machinery and Intelligence,” which asked whether machines could think and how a machine’s intelligence could be assessed. This work introduced the Turing test, a technique for evaluating machine intelligence, and laid the foundation for AI research and development. Computer scientist John McCarthy first used the phrase “artificial intelligence” at a 1956 academic conference at Dartmouth College.
Funding for AI research from academic institutions and the US government increased after McCarthy’s conference and through the early 1970s. During this period, advances in computing enabled the establishment of many of AI’s foundations, including machine learning, neural networks, and natural language processing. Despite these advancements, AI technology proved more challenging to scale than anticipated, and interest and funding eventually fell away. This led to the first AI winter, which lasted until the 1980s.
The introduction of AI-powered “expert systems,” renewed interest in neural networks, and advances in computing all contributed to a resurgence of interest in AI in the mid-1980s. However, a second AI winter followed, lasting until the mid-1990s, brought on by the complexity of these new systems and the inability of the hardware of the time to keep up.
By the mid-2000s, advances in big data, computing power, and sophisticated deep learning techniques had overcome the earlier obstacles to AI, opening the door to even more progress. The 2010s saw the introduction of cutting-edge AI technologies that helped shape the field into what it is today, including generative AI, autonomous cars, and virtual assistants.
Artificial Intelligence Timeline
(1943) : The earliest mathematical model for creating a neural network is proposed in the paper “A Logical Calculus of the Ideas Immanent in Nervous Activity” by Warren McCulloch and Walter Pitts.
(1949) : In his book The Organization of Behavior: A Neuropsychological Theory, Donald Hebb proposes that neural pathways are formed by experience and that connections between neurons grow stronger the more frequently they are used. Hebbian learning remains an important model in AI.
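Hebb’s principle, often summarized as “neurons that fire together, wire together,” can be captured in a single update rule. As a minimal sketch in modern notation (the symbols below are the standard textbook formulation, not Hebb’s own): if two connected neurons have activations \(x_i\) and \(x_j\), the weight \(w_{ij}\) of the connection between them changes by

\[
\Delta w_{ij} = \eta \, x_i \, x_j,
\]

where \(\eta\) is a small learning rate. The more often the two neurons are active at the same time, the stronger their connection becomes.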
(1950) : In his paper “Computing Machinery and Intelligence,” Alan Turing presents the idea for what is now called the Turing test, a method for determining whether a machine is intelligent.
(1950) : Harvard students Marvin Minsky and Dean Edmonds develop SNARC, the first neural network computer.
(1956) : The term “artificial intelligence” is first used at the Dartmouth Summer Research Project on Artificial Intelligence. The conference, chaired by John McCarthy, is widely credited with launching the field of AI.
(1958) : John McCarthy creates the AI programming language Lisp and publishes “Programs with Common Sense,” a paper proposing the hypothetical Advice Taker, a complete AI system able to learn from experience as effectively as humans do.
(1959) : While working at IBM, Arthur Samuel coins the phrase “machine learning.”
(1964) : As a doctoral candidate at MIT, Daniel Bobrow develops STUDENT, an early natural language processing program built to solve algebra word problems.
(1966) : MIT professor Joseph Weizenbaum creates Eliza, one of the earliest chatbots. Eliza convincingly mirrors users’ conversational patterns, giving the impression that it understands more than it actually does. This gives rise to the Eliza effect, a widespread phenomenon in which people mistakenly attribute human-like emotions and thought processes to AI systems.
(1969) : The Stanford University AI Lab develops DENDRAL and MYCIN, two of the first successful expert systems.
(1972) : The logic programming language PROLOG is developed.
(1973) : The British government releases The Lighthill Report, which highlights the shortcomings in AI research and results in significant funding reductions for AI initiatives.
(1974–1980) : Frustrated with the slow pace of AI development, DARPA significantly cuts back its academic funding. Combined with the earlier ALPAC report and the previous year’s Lighthill Report, AI funding dries up and research stagnates. This period is known as the “First AI Winter.”
(1980) : Digital Equipment Corporation creates R1 (also known as XCON), the first commercially successful expert system, designed to configure orders for new computer systems. R1 ends the first AI winter by kicking off an investment boom in expert systems that will last for most of the decade.
(1985) : Businesses invest more than $1 billion a year in expert systems, and an entire industry of Lisp machines springs up to support them. Companies such as Symbolics and Lisp Machines Inc. build specialized computers to run the AI programming language Lisp.
(1987–1993) : The “Second AI Winter” begins as the Lisp machine market collapses with the arrival of cheaper alternatives built on advancing computer technology. During this period, expert systems fall out of favor because they prove too costly to update and maintain.
(1997) : World chess champion Garry Kasparov is defeated by IBM’s Deep Blue.
(2006) : Fei-Fei Li begins work on the ImageNet visual database, which is released in 2009. It serves as both the catalyst and the foundation for the growth of image recognition in AI.
(2008) : Google introduces speech recognition technology in its iPhone app.
(2011) : IBM’s Watson handily defeats its human competition on Jeopardy!
(2011) : Apple releases Siri, an AI-powered virtual assistant, on iOS devices.
(2012) : Andrew Ng, founder of the Google Brain Deep Learning project, feeds ten million YouTube videos into a neural network as a training set using deep learning techniques. The network learns to recognize a cat without being told what a cat is, ushering in a new era of funding for deep learning and neural networks.
(2014) : Amazon releases Alexa, a virtual assistant for smart home devices.
(2016) : World Go champion Lee Sedol is defeated by Google DeepMind’s AlphaGo. The complexity of the ancient Chinese game was long considered one of the major obstacles for AI to overcome.
(2018) : Google releases BERT, a natural language processing model that lowers the barriers to comprehension and translation for machine learning applications.
(2020) : During the early stages of the SARS-CoV-2 pandemic, scientific and medical teams developing a vaccine are given access to Baidu’s LinearFold AI algorithm. The system can predict the virus’s RNA structure in just 27 seconds, 120 times faster than previous techniques.
(2020) : OpenAI publishes GPT-3, a natural language processing model that can generate text that mimics human speech and writing.
(2021) : Building on GPT-3, OpenAI develops DALL-E, a program that can generate images from text prompts.
(2022) : The National Institute of Standards and Technology releases the first draft of the AI Risk Management Framework, voluntary U.S. guidance “to better manage risks to individuals, organizations, and society associated with artificial intelligence.”
(2022) : OpenAI introduces ChatGPT, a chatbot powered by a large language model that gains more than 100 million users within months.
(2022) : The White House introduces the AI Bill of Rights, which lays out principles for the ethical development and use of AI.
(2023) : Microsoft launches an AI-powered version of its search engine Bing, built on the same technology that powers ChatGPT.
(2023) : Google releases Bard, a competing conversational AI that would later become Gemini.
(2023) : OpenAI introduces GPT-4, its most advanced language model to date.
(2023) : The Biden-Harris administration issues the Executive Order on Safe, Secure, and Trustworthy AI, which mandates safety testing, the labeling of AI-generated content, and new efforts to establish international standards for the development and use of AI. The order also stresses the importance of ensuring that artificial intelligence is not used to violate civil or consumer rights, deepen discrimination, or circumvent privacy protections.
(2023) : Elon Musk’s AI startup xAI launches the chatbot Grok.
(2024) : The European Union approves the Artificial Intelligence Act, intended to ensure that AI systems used within the EU are “safe, transparent, traceable, non-discriminatory and environmentally friendly.”
(2024) : Anthropic’s large language model Claude 3 Opus surpasses GPT-4, making it the first LLM to achieve this feat.