AI’s Incredible Path: From Turing’s Question to GPT-4o

In 1950, the mathematician Alan Turing asked a simple but important question: “Can machines think?” That question helped launch the entire field. Join us on an exciting journey to discover how AI has evolved from a theoretical concept into a cutting-edge technology that is changing the way we live.

What is artificial intelligence?

The field of computer science known as “artificial intelligence” aims to create programs that can solve problems and learn in ways that resemble human intelligence. These systems work by gathering and analysing large amounts of data, learning from past examples, and optimising and improving over time. They still need human intervention to correct errors and fine-tune their operation.
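The “learning from past data, then optimising and improving” loop described above can be sketched in a few lines. This toy example (not any specific real system, and with made-up data) fits a simple model y = w·x to past observations by gradient descent, then uses the learned parameter to predict a new case:

```python
# Toy learn-from-data loop: fit y = w * x to past observations,
# then generalise to an unseen input.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (input, observed output)

w = 0.0             # model parameter, improved iteratively
learning_rate = 0.01

for _ in range(1000):
    # average gradient of squared error over the past data
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    w -= learning_rate * grad   # "optimising and improving"

prediction = w * 5.0            # use what was learned on a new input
print(round(w, 2), round(prediction, 2))
```

After training, w settles near 2.03, so the model predicts roughly 10.15 for an input of 5 — the essence of learning a pattern from data rather than being explicitly programmed with it.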

The Formative Years of AI (1950–1956)

This period marked important turning points in the development of AI. Alan Turing’s 1950 paper, “Computing Machinery and Intelligence,” introduced what became known as the Turing Test, a technique for evaluating machine intelligence. The term “artificial intelligence” was coined shortly afterwards and quickly became widespread.

Between 1950 and 1956, there were significant advances in the development of artificial intelligence. Turing’s work established the first approach for judging machine intelligence.

In 1952, Arthur Samuel’s checkers program became one of the first examples of machine learning. After John McCarthy coined the term in his 1955 proposal for the 1956 Dartmouth workshop, many people began to discuss and study “artificial intelligence”. Thanks to these early successes, artificial intelligence (AI) grew from a theoretical concept into a vibrant scientific discipline.


Building Smarter Machines: AI’s Critical Years (1957–1979)

Research into artificial intelligence (AI) went through both rapid expansion and challenging times between the coining of the term ‘artificial intelligence’ and the 1980s. The late 1950s and 1960s saw a creative surge: AI quickly became a popular concept, inspiring programming languages that are still in use today and films and books that explored the idea of robots.

Similar advances were made in the 1970s, when researchers in Japan built the first full-scale humanoid robot and early autonomous vehicles appeared. But it was also a difficult time for AI research, as the US federal government grew reluctant to keep funding such studies.

The following events in the field of artificial intelligence took place between 1957 and 1979:

  • John McCarthy developed LISP, the basic language for artificial intelligence (1958);
  • Arthur Samuel coined the term “machine learning” (1959);
  • General Motors introduced Unimate, the first industrial robot (1961);
  • Edward Feigenbaum and Joshua Lederberg created the first “expert system” to mimic human decision making (1965);
  • Joseph Weizenbaum developed ELIZA, an early natural language processing (NLP) chatbot (1966);
  • Alexey Ivakhnenko’s research laid the foundations for deep learning (1968);
  • James Lighthill’s critical report on AI research led the British government to cut funding (1973);
  • The American Association for Artificial Intelligence (AAAI) was founded (1979);
  • Stanford Cart demonstrated early autonomous navigation (1979);

AI Takes Off: Landmark Achievements (1980–1987)

The 1980s saw a surge in AI research and development, now referred to as the “AI boom,” driven both by new discoveries in the field and by increased government funding. Expert systems and deep learning techniques gained wider adoption, giving computers the ability to learn and make judgements on their own.

Significant achievements were made during the AI boom, which lasted from 1980 to 1987. In 1980, the first AAAI conference was held, and XCON, the first commercial expert system, was introduced. 


In 1981, Japan committed $850 million to the Fifth Generation Computer Project, which aimed to develop computers that could reason like humans. Despite the AAAI’s warning of an “AI winter” in 1984, innovation continued. AARON, an autonomous drawing program, appeared in 1985, and Ernst Dickmanns demonstrated a driverless car travelling at 55 mph in 1986. Alactrious Inc. released Alacrity, a strategic management guidance system with over 3,000 rules, in 1987, marking another milestone in the rapid growth of artificial intelligence.

The AI Ice Age: A Period of Pause (1987–1993)

The AAAI had predicted the coming of an AI winter: a period of low consumer, public, and corporate interest in AI, leading to a decline in research funding and, in turn, little progress. During this AI winter, the AI hardware market and expert systems suffered setbacks, including the end of the Fifth Generation project, cuts to strategic computing initiatives, and a slowdown in the deployment of expert systems. Private investors and the government lost interest and withheld funding because of high costs for seemingly low returns.

The years 1987–1993, often referred to as the “AI winter”, were a period of severe difficulty and declining prospects for the AI industry. After IBM and Apple introduced more user-friendly and cheaper alternatives to specialised LISP-based hardware in 1987, the market for such products collapsed. Because these new machines could run LISP software without expensive, specialised hardware, some LISP companies went bankrupt.

Despite these challenges, progress was made. Rollo Carpenter, a programmer, developed the Jabberwacky chatbot in 1988 with the goal of facilitating amusing and interesting interactions between users. This innovation showed that the AI community continued to create and innovate despite reduced support and interest.

AI Agents in Action: Breakthroughs (1993–2011)

Even with the lack of resources during the AI winter, the field made remarkable progress in the early 1990s, with notable achievements such as a system capable of defeating a reigning world chess champion. Inventions that brought AI into everyday life during this period included the first Roomba vacuum cleaner and the first commercially available speech recognition software for Windows computers.


A surge in research funding followed. This renewed interest led to even greater advances, listed below:

  • IBM’s Deep Blue beat Garry Kasparov (1997);
  • Dragon Systems introduced voice recognition for Windows (1997);
  • Cynthia Breazeal’s Kismet robot simulated human emotions (2000);
  • First Roomba vacuum cleaner debuted (2002);
  • NASA launched Mars rovers featuring autonomous navigation (2003);
  • Twitter, Facebook, and Netflix began using AI in their recommendation and advertising algorithms (2006);
  • Microsoft’s Xbox 360 Kinect captured movements (2010);
  • Virtual assistants gained popularity as IBM’s Watson won Jeopardy! and Apple released Siri (2011);

The Modern AI Era (2012–present)

In this period, search engines, virtual assistants, and other popular AI technologies have expanded their reach. Big data and deep learning have also risen to prominence.

Since 2012, artificial intelligence has developed significantly. In 2012, Google’s Jeff Dean and Andrew Ng demonstrated a breakthrough in unsupervised learning by training a neural network to identify cats in unlabeled images. In 2015, Elon Musk and Stephen Hawking were among those who warned against autonomous weapons, raising ethical questions about artificial intelligence. In 2016, Hanson Robotics presented Sophia, a humanoid robot with realistic facial expressions and the ability to simulate emotion.
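The cat experiment above was a landmark in unsupervised learning: finding structure in data that carries no human-provided labels. Google’s actual system was a huge neural network, but the core idea can be illustrated with a much simpler unsupervised algorithm, k-means clustering, here on a tiny made-up 1-D dataset:

```python
# Minimal unsupervised-learning sketch (k-means, not Google's actual method):
# group unlabeled points into clusters with no labels supplied by a human.

points = [1.0, 1.2, 0.8, 9.7, 10.1, 10.3]   # unlabeled data, two obvious groups
centroids = [0.0, 5.0]                       # initial guesses for group centres

for _ in range(10):
    # assignment step: attach each point to its nearest centroid
    clusters = [[], []]
    for p in points:
        idx = min((0, 1), key=lambda i: abs(p - centroids[i]))
        clusters[idx].append(p)
    # update step: move each centroid to the mean of its assigned points
    centroids = [sum(c) / len(c) if c else centroids[i]
                 for i, c in enumerate(clusters)]

print(sorted(round(c, 1) for c in centroids))   # the two discovered group centres
```

The algorithm discovers the two groups (centres near 1.0 and 10.0) purely from the shape of the data — the same principle, at toy scale, as a network discovering “cat” as a recurring pattern in unlabeled images.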

In 2017, Facebook’s experimental chatbots made headlines by developing their own shorthand language during negotiations. In 2018, Alibaba’s AI outperformed humans on a Stanford University reading-comprehension test, and in 2019, DeepMind’s AlphaStar conquered StarCraft 2. In 2020, OpenAI’s GPT-3 produced remarkably accurate, human-like text, and in 2021, OpenAI’s DALL-E advanced AI’s ability to generate images from text descriptions. This groundbreaking work is influencing the direction of technology and human interaction, and these milestones demonstrate how far we have come.

2020s: GPT-3 and Beyond

GPT models are advanced LLMs that use a transformer architecture to generate human-sounding text. These models can produce convincingly natural and engaging content because they are trained on massive amounts of unlabeled text data. By 2023, these cutting-edge models had become widely known simply as ‘GPT’ as they continued to improve. Each subsequent version of OpenAI’s GPT-n series outperforms its predecessor, highlighting this progression. Since its release in March 2023, GPT-4 has continued to extend the capabilities of its predecessors.
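At their core, GPT-style models generate text one token at a time, each step predicting the next token from everything generated so far. The toy model below (a bigram word-counter on an invented corpus, nothing like a real transformer) illustrates only that autoregressive loop:

```python
# Toy autoregressive generation: "train" by counting which word follows which,
# then generate by repeatedly picking the most likely next word.
# A real GPT replaces the counting table with a transformer network.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# "Training": record next-word counts for every word
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# "Generation": extend the sequence one word at a time
word, output = "the", ["the"]
for _ in range(4):
    word = follows[word].most_common(1)[0][0]  # greedy next-word choice
    output.append(word)

print(" ".join(output))
```

Starting from “the”, the model emits “the cat sat on the” — each word chosen solely from what came before, which is exactly the prediction loop that, at vastly larger scale and with a learned network instead of raw counts, produces GPT’s fluent text.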


But OpenAI is not the only player in this space. Other organisations have adopted and customised GPT technology to meet their specific needs. Some companies have tailored GPT for particular domains: Salesforce’s EinsteinGPT focuses on improving CRM, while Bloomberg’s BloombergGPT serves the financial sector. Others, such as EleutherAI and Cerebras, have built their own versions of GPT models. The growing variety of GPT models demonstrates their increasing importance and adaptability across different sectors and shows how these innovative tools are transforming businesses and improving customer service.

Next-Gen Horizons: What’s Coming Down the Pipeline

OpenAI introduced the GPT-4o model in May 2024. GPT-4o’s features include a reduction in hallucinations, greater computational efficiency, and support for a wide range of data types (text, audio, images, etc.). Longer-term memory improvements have also been promised. Rumours of a forthcoming open-source AI model suggest that OpenAI remains committed to innovation and to making its models available, even though the company has moved towards proprietary models.

Given the current advances and emerging trends in AI, what do you think will be the most transformative technology in the next decade? This is a question we could explore further in future posts.

Resources

  • Sha, A. (2023). OpenAI GPT-5: Release Date, Features, AGI Rumors, Speculations, and More. Beebom. Retrieved June 7, 2023, from https://beebom.com/gpt-5/
  • Tableau. The History of Artificial Intelligence. Retrieved from https://www.tableau.com/data-insights/ai/history