AI Simplified

Written by Dominique Engome Tchupo, Ph.D.
Senior Human Factors Researcher
Jumpseat Research


Introduction 

Artificial intelligence (AI) has been a hot topic of discussion over the past several years, with tools such as ChatGPT, DeepSeek, and DALL-E becoming available to the public. However, the idea of humans creating machines that can mimic human thinking and decision-making is not new.

The idea of AI itself goes back thousands of years, from the “automatons” mentioned by ancient Greek philosophers to Leonardo da Vinci’s mechanical knight in the late 1400s. Despite this long history in the minds of inventors and philosophers, the development of AI as we know it only began in the 1900s.


The Beginnings of AI 

Scientists began seriously questioning whether it was possible to build an artificial brain in the 1900s. This interest was largely bolstered by the wide range of media featuring artificial humans, such as “La Conspiration des Milliardaires,” “Ozma of Oz,” “Metropolis,” and “Automata.” In fact, the term “robot” was coined by Czech playwright Karel Čapek in his 1921 play “Rossum’s Universal Robots” (R.U.R.), and Professor Makoto Nishimura built Gakutensoku, the first Japanese robot, in 1929.

Even with this growing interest and research in robots and artificial humans, it was not until the 1950s that the term “artificial intelligence” was coined, by Dr. John McCarthy. It was also around that time that Alan Turing developed the famous Turing Test. The Turing Test, initially called the Imitation Game, is a method for gauging whether a machine exhibits human-like intelligence. In the test, a human judge holds a text conversation with both a human and a computer; if the judge cannot reliably tell which is which, the computer is said to have passed. The test has no fixed questions, so what the judge asks is left to their discretion.

AI in Recent History 

Between the 1950s and today, AI research went through both booms and “AI winters,” but some noteworthy milestones from the last two to three decades include:

  • Deep Blue (created by IBM) defeating world chess champion Garry Kasparov in 1997, the first time a program beat a reigning world champion in a match;
  • NASA landing two rovers, Spirit and Opportunity, on Mars in early 2004 that roamed the surface largely without human intervention;
  • Apple releasing Siri in 2011, the first popular virtual assistant;
  • The Chinese tech group Alibaba releasing a language-processing AI that outscored humans on a Stanford University reading-comprehension test in 2018; and
  • OpenAI beta testing GPT-3 in 2020.

These events represent just a few key highlights and don’t capture the full scope or depth of advancements in AI across disciplines. Even now, new models are being released that reshape our understanding of AI’s capabilities and applications. 

AI-related Terms and Definitions 

With AI having become a significant topic of public interest and discourse since the release of ChatGPT, several less familiar terms and acronyms have made an appearance. 

These terms are sometimes (incorrectly) used interchangeably and can be difficult to keep track of. Below are some terms and acronyms commonly associated with AI and what they mean. 


Artificial Intelligence (AI)  

These are programs that simulate human intelligence through algorithms. AI allows computers to do things that would normally require human reasoning and decision-making, such as understanding natural language, self-correcting, and recognizing patterns. AI can be broken down into three sub-categories:

Artificial Narrow Intelligence (ANI): An AI that can only perform one task, like voice recognition. 

Artificial General Intelligence (AGI): An AI that’s capable of processing, adapting, and applying information to a broad range of tasks. AGI is still only a theoretical concept and none of the current AI tools can be applied as widely as AGI would be. 

Artificial Super Intelligence (ASI): Another theoretical category, describing an AI that would surpass human intelligence in almost every aspect. Like AGI, ASI remains speculative.

Machine Learning (ML)  

A subset of AI, machine learning allows computers to learn and make decisions without being explicitly programmed to do so. Instead, with ML, the algorithm is trained by feeding it large amounts of data. It is through ML that artificial intelligence gets its “intelligence.” This is, for example, how music streaming platforms can recommend songs based on the music you have listened to, as sketched below.
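
To make that concrete, here is a minimal sketch in Python (using the scikit-learn library) of what “learning from data” looks like. The songs, feature values, and numbers are all invented for illustration; real recommendation systems are far more sophisticated.

  # Instead of hand-writing rules, we show the algorithm labeled examples
  # and let it infer the pattern on its own.
  from sklearn.neighbors import KNeighborsClassifier

  # Each past song is described by two invented features: [tempo, energy].
  listened_songs = [[120, 0.8], [125, 0.9], [60, 0.2], [65, 0.3]]
  liked = [1, 1, 0, 0]  # 1 = the user liked it, 0 = the user skipped it

  model = KNeighborsClassifier(n_neighbors=3)
  model.fit(listened_songs, liked)  # "train" on past listening data

  # Ask about a new song the user has never heard.
  print(model.predict([[118, 0.85]]))  # -> [1], i.e., likely a good match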

Deep Learning (DL) 

Deep learning is a subset of machine learning whose algorithms are based on artificial neural networks (ANNs), inspired by the structure of the human brain. By stacking multiple layers of these networks, deep learning helps computers learn from experience, come to decisions, and process unstructured data like images and text.
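
As a rough illustration, the Python sketch below (again using scikit-learn) stacks two small hidden layers to learn XOR, a classic toy problem that a single neuron cannot solve but a layered network can. The layer sizes and settings here are arbitrary choices for the example.

  from sklearn.neural_network import MLPClassifier

  X = [[0, 0], [0, 1], [1, 0], [1, 1]]  # inputs
  y = [0, 1, 1, 0]                      # XOR of the two inputs

  # Two hidden layers of 8 units each; each layer feeds into the next.
  net = MLPClassifier(hidden_layer_sizes=(8, 8), activation="relu",
                      solver="lbfgs", max_iter=5000, random_state=0)
  net.fit(X, y)
  print(net.predict([[0, 1], [1, 1]]))  # typically -> [1 0] once trained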

Neural Network (NN) or Artificial Neural Network (ANN) 

An artificial neural network is a type of algorithm modeled after the human brain, using processes that mimic the way biological neurons work to make decisions. ANNs are used for pattern recognition and forecasting in various fields, including medicine, data mining, and telecommunications.
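
At its core, each artificial “neuron” simply weighs its inputs, adds them up, and squashes the result through an activation function. A toy version in plain Python, with arbitrary example numbers, might look like this:

  import math

  def neuron(inputs, weights, bias):
      # Weighted sum of the inputs, plus a bias term...
      total = sum(i * w for i, w in zip(inputs, weights)) + bias
      # ...passed through a sigmoid activation, yielding a value in (0, 1).
      return 1 / (1 + math.exp(-total))

  # A strong positive weight on the first input pushes the output toward 1.
  print(neuron([1.0, 0.2], weights=[2.5, -1.0], bias=-0.5))  # ~0.86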

Large Language Models (LLM) 

LLMs are a type of deep learning model that use neural networks and are pre-trained on very large amounts of data. LLMs give computers the ability to parse, analyze, and, most distinctively, generate human language. Typically, an LLM returns text based on the probability of the next word occurring. Rather than understanding and reasoning, it uses the large amount of data it was trained on to calculate the likelihood of certain words appearing in a particular order, and writes out sentences based on those results.
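
The “most likely next word” idea can be sketched with a toy bigram model in plain Python. Real LLMs use neural networks trained on vastly more data rather than raw counts, and the miniature corpus below is made up, but the principle is the same.

  from collections import Counter, defaultdict

  corpus = "the cat sat on the mat the cat ate the fish".split()

  # Count which word follows which (a "bigram" model).
  following = defaultdict(Counter)
  for word, nxt in zip(corpus, corpus[1:]):
      following[word][nxt] += 1

  def next_word(word):
      counts = following[word]
      best, n = counts.most_common(1)[0]
      return best, n / sum(counts.values())  # likeliest word, probability

  print(next_word("the"))  # -> ('cat', 0.5): "cat" follows "the" half the time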

One of the most famous applications of LLMs is ChatGPT. Because it is built on an LLM, it can process questions asked in plain language and respond in words, rather than numbers or code.

Natural Language Processing (NLP) 

This is a subfield of computer science and artificial intelligence that uses machine learning to allow computers to understand and communicate in human language. The main distinction between LLMs and NLP is that LLMs are geared toward generating human-like text but have limited text-processing capabilities, while NLP is primarily focused on language processing and analysis. It is through NLP that things like sentiment analysis and AI-generated translations (e.g., Google Translate) are possible.
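
As a toy illustration of sentiment analysis, the Python sketch below scores text against a small hand-made word list. Production NLP systems learn these associations from data; the lexicon here is invented for the example.

  POSITIVE = {"great", "love", "smooth", "helpful"}
  NEGATIVE = {"bad", "hate", "slow", "confusing"}

  def sentiment(text):
      words = text.lower().split()
      # Net count of positive words minus negative words.
      score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
      return "positive" if score > 0 else "negative" if score < 0 else "neutral"

  print(sentiment("I love how smooth this app is"))        # -> positive
  print(sentiment("The interface is slow and confusing"))  # -> negative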

Conclusion 

Starting to use AI can feel overwhelming at first, but once you understand which types of AI are best suited to specific tasks, your options quickly become more manageable, making it much easier to jump in.

The good news? With the way technology is evolving, you’ll likely find yourself using AI without even realizing it (in fact, you probably already are). 

While many buzzwords and acronyms get tossed around, they generally all fall under the larger umbrella of artificial intelligence. Although these subsets have differences, “AI” can often be used as a catch-all term unless the situation calls for more nuance.