
Research

February 7, 2024

5 min read

Artificial Intelligence Terminology

Understanding some of the key terms related to Artificial Intelligence



Artificial Intelligence is a rapidly evolving field that is transforming the way businesses operate. The umbrella term "AI" actually covers a wide range of systems with distinct characteristics and capabilities. This list is by no means exhaustive, nor can it anticipate developments still to come, but the following terms are ones you are likely to encounter most often, and they are sometimes used incorrectly or misunderstood. At the end of this post, we link to some other glossaries we've found, so readers can bookmark this page and come back when looking for clarity.

Algorithm

As a term, this is used a bit like a catch-all for any reference to programming, and it can describe many different things in a general sense. Strictly speaking, it is a set of specific rules and commands defined by a human user (or group of human users). A typical example is a collection of If…Then… statements: IF the environment is in state X, THEN take action A. Such a collection can range from a short list running something like a thermostat to the much larger rule set behind the complex decision-making in a Roomba™ robotic vacuum cleaner. The key here is that the human(s) already established all possible actions for all possible environments (at least, all the environments the human(s) considered). It is not so much an independent decision-making tool as an action-taking tool based on pre-made human decisions.
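As an illustration, here is what a thermostat's human-authored rule set might look like in Python. The setpoint and deadband values are invented for the example:

```python
def thermostat_action(current_temp_c: float, setpoint_c: float = 21.0,
                      deadband_c: float = 0.5) -> str:
    """Pick an action from a fixed, human-authored rule set.

    Every possible environment state (a temperature reading) maps to an
    action the programmer chose in advance; the program decides nothing
    on its own.
    """
    if current_temp_c < setpoint_c - deadband_c:
        return "HEAT_ON"
    elif current_temp_c > setpoint_c + deadband_c:
        return "COOL_ON"
    else:
        return "IDLE"

print(thermostat_action(18.0))  # → HEAT_ON
print(thermostat_action(21.2))  # → IDLE
```

Notice that if the building gains a new piece of equipment (a new "environment"), a human must come back and add rules for it.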

Model

You may hear this term in the context of an AI building or creating a 'model,' 'replica,' or 'twin' of some physical or mechanical system. Phaidra defines a model as a mathematical representation of a particular component in a system, where data transformation occurs. Take x + y = z as an example: we know the relationship between two items within a component, so 'x' and 'y' produce a known outcome 'z'. The model then allows a simple prediction: whenever x + y occurs, z will be the output. Of course, models can incorporate much more complex mathematics; for example, the thermodynamic relationship between a mechanical pump and the flow or pressure of the liquid passing through it can be expressed mathematically. Models for AI are mostly derived from observed data rather than hard-coded like a design-based model.
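As a sketch of what "derived from observed data" means, the pump example might look like this: we fit a simple linear relationship between pump speed and flow from made-up observations, then use the fitted model to predict an unseen operating point. All numbers are invented for illustration:

```python
# Hypothetical observations: pump speed (rpm) -> measured flow (L/min).
data = [(800, 40.2), (1000, 50.1), (1200, 59.8), (1400, 70.3), (1600, 79.9)]

# Fit flow ≈ a * speed + b by ordinary least squares: the relationship
# is learned from the observed data, not hard-coded from a design spec.
n = len(data)
sx = sum(s for s, _ in data)
sy = sum(f for _, f in data)
sxx = sum(s * s for s, _ in data)
sxy = sum(s * f for s, f in data)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

def predict_flow(rpm: float) -> float:
    """Use the learned relationship to predict an unseen state."""
    return a * rpm + b

print(round(predict_flow(1100.0), 1))  # ≈ 55 L/min for this made-up data
```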

Digital Twin

This is a mathematical representation of an entire system (within a defined boundary), or rather a combination of many interrelated models. Because those models capture the interdependencies and unique relationships within a complex physical system, their combination can represent that system virtually. A digital twin can output accurate predictions of a future state of that system based on the unique inputs provided. This is also referred to as a 'Full Simulation.'
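A minimal sketch of the idea, with invented coefficients: two toy component models chained together, so that one model's output feeds the next and the combination predicts a system-level outcome from the inputs:

```python
# Two toy component models; chaining them gives a (very small) "digital
# twin" of a pump -> heat-exchanger loop. All coefficients are invented
# for illustration, not taken from any real equipment.

def pump_flow(rpm: float) -> float:
    """Pump component model: speed -> coolant flow (L/min)."""
    return 0.05 * rpm

def exchanger_outlet_temp(flow: float, inlet_temp_c: float) -> float:
    """Heat-exchanger component model: more flow -> more heat removed."""
    return inlet_temp_c - 0.1 * flow

def twin_predict(rpm: float, inlet_temp_c: float) -> float:
    """System-level prediction composed from interdependent models."""
    return exchanger_outlet_temp(pump_flow(rpm), inlet_temp_c)

print(twin_predict(1200, 30.0))  # predicted outlet temperature (°C)
```

A real digital twin chains many such models, each typically learned from data rather than written by hand as here.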

Artificial Neural Network

This refers to a complex model or combination of many complex models that require large amounts of high-quality data to properly configure the relationships between all the models (or ‘components’). This is an effective means of capturing system dynamics in an industrial process, as an example.
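For illustration only, here is the basic building block, an artificial "neuron," and a tiny network built from a few of them. The weights below are arbitrary placeholders; in practice they are configured ("trained") from large amounts of high-quality data:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs passed
    through a nonlinearity (here a sigmoid). Training adjusts the
    weights and bias based on data."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def tiny_network(x1, x2):
    """A 2-input, 2-hidden-neuron, 1-output network with placeholder
    weights. Real networks have many layers and millions of weights."""
    h1 = neuron([x1, x2], [0.5, -0.3], 0.1)
    h2 = neuron([x1, x2], [-0.2, 0.8], 0.0)
    return neuron([h1, h2], [1.0, 1.0], -0.5)

print(tiny_network(1.0, 0.0))  # a value between 0 and 1
```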

Rule-Based Systems

Also known as expert systems, these operate on a set of predefined rules. The rules are created by experts in the field and are used to make decisions based on input data. Rule-based AI is widely used in decision-making systems, such as credit scoring and risk assessment. Its main advantage is that it can make decisions quickly and accurately based on the predefined rules. The downside is that those rules must be manually created and updated by experts, making these systems inflexible when handling new situations or data.
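A sketch of a rule-based credit-scoring decision (all thresholds invented for the example): every rule is authored by a human expert, so covering a new situation means editing the rules by hand:

```python
def credit_decision(income: float, debt_ratio: float,
                    missed_payments: int) -> str:
    """Expert-authored decision rules. The thresholds are invented for
    illustration; in a real system an expert sets and maintains them."""
    if missed_payments > 2:
        return "DECLINE"
    if debt_ratio > 0.45:
        return "DECLINE"
    if income >= 50_000 and debt_ratio < 0.30:
        return "APPROVE"
    return "MANUAL_REVIEW"

print(credit_decision(income=60_000, debt_ratio=0.20, missed_payments=0))
# → APPROVE
```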

Machine Learning (ML)

A general concept referring to the study of computer systems that can learn and improve from experience without being explicitly programmed or reprogrammed by humans. Machine learning systems can analyze large amounts of data, identify patterns and trends, and make predictions or recommendations based on that information. This type of AI is widely used in applications such as recommendation systems, fraud detection, and natural language processing. The key advantage of machine learning is that it can handle complex data and situations and can adapt to new situations with additional data. Machine learning models are typically trained on a finite set of data, and biases can be injected into a model when that training data is limited or not diverse enough.

Several subsets of this term specify the type of learning used and/or its functionality:

  • Supervised learning - The ML system trains on well-labeled input and output data, allowing it to predict the correct label for future data it has not yet seen

  • Unsupervised learning - An ML system that learns from unlabeled input and output data is considered "unsupervised"; it begins to recognize patterns in the unlabeled data and produces better outputs as it ingests more. As an example, a system was fed millions of hours of YouTube videos and was told only that cats were visible, not which pixels in the videos were cats. After enough data ingestion, the system became capable of accurately detecting cats

  • Deep Learning - This is a type of machine learning based on artificial neural networks. Deep learning systems automatically learn to recognize patterns and features in data, and can make predictions or classifications based on that information. This type of AI is widely used in image and speech recognition, natural language processing, and autonomous vehicles. The main advantage of deep learning is its ability to process large amounts of complex data, but it also requires computing power that scales with the amount of input and historical data used to train the models. This means that the larger the historical dataset and the greater the stream of incoming data from a physical system (like all the cameras and sensors on an autonomous vehicle), the more computing power is needed for accurate and timely outputs. Deep learning models are also trained on a finite set of data, and the accuracy of that data is critical to the accuracy of the system's outputs. For more on deep learning, we recommend this academic study from the University of Toronto

  • Reinforcement Learning (RL) - A system that is not programmed on (or told) which actions to take, as in most forms of machine learning, but instead must discover which actions yield the most reward by trying them. Actions may affect not only the immediate reward but also the next state and, through it, all subsequent rewards. These two characteristics, trial-and-error search and delayed reward, are the two most important distinguishing features of reinforcement learning. This is the only branch of machine learning that is directly concerned with making decisions. We like this visualization of reinforcement learning in action, using a simple decision-making system to balance a pole on a moving cart:


Introduction to reinforcement learning, in which a machine learning system knows only that its objective is to balance the pole upright and that it can move the cart bidirectionally along the x-axis to do so. In this video, you can watch the system refine its strategy to meet the objective function through much trial and error.
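The trial-and-error and delayed-reward ideas can be sketched with tabular Q-learning on a toy "corridor" environment, a hypothetical miniature stand-in for the cart-pole task (all parameters invented for illustration):

```python
import random
random.seed(0)

# Toy corridor: states 0..4, reward only on reaching state 4 (a delayed
# reward). The agent is never told which action is correct; it must
# discover the "move right" policy by trying actions.
N_STATES = 5
ACTIONS = [-1, +1]  # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: bootstrap from the best value of the next state.
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy in every state is "move right" (+1).
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

The same loop structure, with a far richer state (pole angle, cart position, velocities) and a learned function in place of the table, underlies the cart-pole demonstration above.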

Deep Reinforcement Learning (Deep RL)

A subfield of ML that combines Reinforcement Learning (RL) and Deep Learning. RL considers the problem of a computational agent learning to make decisions by trial and error. Deep RL incorporates deep learning into the solution, allowing agents to make decisions from unstructured input data without manual engineering of the state space. Deep RL algorithms can take in very large inputs (e.g. every pixel rendered to the screen in a video game) and decide what actions to perform to optimize an objective (e.g. maximizing the game score).
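To make the contrast with tabular methods concrete, here is a minimal sketch (not Phaidra's or DeepMind's implementation) of the core idea: a function approximator maps raw pixel input directly to a value per action, instead of looking values up in a table. The layer size and random weights are invented placeholders; a real Deep RL agent trains these weights:

```python
import random
random.seed(1)

# Deep RL replaces the Q-table with a function approximator that maps
# raw, unstructured input (here, a flattened "image" of pixel values)
# to one value per action. The weights below are random placeholders;
# in a real system they are learned, e.g. via deep Q-learning.
N_PIXELS, N_ACTIONS = 64, 4  # e.g. an 8x8 game screen, 4 joystick moves

weights = [[random.uniform(-0.1, 0.1) for _ in range(N_PIXELS)]
           for _ in range(N_ACTIONS)]

def action_values(pixels):
    """One linear layer: pixels -> a value estimate per action.
    (Real agents stack many nonlinear layers.)"""
    return [sum(w * p for w, p in zip(row, pixels)) for row in weights]

def choose_action(pixels):
    """Greedy action: whichever action the network currently values most."""
    values = action_values(pixels)
    return values.index(max(values))

frame = [random.random() for _ in range(N_PIXELS)]
print(choose_action(frame))  # an action index in 0..3
```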

A great example of Deep RL in use was produced by a team at Google DeepMind (one of Phaidra's co-founders and current CTO, Veda Panneershelvam, was a member of this team). Google DeepMind developed and launched AlphaGo, a neural network leveraging deep reinforcement learning to master the game of Go. In 2016, AlphaGo beat world champion Lee Sedol in a best-of-five match hosted in Seoul.

This list of definitions does not capture every aspect of current AI research or even what’s to come as research progresses. Machine Learning is a broad field with different capabilities and use cases. Rule-based systems are good for decision-making that requires predefined rules, while machine learning is good for handling complex data and situations. Deep learning fits best for recognizing patterns in large data and making predictions, while reinforcement learning is most useful where actions need to be generated not just considering immediate reward but future rewards as well. As AI technology continues to advance, we can expect to see more applications and use cases for each of these types of AI.

For further information, here are some other great collections we’ve reviewed with accurate definitions:

Chat-GPT Glossary

What is an AI Agent?

Featured Expert

Learn more about one of our subject matter experts interviewed for this post

author-avatar

Ben Tacka

Business Development Lead

As Business Development Lead, Ben is responsible for identifying the best fit partner organizations for Phaidra to work with. Prior to joining Phaidra, Ben was a member of the Center for Energy Efficiency and Sustainability (CEES) at Trane Technologies. Ben holds an MBA from the Sustainable Innovation MBA program at the University of Vermont and a Bachelor’s in Manufacturing Engineering from Boston University. 
Reach out to see if your organization is a good fit for Phaidra’s services: ben.tacka@phaidra.ai

