An AI technique is any method or approach that enables a computer system to perform tasks normally associated with human intelligence, such as recognizing patterns, making decisions, or learning from experience. The term is broad by design: it covers everything from simple rule-based systems that follow logical “if-then” steps to complex neural networks that teach themselves by processing millions of examples. Understanding the major categories of AI techniques helps clarify how different applications, from spam filters to self-driving cars, actually work under the hood.
Two Foundational Approaches to AI
AI techniques broadly fall into two camps that have shaped the field since its earliest days: symbolic and connectionist.
Symbolic AI works from the top down. It represents knowledge as explicit rules and logical statements, then reasons through them step by step. Think of a tax preparation program that applies specific legal rules to your financial data to determine your deductions. These systems are inherently transparent because you can trace exactly why they reached a given conclusion. Their main limitation is that someone has to manually encode all the rules, which becomes impractical as problems get more complex.
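The rule-based style can be sketched in a few lines. This is only an illustration: the filing statuses and dollar amounts below are invented for the example, not real tax rules.

```python
# A minimal sketch of symbolic, rule-based reasoning: explicit
# "if-then" rules applied to facts. The filing statuses and dollar
# amounts are invented for illustration, not real tax law.

def standard_deduction(filing_status):
    # Each branch is a hand-written rule; you can trace exactly
    # which rule produced the answer.
    if filing_status == "single":
        return 10_000
    if filing_status == "married":
        return 20_000
    # The approach's weakness: any case no rule covers simply fails.
    raise ValueError(f"no rule covers filing status {filing_status!r}")

taxable_income = max(0, 45_000 - standard_deduction("single"))
```

The transparency is structural: every output traces back to a specific rule that fired, but every new case requires someone to write a new rule by hand.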
Connectionist AI works from the bottom up. Instead of following pre-written rules, it learns patterns directly from large amounts of data using structures loosely inspired by the brain. This is the approach behind modern machine learning and deep learning. Connectionist systems excel at processing messy, real-world information like images, speech, and text, but they can be difficult to interpret because the “reasoning” is distributed across thousands or millions of numerical connections rather than readable rules. Today’s most powerful AI systems, including large language models, are connectionist. Increasingly, researchers combine both approaches, using symbolic methods to add transparency to data-driven systems, and using connectionist methods to make rule-based systems more flexible.
Machine Learning and Its Three Paradigms
Machine learning is the most widely used family of AI techniques today. Rather than being explicitly programmed for every scenario, a machine learning system improves its performance by analyzing data. There are three core paradigms, each suited to different kinds of problems.
Supervised Learning
In supervised learning, the system trains on labeled data, meaning each example comes with the correct answer already attached. A spam filter, for instance, learns from thousands of emails that have already been tagged as “spam” or “not spam.” Over time, it learns which features (certain words, sender patterns, formatting) predict each category. The model measures its own accuracy against the known labels and adjusts until it gets reliably close. Beyond spam detection, supervised learning powers sentiment analysis, weather forecasting, and price prediction. It handles two main jobs: classification (sorting things into categories) and regression (predicting a number, like a home’s sale price based on its features).
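The spam-filter idea can be sketched with a toy word-count classifier. Everything here, from the four labeled training emails to the simple scoring rule, is an invented illustration of supervised learning, not a production filter.

```python
from collections import Counter

# A toy supervised classifier: "train" on labeled examples by counting
# which words appear under each label, then classify new text by which
# label's vocabulary matches it better. Dataset and scoring are
# illustrative assumptions.

training_data = [
    ("win free money now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting moved to noon", "not spam"),
    ("lunch at noon tomorrow", "not spam"),
]

# Training: count how often each word appears under each label.
word_counts = {"spam": Counter(), "not spam": Counter()}
for text, label in training_data:
    word_counts[label].update(text.split())

def classify(text):
    # Score each label by how many of the message's words were seen
    # under that label during training; pick the higher score.
    scores = {
        label: sum(counts[w] for w in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)
```

A real filter would use far more data and a probabilistic model, but the shape is the same: labeled examples in, a predictive mapping out.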
Unsupervised Learning
Unsupervised learning works with unlabeled data. No one tells the system what the “right” answer is. Instead, it finds hidden structure on its own. Clustering is the most common application: grouping customers by purchasing behavior, for example, without pre-defining what those groups should look like. Association techniques discover relationships between variables, like noticing that people who buy a certain product also tend to buy another. A third use, dimensionality reduction, simplifies datasets with hundreds of variables down to a manageable number while preserving the important patterns, making it easier for other techniques to work with the data.
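Clustering can be sketched with a bare-bones k-means loop. The six 2-D points and the choice of k = 2 are illustrative assumptions; note that no labels appear anywhere, so the grouping emerges from the data itself.

```python
import random

# A minimal k-means sketch: alternate between assigning each point to
# its nearest center and moving each center to the mean of its points.
# The data and k=2 are made-up assumptions.

points = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9),   # one natural group
          (8.0, 8.1), (8.3, 7.9), (7.9, 8.2)]   # another

def kmeans(points, k=2, iters=10, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i:
                          (p[0] - centers[i][0]) ** 2 +
                          (p[1] - centers[i][1]) ** 2)
            clusters[nearest].append(p)
        # Update step: move each center to the mean of its cluster.
        for i, c in enumerate(clusters):
            if c:
                centers[i] = (sum(p[0] for p in c) / len(c),
                              sum(p[1] for p in c) / len(c))
    return centers, clusters

centers, clusters = kmeans(points)
```

After a few iterations the two tight groups separate on their own, which is the essence of finding "hidden structure" without labels.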
Reinforcement Learning
Reinforcement learning takes a different approach entirely. An agent interacts with an environment, takes actions, and receives rewards or penalties based on outcomes. Over many rounds of trial and error, it learns a strategy that maximizes its total reward. This is the technique behind game-playing AI systems that master chess or Go, as well as robotics applications where a machine learns to walk or grasp objects through repeated attempts.
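The trial-and-error loop can be sketched with tabular Q-learning on a toy environment: a five-state corridor where the agent starts on the left and earns a reward only by reaching the rightmost state. The environment and all hyperparameters are illustrative assumptions.

```python
import random

# Tabular Q-learning on a 5-state corridor. The agent learns, purely
# from rewards, that stepping right is the best strategy everywhere.

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                      # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the best-known action,
        # sometimes explore a random one.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        # Q-learning update: nudge the estimate toward the reward plus
        # the discounted value of the best action from the next state.
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy steps right in every state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
```

Nothing told the agent that "right" was correct; the strategy was discovered through repeated episodes of reward feedback, exactly the mechanism behind game-playing and robotics applications.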
Neural Networks and Deep Learning
Neural networks are the engine behind deep learning, one of the most powerful AI techniques available. A neural network is organized into layers of interconnected nodes (often called neurons). Each connection carries a numerical weight that determines how strongly one node influences the next. During training, the network adjusts these weights until the outputs closely match the desired results.
The layers closest to the input capture raw, low-level features. In an image recognition system, for example, early layers detect simple edges and color gradients. Deeper layers combine those basic features into progressively more complex representations: edges become shapes, shapes become faces. The layers closest to the output produce high-level representations that closely correspond to the final categories the system is trying to distinguish. This layered transformation is the “learning mechanism” of a neural network, and it’s what makes deep learning so effective at tasks like image recognition, speech processing, and language generation.
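The weight-adjustment loop can be shown in miniature with a single sigmoid neuron learning logical OR. Real deep networks stack many such units into layers, but the mechanism is the same: compute an output, measure the error, nudge the weights. The dataset, learning rate, and epoch count here are illustrative assumptions.

```python
import math, random

# One sigmoid "neuron" trained by gradient descent to reproduce
# logical OR. Weights start random and are adjusted until outputs
# match the desired results.

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w1, w2, b = random.random(), random.random(), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

lr = 1.0
for epoch in range(2000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        # Gradient of the squared error w.r.t. the pre-activation.
        grad = (out - target) * out * (1 - out)
        # Adjust each weight against its share of the gradient.
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b  -= lr * grad

predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b))
               for (x1, x2), _ in data]
```

Deep learning scales this idea up enormously: millions of weights, many layers, and the backpropagation algorithm to route each weight's share of the error through the whole stack.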
Self-Attention and Language Models
The technique powering modern AI chatbots and translation tools is called the transformer architecture, and its key innovation is a mechanism called self-attention. When processing a sentence, self-attention allows the model to weigh how important each word is in relation to every other word, regardless of how far apart they are. In the sentence “The dog didn’t cross the street because it was too tired,” self-attention helps the model figure out that “it” refers to “the dog” rather than “the street.”
The system assigns each word a set of numerical scores reflecting how much focus it should give to the other words around it. These scores are then normalized into probabilities, so the model can dynamically decide which parts of the input matter most for predicting the next word or generating a response. The higher the attention score one word assigns to another, the more that other word influences the output. This ability to capture long-range relationships in text is what makes large language models so effective at understanding context and producing coherent writing.
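The scoring-and-normalizing step can be sketched directly. In this toy version each word's vector serves as its own query and key (real transformers learn separate query, key, and value projections), and the 2-D embeddings are made up so that "it" sits close to "dog" in vector space.

```python
import math

# A toy self-attention score computation: dot-product similarity
# between word vectors, normalized into probabilities with softmax.
# The embeddings are invented assumptions for illustration.

words = ["dog", "street", "it"]
vectors = {"dog": [1.0, 0.2], "street": [0.1, 1.0], "it": [0.9, 0.3]}

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weights(query_word):
    # Score the query word against every word in the sentence (dot
    # product), then normalize the scores into probabilities.
    q = vectors[query_word]
    scores = [sum(qi * ki for qi, ki in zip(q, vectors[w]))
              for w in words]
    return dict(zip(words, softmax(scores)))

weights = attention_weights("it")
# Because "it" and "dog" have similar vectors, "it" attends far more
# strongly to "dog" than to "street".
```

Distance in the sentence never enters the computation, which is exactly why attention captures long-range relationships that older sequential models struggled with.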
Search and Optimization Techniques
Not all AI techniques involve learning from data. Some of the oldest and most reliable methods are search and optimization algorithms that systematically explore possible solutions to find the best one.
Heuristic search is a classic example. The A* algorithm, widely used in navigation and video games, finds the shortest path between two points by combining the known cost of the path so far with an educated guess about the remaining distance. That “educated guess” is the heuristic, a rule of thumb that helps the system skip over obviously bad options and focus on promising ones.
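A* can be sketched on a small grid, assuming 4-way movement, unit step costs, and a Manhattan-distance heuristic; the grid itself is an invented example with `#` marking walls.

```python
import heapq

# A minimal A* sketch on a grid. Priority = cost so far + heuristic
# (the "educated guess" about remaining distance). The grid is an
# illustrative assumption.

grid = [
    ".#..",
    ".#..",
    "....",
]

def astar(start, goal):
    rows, cols = len(grid), len(grid[0])
    def h(p):
        # Manhattan distance: never overestimates on a 4-way grid.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]   # (priority, cost so far, cell)
    best = {start: 0}
    while frontier:
        _, g, cur = heapq.heappop(frontier)
        if cur == goal:
            return g                    # shortest path length found
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] != "#"):
                if g + 1 < best.get(nxt, float("inf")):
                    best[nxt] = g + 1
                    heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None                         # no path exists

length = astar((0, 0), (0, 3))
```

The heuristic steers the search around the wall without expanding every cell, which is precisely the efficiency gain over blind search.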
The minimax algorithm handles adversarial situations like board games, where two opponents are each trying to win. It evaluates possible future moves by assuming your opponent will always make the best move available to them. In chess, since it’s impossible to calculate every possible game from the opening move to checkmate, minimax is paired with heuristics that estimate the value of a board position. Pieces are assigned approximate values (a queen is worth about 9 points, a rook about 5) so the system can evaluate which moves lead to a material advantage without seeing the entire game tree. Games involving randomness, like backgammon, use a variation called expectimax that factors in the probability of different dice outcomes.
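The assume-the-opponent-plays-best logic can be sketched on a tiny hand-made game tree, where heuristic scores at the depth limit play the role that material counts play in chess. The tree and its values are invented for illustration.

```python
# A minimal minimax sketch with a depth cutoff. When the limit is
# reached, a heuristic score stands in for searching to the end of the
# game. Positive scores favor the maximizing player. The toy tree and
# leaf values are illustrative assumptions.

children = {
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
heuristic = {"a1": 3, "a2": 5, "b1": -2, "b2": 9}

def minimax(pos, depth, maximizing):
    if depth == 0 or pos not in children:
        return heuristic[pos]
    scores = [minimax(c, depth - 1, not maximizing)
              for c in children[pos]]
    # The maximizer picks the best score; the minimizer (the opponent,
    # assumed to play perfectly) picks the worst one for us.
    return max(scores) if maximizing else min(scores)

value = minimax("root", depth=2, maximizing=True)
```

Note the result: move "b" leads to the tempting leaf worth 9, but a perfect opponent would steer to -2 instead, so minimax correctly prefers move "a" with its guaranteed value of 3.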
Genetic Algorithms
Genetic algorithms borrow directly from biological evolution to solve optimization problems. They’re especially useful when the space of possible solutions is enormous and traditional search methods would take too long.
The process starts by generating a random population of candidate solutions, each encoded as a string of values (analogous to a chromosome). Every candidate is then scored on how well it solves the problem, a measure called its fitness. The best-performing candidates are selected as “parents” for the next generation, following the principle that better individuals have a higher chance of reproducing. Crossover combines parts of two parents at a random point to create new offspring, mixing traits from both. Mutation then randomly tweaks one or more values in the offspring, introducing variety that prevents the population from getting stuck on a mediocre solution. This cycle of evaluation, selection, crossover, and mutation repeats over many generations, gradually producing better and better solutions.
Mutation is particularly important because without it, genetic algorithms tend to converge too quickly on a “local optimum,” a solution that’s better than its neighbors but not the best overall. By injecting random changes, mutation pushes the search into unexplored territory.
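The full evaluate-select-crossover-mutate cycle can be sketched on the classic "OneMax" toy problem: evolving bit strings toward all ones. The population size, rates, and generation count are illustrative assumptions.

```python
import random

# A minimal genetic algorithm: evolve bit strings so that fitness
# (the number of ones) rises generation by generation. All parameters
# are illustrative assumptions.

LENGTH, POP, GENS = 20, 30, 60
random.seed(1)

def fitness(ind):
    return sum(ind)  # more ones = fitter

def mutate(ind, rate=0.02):
    # Random tweaks keep the search from stalling on a local optimum.
    return [bit ^ 1 if random.random() < rate else bit for bit in ind]

population = [[random.randint(0, 1) for _ in range(LENGTH)]
              for _ in range(POP)]
for _ in range(GENS):
    # Selection: the fitter half of the population becomes parents.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    offspring = []
    while len(offspring) < POP:
        p1, p2 = random.sample(parents, 2)
        cut = random.randrange(1, LENGTH)   # crossover point
        child = p1[:cut] + p2[cut:]         # mix traits of both parents
        offspring.append(mutate(child))
    population = offspring

best = max(population, key=fitness)
```

A random string averages ten ones; after a few dozen generations of selection pressure the best individuals sit at or near the maximum of twenty, without the algorithm ever being told what a "good" string looks like beyond the fitness score.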
How AI Techniques Differ From Algorithms
People often use “AI technique” and “algorithm” interchangeably, but they sit at different levels. An algorithm is a specific set of step-by-step instructions for completing a task. An AI technique is the broader strategy or framework that uses one or more algorithms to achieve intelligent behavior. Machine learning is a technique; gradient descent (the math that adjusts a neural network’s weights) is one of the algorithms it uses. Reinforcement learning is a technique; the specific rules for updating reward estimates are algorithms within it. In short, algorithms are the building blocks, and AI techniques are the structures built from them.
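To make the distinction concrete, here is gradient descent in isolation: a specific algorithm, usable by several techniques, minimizing the toy function f(x) = (x - 3)^2. The starting point and learning rate are illustrative assumptions.

```python
# Gradient descent as a standalone algorithm: repeatedly step against
# the slope of f(x) = (x - 3)^2 until x settles at the minimum. The
# starting point and learning rate are illustrative assumptions.

def gradient(x):
    return 2 * (x - 3)      # derivative of (x - 3)^2

x, lr = 0.0, 0.1
for _ in range(100):
    x -= lr * gradient(x)   # step downhill

# x has converged very close to the minimum at 3.
```

When a neural network trains, this same downhill-stepping algorithm runs over millions of weights at once; the technique (machine learning) supplies the goal, and the algorithm (gradient descent) supplies the steps.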
The EU’s AI Act reflects how broadly the term “AI technique” is understood in practice. Its transparency requirements cover AI systems ranging from chatbots to emotion recognition tools to deepfake generators, each built on different underlying techniques but all subject to rules about informing users that they’re interacting with AI and marking AI-generated content using methods like watermarks, metadata tags, and cryptographic verification.