The TechCrunch AI glossary

By TechCrunch | Created at 2025-03-02 15:08:57 | Updated at 2025-03-04 12:24:23

Artificial intelligence is a deep and convoluted world. The scientists who work in this field often rely on jargon to explain what they’re working on. As a result, we frequently have to use those technical terms in our coverage of the artificial intelligence industry. That’s why we thought it would be helpful to put together a glossary with definitions of some of the most important words and phrases we use in our articles.

We will regularly update this glossary with new entries as researchers uncover novel methods to push the frontier of artificial intelligence and identify emerging safety risks.


AI agent

An AI agent refers to a tool that makes use of AI technologies to perform a series of tasks on your behalf — beyond what a more basic AI chatbot could do — such as filing expenses, booking tickets or a table at a restaurant, or even writing and maintaining code. However, as we’ve explained before, there are lots of moving pieces in this emergent space, so different people can mean different things when they refer to an AI agent. Infrastructure is also still being built out to deliver on envisaged capabilities. But the basic concept implies an autonomous system that may draw on multiple AI systems to carry out multi-step tasks.

Chain of thought

Given a simple question, a human brain can answer it without even thinking much about it — questions like “which animal is taller, a giraffe or a cat?” But in many cases, you need a pen and paper to come up with the right answer because there are intermediary steps. For instance, if a farmer has chickens and cows, and together they have 40 heads and 120 legs, you might need to write down a simple equation to come up with the answer (20 chickens and 20 cows).
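The intermediary steps in the farmer puzzle can be written out explicitly — a toy sketch of the kind of step-by-step working that chain-of-thought reasoning mimics:

```python
# The farmer's animals: chickens and cows with 40 heads and 120 legs in total.
heads = 40
legs = 120

# Step 1: let c = chickens and w = cows. Each animal has one head, so c + w = 40.
# Step 2: chickens have 2 legs and cows have 4, so 2c + 4w = 120.
# Step 3: substitute w = heads - c into the legs equation:
#         2c + 4(heads - c) = legs  =>  c = (4 * heads - legs) / 2
chickens = (4 * heads - legs) // 2
cows = heads - chickens

print(f"{chickens} chickens and {cows} cows")  # 20 chickens and 20 cows
```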

In an AI context, chain-of-thought reasoning for large language models means breaking down a problem into smaller, intermediate steps to improve the quality of the end result. It usually takes longer to get an answer, but the answer is more likely to be correct, especially in a logic or coding context. So-called reasoning models are developed from traditional large language models and optimized for chain-of-thought reasoning via reinforcement learning.

(See: Large language model)

Deep learning

A subset of machine learning in which AI algorithms are designed with a multi-layered, artificial neural network (ANN) structure. This allows them to identify more complex correlations in data than simpler machine learning systems, such as linear models or decision trees, can. The structure of deep learning algorithms draws inspiration from the interconnected pathways of neurons in the human brain.

Deep learning AIs are able to identify important characteristics in data themselves, rather than requiring human engineers to define these features. The structure also supports algorithms that can learn from errors and, through a process of repetition and adjustment, improve their own outputs. However, deep learning systems require a lot of data points to yield good results (millions or more). It also typically takes longer to train deep learning algorithms than simpler machine learning ones — so development costs tend to be higher.
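To make the multi-layered structure concrete, here is a minimal sketch of a two-layer network in plain Python, with hand-picked (entirely made-up) weights — a real deep learning system would have many more layers and would learn its weights from data:

```python
def relu(x):
    # A common activation function: pass positives through, zero out negatives.
    return max(0.0, x)

def layer(inputs, weights, biases):
    # Each neuron computes a weighted sum of all its inputs plus a bias,
    # then applies the activation function.
    return [relu(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Two stacked layers: 2 inputs -> 3 hidden neurons -> 1 output.
hidden = layer([0.5, -1.0],
               weights=[[0.2, 0.8], [-0.5, 0.1], [0.9, -0.3]],
               biases=[0.1, 0.0, -0.2])
output = layer(hidden, weights=[[1.0, -1.0, 0.5]], biases=[0.0])
print(output)
```

Stacking layers like this is what lets the network build up complex correlations: each layer combines the outputs of the one before it.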

(See: Neural network)

Fine-tuning

This means the further training of an AI model to optimize its performance for a more specific task or area than its initial training focused on — typically by feeding in new, specialized (i.e. task-oriented) data.

Many AI startups take large language models as a starting point for building a commercial product, then seek to amp up utility for a target sector or task by supplementing earlier training cycles with fine-tuning based on their own domain-specific knowledge and expertise.
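The idea is easier to see on a toy model than on an LLM. A sketch, with made-up numbers: a single “pretrained” weight is nudged further by gradient descent on new, domain-specific data:

```python
# A toy "pretrained" model: y = w * x, where w was learned on broad, general data.
w = 2.0  # pretend this came from an earlier, general-purpose training run

# New, specialized data where the true relationship is y = 3x.
domain_data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

# Fine-tuning: continue gradient descent from the pretrained weight,
# usually with a small learning rate so earlier learning isn't wiped out.
learning_rate = 0.01
for _ in range(200):
    for x, y in domain_data:
        error = w * x - y               # how far the prediction is from the target
        w -= learning_rate * error * x  # nudge the weight to reduce the error

print(round(w, 2))  # 3.0 -- the weight has adapted to the new domain
```

The key point is the starting position: rather than training from scratch, the model begins from weights that already encode general knowledge and only adjusts them for the new task.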

(See: Large language model (LLM))

Large language model (LLM)

Large language models, or LLMs, are the AI models used by popular AI assistants, such as ChatGPT, Claude, Google’s Gemini, Meta’s Llama, Microsoft Copilot, or Mistral’s Le Chat. When you chat with an AI assistant, you interact with a large language model that processes your request directly or with the help of different available tools, such as web browsing or code interpreters.

AI assistants and LLMs can have different names. For instance, GPT is OpenAI’s large language model and ChatGPT is the AI assistant product.

LLMs are deep neural networks made of billions of numerical parameters (or weights, see below) that learn the relationships between words and phrases and create a representation of language, a sort of multidimensional map of words.

These representations are created by encoding the patterns the models find in billions of books, articles, and transcripts. When you prompt an LLM, the model generates the most likely pattern that fits the prompt. It then evaluates the most probable next word based on what came before. Repeat, repeat, and repeat.
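That generation loop can be illustrated with a toy model that merely counts which word follows which in a tiny corpus — a real LLM learns far richer patterns across billions of parameters, but the loop has the same shape:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat sat on the bed".split()

# Count which word follows each word: a toy "model" of next-word patterns.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # Pick the most probable next word given the previous one.
    return follows[word].most_common(1)[0][0]

# Generate by repeatedly predicting the next word -- "repeat, repeat, repeat".
text = ["the"]
for _ in range(4):
    text.append(next_word(text[-1]))
print(" ".join(text))  # the cat sat on the
```

Here each prediction depends only on the single previous word; an LLM conditions on the entire preceding context, which is what makes its output coherent over long stretches of text.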

(See: Neural network)

Neural network

Neural network refers to the multi-layered algorithmic structure that underpins deep learning — and, more broadly, the whole boom in generative AI tools following the emergence of large language models. 

Although the idea to take inspiration from the densely interconnected pathways of the human brain as a design structure for data processing algorithms dates all the way back to the 1940s, it was the much more recent rise of graphics processing units (GPUs) — via the video game industry — that really unlocked the power of the theory. These chips proved well suited to training algorithms with many more layers than was possible in earlier epochs — enabling neural network-based AI systems to achieve far better performance across many domains, whether voice recognition, autonomous navigation, or drug discovery.

(See: Large language model (LLM))

Weights

Weights are core to AI training as they determine how much importance (or weight) is given to different features (or input variables) in the data used for training the system — thereby shaping the AI model’s output. 

Put another way, weights are numerical parameters that define what’s most salient in a data set for the given training task. They achieve their function by applying multiplication to inputs. Model training typically begins with weights that are randomly assigned, but as the process unfolds, the weights adjust as the model seeks to arrive at an output that more closely matches the target.

For example, an AI model for predicting house prices that’s trained on historical real estate data for a target location could include weights for features such as the number of bedrooms and bathrooms, whether a property is detached or semi-detached, and whether it has parking or a garage, and so on.

Ultimately, the weights the model attaches to each of these inputs reflect how much they influence the value of a property, based on the given data set.
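A sketch of that process on made-up housing data: two weights start out random and are repeatedly adjusted until the model’s predictions match the targets:

```python
import random

random.seed(0)

# Made-up training data: (bedrooms, bathrooms) -> price in thousands.
# The hidden pattern here is price = 100 * bedrooms + 50 * bathrooms.
data = [((2, 1), 250), ((3, 2), 400), ((4, 2), 500), ((3, 1), 350)]

# Training typically begins with randomly assigned weights...
weights = [random.random(), random.random()]

# ...which are then adjusted so outputs move closer to the targets.
learning_rate = 0.01
for _ in range(5000):
    for features, target in data:
        prediction = sum(w * x for w, x in zip(weights, features))
        error = prediction - target
        # Nudge each weight in proportion to its input's contribution.
        weights = [w - learning_rate * error * x
                   for w, x in zip(weights, features)]

print([round(w) for w in weights])  # [100, 50]
```

The learned weights recover the pattern in the data: each extra bedroom adds about 100 (thousand) to the predicted price, each extra bathroom about 50 — a direct readout of how much each feature influences the output.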

Natasha is a senior reporter for TechCrunch, joining September 2012, based in Europe. She joined TC after a stint reviewing smartphones for CNET UK and, prior to that, more than five years covering business technology for silicon.com (now folded into TechRepublic), where she focused on mobile and wireless, telecoms & networking, and IT skills issues. She has also freelanced for organisations including The Guardian and the BBC. Natasha holds a First Class degree in English from Cambridge University, and an MA in journalism from Goldsmiths College, University of London.

Romain Dillet is a Senior Reporter at TechCrunch. He has written over 3,000 articles on technology and tech startups and has established himself as an influential voice on the European tech scene. He has a deep background in startups, privacy, security, fintech, blockchain, mobile, social and media. With twelve years of experience at TechCrunch, he’s one of the familiar faces of the tech publication that obsessively covers Silicon Valley and the tech industry. In fact, his career started at TechCrunch when he was 21. Based in Paris, many people in the tech ecosystem consider him as the most knowledgeable tech journalist in town. Romain likes to spot important startups before anyone else. He was the first person to cover N26, Revolut and DigitalOcean. He has written scoops on large acquisitions from Apple, Microsoft and Snap. When he’s not writing, Romain is also a developer — he understands how the tech behind the tech works. He also has a deep historical knowledge of the computer industry for the past 50 years. He knows how to connect the dots between innovations and the effect on the fabric of our society. Romain graduated from Emlyon Business School, a leading French business school specialized in entrepreneurship. He has helped several non-profit organizations, such as StartHer, an organization that promotes education and empowerment of women in technology, and Techfugees, an organization that empowers displaced people with technology.
