
Neural Network

A computational model loosely inspired by the brain, made of interconnected layers of weighted nodes.

Full Definition

A neural network is a machine learning model composed of layers of interconnected nodes (neurons), each performing a weighted sum of inputs followed by a nonlinear activation function. During training, the weights are adjusted via backpropagation and gradient descent to minimise a loss function on a training dataset. Deep neural networks (DNNs) stack many layers, allowing them to learn hierarchical representations — edges → shapes → objects in vision; characters → words → syntax → semantics in language. Modern LLMs are transformer-based deep neural networks with billions of parameters. Understanding neural network fundamentals is a prerequisite for grasping why LLMs behave as they do.
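The core loop described above — weighted sum, nonlinear activation, then weight updates via gradient descent — can be sketched in a few lines of plain Python. This is an illustrative toy (a single sigmoid neuron learning logical OR), not code from any particular library; the data, learning rate, and epoch count are arbitrary choices for the sketch.

```python
import math
import random

def sigmoid(z):
    """Nonlinear activation squashing any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Toy dataset: logical OR, which a single neuron can learn.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # one weight per input
b = 0.0                                             # bias term
lr = 0.5                                            # learning rate

for epoch in range(2000):
    for (x1, x2), y in data:
        z = w[0] * x1 + w[1] * x2 + b   # weighted sum of inputs
        a = sigmoid(z)                  # nonlinear activation
        # Gradient of the squared-error loss 0.5*(a - y)^2 w.r.t. z,
        # via the chain rule — the one-neuron case of backpropagation.
        dz = (a - y) * a * (1 - a)
        # Gradient descent: nudge each weight against its gradient.
        w[0] -= lr * dz * x1
        w[1] -= lr * dz * x2
        b -= lr * dz

preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(preds)  # after training, the neuron reproduces OR: [0, 1, 1, 1]
```

A deep network repeats exactly this pattern — weighted sums and activations stacked in layers — with backpropagation carrying the gradient through every layer instead of just one.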

Examples

1. A three-layer feedforward network learning to classify handwritten digits (MNIST) with 98% accuracy.

2. A transformer neural network learning to predict the next word in a sentence by adjusting 175 billion weights over trillions of training examples.

Apply this in your prompts

PromptITIN automatically draws on concepts like neural networks to build better prompts for you.


Related Terms

Transformer

The neural network architecture that underpins all modern large language models.


Attention Mechanism

The core transformer operation that weighs the relevance of each token to every other token in the sequence.


Embedding

A dense numerical vector that represents a token, sentence, or document in a continuous vector space.
