/baɪt/
noun … “the standard unit of digital storage.”
A Byte is the fundamental unit of memory in computing, typically consisting of 8 bits. Each bit represents a binary state, either 0 or 1, so an 8-bit Byte can encode 2⁸ = 256 distinct values, 0 through 255 when interpreted as an unsigned integer. This makes it the basic building block for representing data such as numbers, characters, and small logical flags in memory or on disk.
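A minimal sketch in Python of this arithmetic: one 8-bit value, the 256-value range it belongs to, and how it packs into exactly one Byte (the binary literal and variable names are illustrative).

```python
# A single Byte holds 8 bits, giving 2 ** 8 = 256 possible values.
value = 0b10110010                        # one Byte written in binary (178 in decimal)

print(value.bit_length() <= 8)            # True: the value fits within 8 bits
print(2 ** 8)                             # 256 distinct values
print(0, 2 ** 8 - 1)                      # unsigned range: 0 to 255

# Packing the value into raw bytes shows it occupies exactly one Byte.
raw = value.to_bytes(1, byteorder="big")
print(raw, len(raw))                      # b'\xb2' 1
```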
The Byte underpins virtually all modern computing architectures. Memory sizes, file sizes, and data transfer rates are commonly expressed in multiples of the Byte, such as kilobytes, megabytes, and gigabytes. Hardware registers, caches, and network protocols are typically organized around Byte boundaries, and main memory is usually Byte-addressable, which keeps low-level operations predictable and efficient.
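As an illustrative sketch, assuming binary multiples of 1024 (decimal multiples of 1000 are also common), a raw Byte count can be converted into the larger units mentioned above; the 5 GB figure is made up for the example.

```python
# Convert a Byte count into larger units, using 1 KB = 1024 Bytes here.
size_bytes = 5_368_709_120            # e.g. a 5 GB file

size_kb = size_bytes / 1024
size_mb = size_bytes / 1024 ** 2
size_gb = size_bytes / 1024 ** 3

print(f"{size_bytes} Bytes")
print(f"= {size_kb:,.0f} KB = {size_mb:,.0f} MB = {size_gb:.0f} GB")
```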
Many numeric types are defined in terms of Bytes. For example, INT8 and UINT8 occupy exactly 1 Byte, while wider types like INT16 or UINT16 use 2 Bytes. Memory alignment, packing, and low-level binary protocols rely on this predictable sizing.
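A short sketch of this sizing, using NumPy dtypes as stand-ins for the INT8/UINT8/INT16/UINT16 types named above (an assumption; the same widths apply to equivalent fixed-size types in other languages).

```python
import numpy as np

# Fixed-size numeric types and the number of Bytes each one occupies.
for dtype in (np.int8, np.uint8, np.int16, np.uint16, np.int32):
    info = np.dtype(dtype)
    print(f"{info.name:>6}: {info.itemsize} Byte(s)")
```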
In practice, the Byte serves as both a unit of measurement and a container. A character in a text file, a pixel in a grayscale image, or a small flag in a network header can each fit in a single Byte. When working with larger datasets, Bytes are grouped into arrays or buffers, forming the foundation for everything from simple files to high-performance scientific simulations.
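A minimal sketch of single Bytes as containers and of Bytes grouped into a buffer, using Python's built-in bytes and bytearray types; the character, pixel value, and flag below are illustrative.

```python
# Single Bytes as containers: a character, a grayscale pixel, a packed flag.
char_byte = "A".encode("ascii")        # one character -> one Byte
pixel_byte = bytes([200])              # one grayscale intensity (0-255)
flag_byte = bytes([0b0000_0001])       # a small flag packed into one Byte

print(len(char_byte), len(pixel_byte), len(flag_byte))   # 1 1 1

# Bytes grouped into a buffer: a tiny 2x2 grayscale "image" as a flat bytearray.
image = bytearray([0, 64, 128, 255])
image[2] = 100                         # buffers are mutable arrays of Bytes
print(list(image), len(image), "Bytes")
```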
The intuition anchor is simple: a Byte is a tiny crate for bits, small, standard, and indispensable. Every piece of digital information passes through this basic container, making it the heartbeat of computing.