/sɪmˈdiː/
n. "Single Instruction Multiple Data parallel processing executing identical operation across vector lanes simultaneously."
SIMD is a parallel computing paradigm in which one instruction operates on multiple data elements held in wide vector registers: a 512-bit AVX-512 register processes 16x FP32 or 8x FP64 elements simultaneously, accelerating FFT butterflies and matrix multiplies in HPC. Vector ISA extensions such as Intel AVX2 and Arm SVE2 apply a single opcode across all lanes, while masking/predication handles conditional execution without branching.
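As a minimal illustration of the paradigm, a dependence-free C loop like the hypothetical `vadd` below is exactly what an auto-vectorizer maps onto vector lanes; with, e.g., gcc -O3 -mavx2, the body typically compiles down to VADDPS over 8 FP32 lanes per iteration (the function name and flags are illustrative, not from the original text):

```c
/* vadd is a hypothetical example kernel: element-wise addition with no
   cross-iteration dependences, the canonical auto-vectorization target. */
void vadd(const float *a, const float *b, float *c, int n) {
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];  /* one scalar add in the source; the compiler
                                can emit VADDPS over 8 lanes at once */
}
```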
Key characteristics of SIMD include:
- Vector Widths: SSE 128-bit (4x FP32), AVX2 256-bit (8x FP32), AVX-512 512-bit (16x FP32).
- Packed Arithmetic: ADDPS/SUBPS/MULPS operate lane-wise (vertically) across the register; horizontal ops such as HADDPS combine lanes within a register; FMA accelerates BLAS kernels.
- Mask Registers: AVX-512 write-masks k1-k7 (k0 means unmasked) predicate per-lane execution, avoiding branch divergence; see the blend-based sketch after this list.
- Gather/Scatter: Non-contiguous loads/stores for strided or indexed access patterns (gather since AVX2; scatter requires AVX-512).
- Auto-Vectorization: ICC/GCC at -O3 detect independent loop iterations and emit packed instructions such as VMOVDQA and VADDPS.
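To make per-lane masking concrete, here is a hedged sketch in C using AVX2 intrinsics: AVX2 lacks the k0-k7 registers of AVX-512, so a comparison result itself serves as the mask and a blend replaces the branch. The function name `lane_max` and the unaligned loads are illustrative assumptions (compile with, e.g., gcc -O3 -mavx):

```c
#include <immintrin.h>

/* Branchless per-lane select: out[i] = (a[i] > b[i]) ? a[i] : b[i].
   The compare produces an all-ones/all-zeros mask per lane, consumed
   by a blend instead of a conditional branch. */
void lane_max(const float *a, const float *b, float *out, int n) {
    for (int i = 0; i + 8 <= n; i += 8) {
        __m256 va   = _mm256_loadu_ps(a + i);            /* load 8 FP32 lanes */
        __m256 vb   = _mm256_loadu_ps(b + i);
        __m256 mask = _mm256_cmp_ps(va, vb, _CMP_GT_OQ); /* per-lane mask     */
        __m256 vout = _mm256_blendv_ps(vb, va, mask);    /* mask-driven pick  */
        _mm256_storeu_ps(out + i, vout);
    }
    /* scalar tail for n not divisible by 8 */
    for (int i = n & ~7; i < n; i++)
        out[i] = a[i] > b[i] ? a[i] : b[i];
}
```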
A conceptual example of a SIMD multiply-accumulate (FMA) flow:
1. Load 8x FP32 vectors: VMOVAPS ymm0, [rsi] ; VMOVAPS ymm1, [rdx]
2. SIMD FMA: VFMADD231PS ymm2, ymm1, ymm0 (8 multiply-accumulates per instruction)
3. Horizontal sum: VHADDPS ymm3, ymm2, ymm2 adds adjacent lane pairs only; a full cross-lane reduction needs an extra 128-bit fold (shown in the sketch after this list)
4. Store result: VMOVAPS [r8], ymm3
5. Advance pointers rsi+=32, rdx+=32, r8+=32
6. Loop 1024x → 8 FP32 elements per iteration (16 FLOPs with FMA) vs 1 element (2 FLOPs) scalar

Conceptually, SIMD is like a teacher grading identical math problems across 16 desks simultaneously: one instruction (add) operates on multiple data (test answers), yielding up to a 16x speedup when the problems match.
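Returning to the numbered flow, the same kernel can be written with C intrinsics. This is a minimal sketch, assuming AVX2 and FMA support (compile with, e.g., gcc -O3 -mavx2 -mfma), n divisible by 8, and the illustrative function name `dot8`, none of which come from the original text:

```c
#include <immintrin.h>

/* Hypothetical dot-product kernel mirroring steps 1-6 above:
   acc += a[i] * b[i] over 8 FP32 lanes per iteration. */
float dot8(const float *a, const float *b, int n) {
    __m256 acc = _mm256_setzero_ps();
    for (int i = 0; i < n; i += 8) {                /* steps 5-6: stride 8  */
        __m256 va = _mm256_loadu_ps(a + i);         /* step 1: vector loads */
        __m256 vb = _mm256_loadu_ps(b + i);
        acc = _mm256_fmadd_ps(va, vb, acc);         /* step 2: 8 FMAs       */
    }
    /* step 3: horizontal reduction; fold the two 128-bit halves first,
       then combine adjacent pairs within one half. */
    __m128 lo  = _mm256_castps256_ps128(acc);
    __m128 hi  = _mm256_extractf128_ps(acc, 1);
    __m128 sum = _mm_add_ps(lo, hi);                /* 8 -> 4 lanes */
    sum = _mm_hadd_ps(sum, sum);                    /* 4 -> 2 lanes */
    sum = _mm_hadd_ps(sum, sum);                    /* 2 -> 1 lane  */
    return _mm_cvtss_f32(sum);                      /* step 4: scalar out */
}
```

The half-fold before the two HADDPS steps is what step 3 alone omits: VHADDPS only combines adjacent pairs within each 128-bit half of a ymm register.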
In essence, SIMD turbocharges data-parallel workloads: HBM-fed AI training, FFT spectrum analysis of SerDes links, and PAM4 equalization filters all vectorize the same arithmetic across wide registers, down to vector-optimized Bluetooth basebands on EMI-shielded ENIG boards.