Synchronization

/ˌsɪŋkrənaɪˈzeɪʃən/

noun — "coordination of concurrent execution."

Synchronization is the set of techniques used in computing to coordinate the execution of concurrent threads or processes so they can safely share resources, exchange data, and maintain correct ordering of operations. Its primary purpose is to prevent race conditions, ensure consistency, and impose well-defined execution relationships in systems where multiple units of execution operate simultaneously.
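One of the ordering relationships described above can be sketched with Python's standard `threading.Event`: a consumer blocks until a producer signals that shared data is ready, guaranteeing the consumer never observes an unwritten value. The thread structure and variable names here are illustrative, not from the original text.

```python
import threading

data = []
ready = threading.Event()  # synchronization point: consumer waits for producer

def producer():
    data.append(42)        # prepare the shared data
    ready.set()            # signal: the append is guaranteed to happen before wait() returns

def consumer(out):
    ready.wait()           # block until the producer signals
    out.append(data[0])    # safe: the produced value is guaranteed to be visible

result = []
t1 = threading.Thread(target=consumer, args=(result,))
t2 = threading.Thread(target=producer)
t1.start(); t2.start()
t1.join(); t2.join()
print(result)  # [42]
```

Without the event, the consumer could run first and index into an empty list; the synchronization primitive imposes the required ordering.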

Mutex

/ˈmjuːtɛks/

noun — "locks a resource to one thread at a time."

Mutex, short for mutual exclusion, is a synchronization primitive used in multithreaded or multiprocess systems to control access to shared resources. It ensures that only one thread or process can access a critical section or resource at a time, preventing race conditions, data corruption, or inconsistent state. When a thread locks a mutex, other threads attempting to acquire the same mutex are blocked until it is released.
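A minimal demonstration with Python's `threading.Lock` (Python's mutex type): four threads increment a shared counter, and the lock makes each read-modify-write atomic. The thread and iteration counts are arbitrary choices for illustration.

```python
import threading

counter = 0
lock = threading.Lock()     # the mutex

def increment(n):
    global counter
    for _ in range(n):
        with lock:          # acquire; other threads block here until release
            counter += 1    # critical section: one thread at a time

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 — without the lock, lost updates could yield a smaller total
```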

Virtual Memory

/ˈvɜːrtʃuəl ˈmɛməri/

noun — "memory abstraction larger than physical RAM."

Virtual Memory is a memory management technique that allows a computer system to present each process with the illusion of a large, contiguous address space, regardless of the actual amount of physical memory installed. It decouples a program’s view of memory from the hardware reality, enabling systems to run applications whose memory requirements exceed available RAM while maintaining isolation, protection, and efficiency.
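The core mechanism behind this illusion is address translation: a virtual address is split into a page number and an offset, and a page table maps pages to physical frames. The toy page table, 4 KiB page size, and `translate` function below are assumptions made purely to sketch the idea, not a model of any real MMU.

```python
PAGE_SIZE = 4096  # assumed page size for this sketch (common on x86-64)

# Toy page table: virtual page number -> physical frame number
page_table = {0: 7, 1: 3, 2: 9}

def translate(vaddr):
    # Split the virtual address into (page number, offset within page).
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        # A real OS would handle this fault, e.g. by paging data in from disk.
        raise RuntimeError(f"page fault: virtual page {vpn} not resident")
    return page_table[vpn] * PAGE_SIZE + offset

print(translate(4100))  # virtual page 1, offset 4 -> frame 3 -> 12292
```

The page-fault path is what lets total virtual memory exceed physical RAM: non-resident pages live on disk until touched.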

Signal Processing

/ˈsɪɡnəl ˈprɑːsɛsɪŋ/

noun — "Analyzing, modifying, and interpreting signals."
noun — "Analyzing, modifying, and interpreting signals."

Signal Processing is the field of engineering and computer science concerned with the analysis, transformation, and manipulation of signals to extract information, improve quality, or enable transmission and storage. Signals can be analog (continuous) or digital (discrete), representing phenomena such as sound, images, temperature, or electromagnetic waves.
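A classic example of such a transformation is a moving-average filter, a simple FIR low-pass filter that smooths a noisy discrete signal. The window length and sample values below are arbitrary illustrative choices.

```python
def moving_average(signal, window):
    # Each output sample is the mean of the last `window` input samples,
    # attenuating rapid (high-frequency) fluctuations.
    out = []
    for i in range(len(signal) - window + 1):
        out.append(sum(signal[i:i + window]) / window)
    return out

noisy = [1.0, 9.0, 1.0, 9.0, 1.0, 9.0]  # alternating (high-frequency) signal
print(moving_average(noisy, 2))          # [5.0, 5.0, 5.0, 5.0, 5.0] — smoothed flat
```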

Reference Counting

/ˈrɛfərəns ˈkaʊntɪŋ/

noun — "Track object usage to reclaim memory."

Reference Counting is a memory management technique in which each object maintains a counter representing the number of references or pointers to it. When the reference count drops to zero, the object is no longer accessible and can be safely deallocated from heap memory. This method is used to prevent memory leaks and manage lifetimes of objects in languages like Python, Swift, and Objective-C.
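The counter can be observed directly in CPython via `sys.getrefcount` (the reported numbers are CPython implementation details; the call itself holds one temporary reference, which is why differences rather than absolute counts are printed here):

```python
import sys

class Node:
    pass

obj = Node()
base = sys.getrefcount(obj)         # baseline count, including the call's own temporary reference

alias = obj                          # a second reference to the same object
print(sys.getrefcount(obj) - base)   # 1 — the count rose by one

del alias                            # reference dropped
print(sys.getrefcount(obj) - base)   # 0 — back to baseline; at zero, CPython frees the object
```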

Key characteristics of Reference Counting include:

- Deterministic reclamation: an object is freed immediately when its count reaches zero, not at some later collection pass.
- Per-operation overhead: every reference copy or deletion must update the counter.
- Inability to reclaim reference cycles on its own, which is why CPython pairs it with a cycle-detecting garbage collector.

Wear Leveling

/wɛər ˈlɛvəlɪŋ/

noun — "Evenly distribute writes to prolong memory lifespan."

Wear Leveling is a technique used in non-volatile memory devices, such as Flash storage and SSDs, to prevent certain memory blocks from wearing out prematurely due to repeated program/erase cycles. Flash memory cells have a limited number of write cycles, and wear leveling distributes writes across the device to ensure all blocks age uniformly, extending the effective lifespan of the storage.
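A minimal sketch of the dynamic variant of this policy: each write is directed to the block with the fewest program/erase cycles, so wear accumulates evenly. The block count, workload, and data structures are invented for illustration; real SSD firmware also handles static data migration, mapping tables, and bad-block management.

```python
NUM_BLOCKS = 4
erase_counts = [0] * NUM_BLOCKS   # program/erase cycles per block

def write(data, log):
    block = erase_counts.index(min(erase_counts))  # pick the least-worn block
    erase_counts[block] += 1                       # this write costs one P/E cycle
    log.append((block, data))                      # record where the data landed

log = []
for i in range(8):
    write(f"page-{i}", log)

print(erase_counts)  # [2, 2, 2, 2] — wear is uniform across all blocks
```

Without leveling (e.g. always rewriting block 0 in place), one block would absorb all 8 cycles while the others stayed fresh.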

Garbage Collection

/ˈɡɑːrbɪdʒ kəˈlɛkʃən/

noun — "Automatic memory reclamation."

Garbage Collection is a runtime process in programming languages that automatically identifies and reclaims memory occupied by objects that are no longer reachable or needed by a program. This eliminates the need for manual deallocation and reduces memory leaks, particularly in managed languages like Java, C#, and Python. Garbage collection works closely with heap memory, tracking allocations and references to determine which memory blocks can be safely freed.
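Python's `gc` module exposes its cycle-detecting collector, which reclaims objects that reference counting alone cannot (see Reference Counting above). This CPython-specific sketch uses `weakref.finalize` to observe the moment of reclamation; the `Node` class is an illustrative stand-in.

```python
import gc
import weakref

class Node:
    def __init__(self):
        self.partner = None

collected = []
a, b = Node(), Node()
a.partner, b.partner = b, a            # reference cycle: counts can never reach zero
weakref.finalize(a, collected.append, "a freed")

del a, b                               # both nodes are now unreachable, yet still allocated
gc.collect()                           # the cycle detector finds and reclaims them
print(collected)                       # ["a freed"]
```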

Cache Coherency

/kæʃ koʊˈhɪərənsi/

noun — "Keeping multiple caches in sync."

Cache Coherency is the consistency model ensuring that multiple copies of data in different caches reflect the same value at any given time. In multiprocessor or multi-core systems, each CPU may have its own cache, and maintaining coherency prevents processors from operating on stale or conflicting data. Cache coherency is critical for correctness in concurrent programs and high-performance systems.
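The staleness problem and one common remedy can be sketched with a toy write-invalidate protocol (a heavy simplification of real protocols such as MESI, and not a model of any specific hardware): when one core writes a line, every other core's cached copy is invalidated, forcing a fresh read later.

```python
memory = {"x": 0}   # shared main memory

class Cache:
    def __init__(self, all_caches):
        self.lines = {}                      # this core's private cache
        self.all_caches = all_caches
        all_caches.append(self)

    def read(self, addr):
        if addr not in self.lines:           # miss: fetch from memory
            self.lines[addr] = memory[addr]
        return self.lines[addr]

    def write(self, addr, value):
        for c in self.all_caches:            # broadcast invalidation to peers
            if c is not self:
                c.lines.pop(addr, None)
        self.lines[addr] = value
        memory[addr] = value                 # write-through, for simplicity

caches = []
cpu0, cpu1 = Cache(caches), Cache(caches)
cpu1.read("x")          # cpu1 caches x = 0
cpu0.write("x", 42)     # invalidates cpu1's copy
print(cpu1.read("x"))   # 42 — without invalidation, cpu1 would return stale 0
```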