Concurrency (high)

Threading

Python's threading module provides OS threads that share memory. Useful for I/O-bound concurrency. CPU-bound tasks see no speedup due to the GIL.

Memory anchor

Threads = cooks sharing one kitchen (memory) but taking turns at one stove (GIL). Great when everyone is waiting for the oven (I/O), useless when everyone needs the stove (CPU).

Expected depth

threading.Thread(target=fn, args=(...)).start() launches a thread. Thread safety for shared state requires explicit synchronization: threading.Lock(), RLock (reentrant lock), Semaphore, Event, Condition. ThreadPoolExecutor from concurrent.futures provides a higher-level API with submit() and as_completed().
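The pieces above can be sketched together: a raw `Thread`, a `Lock`-guarded critical section, and the pool-based API. This is a minimal illustration; `fetch` here is a stand-in for real I/O-bound work.

```python
import threading
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(n):
    # Placeholder for I/O-bound work (e.g. a network call).
    return n * 2

# Low-level API: launch and join a thread directly.
t = threading.Thread(target=fetch, args=(21,))
t.start()
t.join()

# Shared state guarded by an explicit Lock.
counter = 0
lock = threading.Lock()

def bump():
    global counter
    with lock:                 # critical section
        counter += 1

threads = [threading.Thread(target=bump) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4

# Higher-level API: a pool with submit() and as_completed().
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(fetch, n) for n in range(5)]
    results = sorted(f.result() for f in as_completed(futures))
print(results)  # [0, 2, 4, 6, 8]
```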

Deep — senior internals

Thread creation is expensive (~1ms overhead per thread). For I/O-bound workloads with high concurrency, asyncio is more efficient. The GIL is released during I/O syscalls, during C extension calls that explicitly drop it, and at the interpreter's switch interval (5ms by default; see sys.getswitchinterval()). Daemon threads die with the main thread. threading.local() provides per-thread storage. Inconsistent lock-acquisition order causes deadlocks — always acquire locks in a fixed global order, or use timeout parameters. Python 3.13 adds an experimental free-threaded mode (--disable-gil build).
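A few of these internals can be seen directly: the switch interval, per-thread storage via `threading.local()`, and a fixed-order acquisition pattern for multiple locks. A small sketch (the `id()`-based ordering is just one conventional choice of global order):

```python
import sys
import threading

# GIL switch interval: a running thread is asked to yield roughly
# this often (default is 0.005 seconds).
print(sys.getswitchinterval())

# Per-thread storage: each thread sees its own .value attribute.
store = threading.local()
seen = []

def worker(n):
    store.value = n            # does not clobber other threads' value
    seen.append(store.value)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(seen))  # [0, 1, 2]

# Deadlock avoidance: acquire multiple locks in one fixed global order
# (here: sorted by id()), so two threads can never wait on each other
# in a cycle.
a, b = threading.Lock(), threading.Lock()
first, second = sorted((a, b), key=id)
with first, second:
    pass  # critical section touching both resources
```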

🎤 Interview-ready answer

Threading is ideal for I/O-bound work — file reads, network calls, database queries — because the GIL releases during I/O, so threads genuinely run concurrently during those waits. For CPU-bound work, threads give no benefit (and actually add overhead) due to the GIL. Use ThreadPoolExecutor for a clean pool-based API. For shared mutable state, use Lock or higher-level Queue. For high-concurrency I/O, asyncio is more efficient than threads.
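The Queue-over-Lock advice can be shown with a small producer/consumer sketch: `queue.Queue` handles the synchronization internally, so the workers never touch a lock themselves. The sentinel-per-worker shutdown is one common convention, not the only one.

```python
import queue
import threading

tasks = queue.Queue()
results = queue.Queue()

def worker():
    while True:
        item = tasks.get()
        if item is None:       # sentinel: shut this worker down
            tasks.task_done()
            break
        results.put(item * item)
        tasks.task_done()

workers = [threading.Thread(target=worker, daemon=True) for _ in range(2)]
for w in workers:
    w.start()

for n in range(5):
    tasks.put(n)
for _ in workers:
    tasks.put(None)            # one sentinel per worker
tasks.join()                   # blocks until every task_done() is called

out = sorted(results.get() for _ in range(5))
print(out)  # [0, 1, 4, 9, 16]
```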

Common trap

Even with the GIL, shared mutable state between threads is dangerous. The GIL keeps individual bytecode operations (like reference count updates) from corrupting interpreter state, but compound operations (read-modify-write) are not atomic. Use threading.Lock() around any critical section.
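The trap is easy to demonstrate: `counter += 1` compiles to a read, an add, and a write, and a thread switch between them loses an update. A minimal sketch (the race in the unsafe version is timing-dependent and may not fire on every run):

```python
import threading

N = 100_000
counter = 0
lock = threading.Lock()

def unsafe():
    global counter
    for _ in range(N):
        counter += 1           # read-modify-write: NOT atomic

def safe():
    global counter
    for _ in range(N):
        with lock:             # critical section
            counter += 1

def run(fn, workers=4):
    global counter
    counter = 0
    ts = [threading.Thread(target=fn) for _ in range(workers)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter

racy = run(unsafe)   # can be less than 400_000 when updates are lost
exact = run(safe)    # always exact
print(exact)  # 400000
```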

Related concepts