Connection Pooling
Connection pooling maintains a pool of pre-established database connections that are reused across application requests, rather than opening a new connection for each request. Opening a PostgreSQL connection takes 50-100ms and allocates ~10MB of RAM per backend process. With hundreds of application servers making requests, connection pooling prevents exhausting the database's connection limit and eliminates the per-request connection overhead.
Connection pooling is like a hotel lobby phone bank: 1,000 guests share 20 phones because nobody is on the phone all day. Without a pool, every guest gets a dedicated phone line: you run out of lines, and the phone company (PostgreSQL) collapses at ~200 simultaneous calls.
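The core mechanism can be sketched with nothing but the standard library. This is a minimal illustration, not a real driver: the `connect()` factory and the `ConnectionPool` class are hypothetical stand-ins for what PgBouncer (or a driver-level pool) does internally.

```python
import queue

class ConnectionPool:
    """Minimal fixed-size pool: connections are created once, then reused."""

    def __init__(self, connect, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):          # pre-establish every connection up front
            self._pool.put(connect())

    def acquire(self, timeout=None):
        # Blocks until a connection is free -- this is the "20 phones" limit.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

# Hypothetical connect() factory standing in for a real database driver.
counter = {"opened": 0}
def connect():
    counter["opened"] += 1
    return object()  # stand-in for a real DB connection

pool = ConnectionPool(connect, size=5)
for _ in range(100):                   # 100 "requests" share 5 connections
    conn = pool.acquire()
    pool.release(conn)

print(counter["opened"])  # 5 -- only the pre-established connections ever exist
```

The point of the sketch is the ratio: 100 requests, 5 connections, and the expensive `connect()` ran exactly 5 times.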
PgBouncer is the standard connection pooler for PostgreSQL. It operates in three modes: session (one-to-one client-to-server mapping, least benefit), transaction (connection returned to the pool after each transaction, the most common choice), and statement (connection returned after each statement, which breaks multi-statement transactions). Transaction mode is the sweet spot: 20 PgBouncer connections can serve 1000+ application connections because most application time is spent doing non-database work. Key settings: default_pool_size (server connections per user/database pair), max_client_conn (total client connections allowed), server_idle_timeout (how long idle server connections live). In serverless environments (Lambda, Cloud Functions), connection pooling is critical because each invocation might open a new connection; use RDS Proxy (AWS) or a PgBouncer sidecar.
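The key settings above map directly onto pgbouncer.ini. The option names below are real PgBouncer settings; the database entry, port, and values are illustrative, not recommendations:

```ini
[databases]
; illustrative entry -- host and dbname are placeholders
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432             ; apps connect here instead of 5432
pool_mode = transaction        ; return server connection after each transaction
default_pool_size = 20         ; server connections per user/database pair
max_client_conn = 1000         ; total client connections PgBouncer will accept
server_idle_timeout = 600      ; seconds an idle server connection is kept alive
```

Applications are pointed at port 6432; PgBouncer multiplexes those client connections onto the small server-side pool.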
Transaction-mode PgBouncer has limitations: prepared statements traditionally do not work (the prepared statement is tied to a server connection that may change between uses; PgBouncer 1.21+ can track protocol-level prepared statements via max_prepared_statements), session-level settings (SET work_mem) are lost between transactions, and LISTEN/NOTIFY does not work. Some ORMs use prepared statements aggressively (e.g., Rails, SQLAlchemy); you may need to disable them or use session mode. Connection pool sizing follows Little's Law: pool_size = throughput × average_latency. If your app processes 100 queries/sec with 10ms average latency, you need 100 × 0.01 = 1 active connection on average. In practice, add headroom for variance. A built-in, PostgreSQL-native connection pooler is being worked on as an alternative to PgBouncer. For MySQL, ProxySQL provides similar pooling plus query routing, read/write splitting, and query caching.
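The Little's Law sizing above is a one-liner. A small sketch, with a headroom multiplier as an assumed safety factor (not part of the law itself):

```python
import math

def pool_size(throughput_qps, avg_latency_ms, headroom=2.0):
    """Little's Law: concurrent connections = arrival rate * time in system.

    throughput_qps  -- queries per second hitting the database
    avg_latency_ms  -- average time a query holds a connection, in ms
    headroom        -- assumed multiplier to absorb bursts, not a fixed rule
    """
    return math.ceil(throughput_qps * avg_latency_ms / 1000 * headroom)

# 100 queries/sec at 10 ms each -> 1 active connection on average
print(pool_size(100, 10, headroom=1.0))  # 1
# With 2x headroom for variance:
print(pool_size(100, 10))                # 2
```

Even at 1,000 queries/sec and 10ms latency, the law gives only 10 active connections, which is why a 20-30 connection pool serves thousands of clients.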
Connection pooling is non-negotiable for any production PostgreSQL deployment. I use PgBouncer in transaction mode, which allows 20-30 database connections to serve thousands of application connections. The key insight is that applications spend most of their time doing non-database work (processing, network calls), so the database connection only needs to be held during the actual transaction. Pool sizing follows Little's Law: connections = throughput × latency. For serverless environments where each invocation might create a new connection, I use a managed pooler like RDS Proxy. The main gotcha with PgBouncer transaction mode is that prepared statements and session-level settings do not work; you need to configure your ORM accordingly.
A common mistake: setting max_connections in PostgreSQL to a very high number (like 1000) instead of using a connection pooler. PostgreSQL performance degrades significantly beyond ~200 connections due to lock contention and memory pressure. The correct solution is PgBouncer in front of the database with a lower max_connections (100-200) on the database itself.
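The memory cost alone makes the high-max_connections approach painful. A back-of-envelope sketch, using the rough ~10MB-per-backend figure from the opening paragraph (an estimate, not an exact PostgreSQL number):

```python
MB_PER_BACKEND = 10  # rough per-backend memory estimate from the text above

def backend_memory_mb(connections):
    """Approximate memory consumed by PostgreSQL backend processes alone."""
    return connections * MB_PER_BACKEND

print(backend_memory_mb(1000))  # 10000 MB: ~10 GB for 1000 direct connections
print(backend_memory_mb(150))   # 1500 MB behind PgBouncer with max_connections ~150
```

That 10 GB is spent before any query runs, which is why capping max_connections and pooling in front wins.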