Strangler Fig Pattern
A migration strategy where new functionality is built as new services alongside the monolith. Over time, the monolith is incrementally replaced — strangled — until it can be removed.
Strangler fig = the vine that slowly grows around a tree and eventually replaces it. You never chop the old tree down — you let the new growth take over branch by branch.
The pattern works by: (1) placing a facade (API gateway or reverse proxy) in front of the monolith; (2) routing new features to new services through the facade; (3) incrementally extracting existing monolith functionality into services and re-pointing the facade; (4) decommissioning the monolith once it no longer handles any traffic. This incremental path avoids the notoriously high failure rate that big-bang rewrites have shown in industry experience.
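The facade's core job in steps (1)–(3) is a routing table that defaults to the monolith. A minimal sketch, with illustrative upstream names and path prefixes (not a real deployment):

```python
# Sketch of a strangler facade's routing decision: route by path prefix,
# fall through to the monolith for anything not yet extracted.
MONOLITH = "http://monolith.internal"

ROUTES = {
    "/billing": "http://billing-service.internal",  # already extracted
    "/search": "http://search-service.internal",    # new feature, built as a service
}

def upstream_for(path: str) -> str:
    """Return the upstream that should handle this request.

    Anything without an extracted route falls through to the monolith,
    so each extraction is just one more entry re-pointing the facade.
    """
    for prefix, upstream in ROUTES.items():
        if path.startswith(prefix):
            return upstream
    return MONOLITH
```

Because unmatched paths default to the monolith, the facade can go live on day one with an empty route table and grow entry by entry.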
The hardest part of strangler fig is data migration. When you extract a service, it must own its own data store, but the monolith's data may be deeply interleaved with other data in the same tables. The typical approach: (1) the new service writes to its own store and to the monolith's tables for a period (dual-write); (2) a backfill migrates historical data; (3) after a verification period, the monolith tables are made read-only and then dropped. The strangler proxy must handle the case where a request touches both old and new code paths — feature flags or header-based routing can control this at the request level.
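The dual-write, backfill, and verification phases can be sketched with in-memory dicts standing in for the two stores; the service name and record shape here are assumptions for illustration:

```python
# Sketch of the data-migration phases: dual-write, backfill, verify.
# Plain dicts stand in for the new store and the legacy monolith table.
class CustomerService:
    def __init__(self, own_store: dict, monolith_table: dict):
        self.own_store = own_store            # the service's new data store
        self.monolith_table = monolith_table  # legacy table, kept in sync for now

    def save(self, customer_id: str, record: dict) -> None:
        # Dual-write: the new store is the source of truth,
        # and the legacy table is mirrored until cutover.
        self.own_store[customer_id] = record
        self.monolith_table[customer_id] = record

    def backfill(self) -> int:
        # One-off backfill: copy historical rows the new store has never seen.
        copied = 0
        for cid, record in self.monolith_table.items():
            if cid not in self.own_store:
                self.own_store[cid] = record
                copied += 1
        return copied

    def verify(self) -> bool:
        # Verification period: both stores must agree before the legacy
        # table is made read-only and eventually dropped.
        return self.own_store == self.monolith_table
```

Only after `verify()` holds over the verification window does the cutover proceed and the legacy table get retired.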
My strangler fig implementation places Nginx or AWS API Gateway in front of the monolith on day one, before any extraction. This is low-risk and gives us routing control immediately. I extract services in order of highest change frequency and clearest bounded context — not necessarily the largest or most complex. Each extraction follows: extract interface → dual-write → backfill historical data → verify consistency → cut over → decommission the monolith handler.
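The cut-over step can be controlled per request, as described above, with a feature flag plus a header override for testing the new path in production before the flip. The flag and header names here are illustrative assumptions:

```python
# Sketch of request-level cutover control: a feature flag decides the
# default route, while a header lets testers opt in to the new service early.
FLAGS = {"orders-extracted": False}  # flipped to True at cutover

def route_orders(headers: dict) -> str:
    # Header override: exercise the new code path before the global flip.
    if headers.get("X-Use-New-Service") == "1":
        return "orders-service"
    return "orders-service" if FLAGS["orders-extracted"] else "monolith"
```

Flipping the flag back is the rollback path: no deploy is needed to send traffic to the monolith again during the verification window.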
A common failure mode: attempting to extract a service that shares its database with five other monolith modules. Without resolving data ownership first, you've created a distributed monolith. Always extract ownership of the data before extracting the service.