Redis has been the default WordPress object cache backend for years. Install the object-cache.php drop-in, point it at your Redis instance, done. But in 2024 Redis changed its license from BSD to the Server Side Public License (SSPL), prompting the open-source community to fork the project. Valkey emerged as the Linux Foundation-backed open-source continuation. DragonflyDB, meanwhile, claims to be a drop-in Redis replacement that runs 25x faster on modern multi-core hardware.
This matters for WordPress site owners and developers because the object cache is one of the most impactful performance levers available, especially on WooCommerce stores and membership sites where full-page caching does not apply. The question is: in a WordPress-specific workload, does Valkey or DragonflyDB actually outperform Redis? And more practically, is the migration worth it for an existing site? For broader site speed optimization strategies, check our dedicated hub.
This benchmark covers all three options under realistic WordPress load patterns, not synthetic benchmarks. The results may surprise you.
What Is WordPress Object Caching and Why Does It Matter?
WordPress has a built-in object cache that stores the results of expensive operations (database queries, HTTP API responses, complex computations) in memory so they do not need to be repeated on subsequent requests. By default, this cache is non-persistent: it lives only for the duration of a single PHP request and is discarded when the request ends.
A persistent object cache backend (Redis, Valkey, or DragonflyDB) changes this. Results are stored in memory that persists across requests, so the second request to a page that runs an expensive query gets the cached result instead of hitting the database again. The performance impact is most visible on:
- WooCommerce stores: product queries, cart session management, shipping rate calculations, and order lookups are all expensive operations that benefit from object caching
- Membership and LMS sites: user capability checks, subscription status lookups, and course progress queries run on every page for logged-in users
- High-traffic sites: any site where the same data is requested by many users simultaneously
- Sites with many plugins: each plugin that uses transients or the object cache API benefits automatically from a persistent backend
The object cache also helps reduce WordPress TTFB: when expensive queries are cached, PHP execution time drops, and Time to First Byte improves even for pages that cannot be fully page-cached.
The Three Contenders
Redis
Redis is the incumbent. It has been the WordPress community’s object cache backend of choice since the Predis and phpredis libraries became widespread around 2015. Redis 7.x is the current stable line. Command execution is single-threaded (I/O can be offloaded to threads since Redis 6), and it is battle-tested and supported by every major managed hosting provider that offers a persistent object cache.
The license change to SSPL in 2024 affects managed service providers and SaaS companies more than individual WordPress site owners: you can still run Redis on your own server for free. For the typical WordPress deployment, the main reason to consider alternatives is performance and future-proofing, not licensing.
Valkey
Valkey is a Linux Foundation project that forked from Redis 7.2.4 in April 2024 in response to the license change. It maintains full Redis protocol compatibility: any Redis client library (phpredis, Predis) works with Valkey without modification. AWS, Google Cloud, Oracle, and Ericsson are among the founding contributors.
Valkey 8.0 (released late 2024) introduced multi-threaded I/O improvements, and its throughput under high concurrency is beginning to diverge from the Redis codebase. For WordPress, this means Valkey is effectively Redis-compatible but with an open BSD license and active open-source governance. The upgrade path from Redis to Valkey is straightforward: swap the binary, point your existing configuration at it, done.
DragonflyDB
DragonflyDB takes a fundamentally different architectural approach. While Redis and Valkey execute commands on a single thread per shard, DragonflyDB uses a multi-threaded, shared-nothing design that scales across all CPU cores on a modern server. Its developers claim up to 25x the throughput of Redis on equivalent hardware in synthetic benchmarks.
DragonflyDB also implements Redis protocol compatibility, so the same phpredis or Predis client works. However, there are behavioral differences in some edge cases (particularly around Lua scripting, some lesser-used data structures, and replication) that matter little for typical WordPress workloads but could affect certain plugins.
Benchmark Setup and Methodology
The benchmark was run on a VPS with 4 vCPUs and 8GB RAM, with each cache backend given 512MB of memory, on PHP 8.3 with OPcache, WordPress 7.0, and WooCommerce 9.x. The WordPress Redis Object Cache plugin (by Till Krüss) was used as the object-cache.php drop-in for all three backends; it supports all three via the same interface.
Three workloads were tested to reflect real WordPress usage patterns:
- WooCommerce product archive: simulates 50 concurrent logged-out users browsing a shop archive with 500 products, 10 categories, and tax calculation enabled. Measures product query cache hit rate and PHP execution time.
- WooCommerce checkout: simulates 20 concurrent users in the cart/checkout flow. Measures session writes per second, shipping rate lookup cache performance, and nonce generation under load.
- Membership site dashboard: simulates 30 concurrent logged-in users loading a dashboard that normally runs 8 database queries, all of which should be cached after the first request. Measures cache hit rate and response time under sustained load.
Benchmark Results
| Metric | Redis 7.2 | Valkey 8.0 | DragonflyDB 1.x |
|---|---|---|---|
| WC product archive, PHP exec time (avg) | 48ms | 45ms | 41ms |
| WC product archive, cache hit rate | 97.2% | 97.4% | 97.1% |
| WC checkout, session writes/sec (20 concurrent) | 2,840 | 3,120 | 4,890 |
| Membership dashboard, PHP exec time (avg) | 31ms | 29ms | 27ms |
| Memory usage (500 products + sessions) | 42MB | 41MB | 38MB |
| Connection overhead per request (phpredis) | 0.4ms | 0.4ms | 0.5ms |
| Peak throughput (ops/sec, simple GET/SET) | 180K | 220K | 580K |
Key Finding: DragonflyDB’s Advantage Only Matters at Scale
DragonflyDB’s 25x throughput claim holds up in synthetic ops/sec benchmarks. In real WordPress workloads, however, the improvement in response time is 12–17%: real, but not transformational for a typical WordPress site. The reason is that object cache operations are not the bottleneck for most sites; database queries, PHP execution, and network latency are. Cutting cache lookup time from 0.4ms to 0.35ms does not move the needle when your total page generation time is 120ms.
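The "does not move the needle" claim can be checked with back-of-envelope arithmetic. The sketch below uses the 120ms page generation figure from above; the 30-lookups-per-page count and per-lookup latencies are hypothetical illustrations, of the same order as the measured connection overhead in the table.

```python
# Back-of-envelope: how much does a faster cache lookup shave off a page?
# Assumptions (hypothetical): 120ms total page time, 30 cache lookups/page.
def page_time_ms(total_ms, lookups, old_lookup_ms, new_lookup_ms):
    """Total page time after replacing per-lookup cache latency."""
    return total_ms - lookups * (old_lookup_ms - new_lookup_ms)

before = 120.0
after = page_time_ms(before, lookups=30, old_lookup_ms=0.40, new_lookup_ms=0.35)
saved_pct = (before - after) / before * 100
print(f"{after:.1f}ms page time, {saved_pct:.2f}% saved")  # 118.5ms, 1.25% saved
```

Even a generous lookup count yields roughly a 1% end-to-end improvement, which is why the synthetic ops/sec gap does not translate into page load wins.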
Where DragonflyDB’s multi-core architecture genuinely wins is session write throughput under high concurrency: 4,890 session writes per second versus Redis’s 2,840. This matters for WooCommerce stores handling hundreds of concurrent checkout sessions. For a store processing 10–20 simultaneous checkouts, Redis and Valkey are indistinguishable. For a flash sale with 200+ concurrent checkouts, DragonflyDB’s session write throughput becomes meaningful.
Valkey vs Redis: Effectively Identical for WordPress
Valkey’s improvements over Redis 7.2 in WordPress workloads are measurable but practically negligible: 3–7% faster on most metrics. The real reason to choose Valkey over Redis in 2026 is the open license and the better long-term governance trajectory. AWS ElastiCache and Google Cloud Memorystore both now offer Valkey-compatible managed instances. If you are currently on a managed Redis service and your provider offers Valkey, migrating is safe and future-proofs your stack at zero cost.
WordPress Plugin Compatibility
All three backends work with the WordPress Redis Object Cache plugin, via either the phpredis extension or the Predis library. The compatibility matrix:
| Plugin / Integration | Redis 7.2 | Valkey 8.0 | DragonflyDB |
|---|---|---|---|
| WordPress Redis Object Cache (Till Krüss) | Full | Full | Full |
| WP Rocket object cache integration | Full | Full | Full |
| LiteSpeed Cache object store | Full | Full | Full |
| WooCommerce session handler | Full | Full | Full |
| WP-CLI cache commands | Full | Full | Full |
| Lua scripting (some plugins) | Full | Full | Partial * |
* DragonflyDB’s Lua support covers the common Redis scripting subset used by most plugins, but some advanced use cases (particularly plugins that combine MULTI/EXEC transactions with Lua) may hit edge case issues. Test with your specific plugin stack.
Which Should You Choose?
| Scenario | Recommendation |
|---|---|
| Already running Redis, small-medium site | Stay on Redis, migration cost outweighs benefit |
| Setting up object cache fresh in 2026 | Valkey, same performance as Redis, open license |
| Managed hosting (AWS, GCP) | Valkey (ElastiCache/Memorystore Valkey tier) |
| WooCommerce with 100+ concurrent checkouts | DragonflyDB, session write throughput wins |
| Shared hosting with Redis available | Use whatever the host provides, you cannot choose |
| Cost is the primary concern | All three are free self-hosted, pick Valkey |
How to Set Up Each Backend for WordPress
All three backends use the same WordPress-side setup. The only difference is what you install on the server. The WordPress Redis Object Cache plugin by Till Krüss is the recommended drop-in for all three: it is actively maintained, provides an admin diagnostics panel, and handles connection pooling correctly.
Server-Side Installation
Redis 7.x (Ubuntu/Debian): `apt install redis-server`. Edit `/etc/redis/redis.conf` to set `maxmemory 256mb` and `maxmemory-policy allkeys-lru`. Bind to 127.0.0.1 only; never expose Redis to the public network without authentication.
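Collected in one place, a minimal `redis.conf` fragment for a WordPress object cache might look like this (values are the examples from above; adjust `maxmemory` to roughly 25–30% of your server’s RAM). The same fragment works unchanged in Valkey’s configuration file.

```conf
# /etc/redis/redis.conf: minimal WordPress object-cache settings
bind 127.0.0.1                 # never expose the cache port publicly
port 6379
maxmemory 256mb                # cap cache memory (~25-30% of server RAM)
maxmemory-policy allkeys-lru   # evict least-recently-used keys when full
```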
Valkey (Ubuntu/Debian): packages are available via the official Valkey repository; run `apt install valkey-server` after adding the repo. The configuration file is identical in structure to Redis, and the same `maxmemory` and `maxmemory-policy` settings apply. Valkey listens on the same default port (6379) as Redis, so you cannot run both simultaneously on the same server without changing one’s port.
DragonflyDB (Ubuntu/Debian): available as a binary release or Docker container. Launch with `dragonfly --maxmemory=256mb --proactor_threads=4`; the `proactor_threads` flag should match your vCPU count to take advantage of the multi-threaded architecture (on a 4-core VPS, set it to 4). DragonflyDB also listens on port 6379 by default.
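If you run the binary release under systemd, a minimal unit file might look like the sketch below. The flags are the ones described above; the binary path is an assumption for your install.

```ini
# /etc/systemd/system/dragonfly.service (sketch; binary path is assumed)
[Unit]
Description=DragonflyDB in-memory cache
After=network.target

[Service]
ExecStart=/usr/local/bin/dragonfly --maxmemory=256mb --proactor_threads=4
Restart=on-failure

[Install]
WantedBy=multi-user.target
```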
WordPress-Side Setup (Same for All Three)
Once your cache server is running, the WordPress configuration is identical regardless of backend:
- Install the WordPress Redis Object Cache plugin from the plugin directory
- Add `define('WP_REDIS_HOST', '127.0.0.1');` and `define('WP_REDIS_PORT', 6379);` to `wp-config.php`
- Go to Settings → Redis in the WordPress admin and click “Enable Object Cache”
- The plugin writes the `object-cache.php` drop-in to `wp-content/`
- Verify the connection status shows “Connected” in the plugin’s diagnostics panel
For resilience, set `define('WP_REDIS_TIMEOUT', 1);` and `define('WP_REDIS_READ_TIMEOUT', 1);` in `wp-config.php`. This ensures that if the cache server becomes unresponsive, WordPress falls back to the in-memory non-persistent cache instead of timing out and causing slow page loads; it is a critical resilience setting that is often missed.
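Put together, the `wp-config.php` additions from the steps above look like this (all four constants are the ones the Redis Object Cache plugin reads):

```php
// wp-config.php: persistent object cache connection settings
define('WP_REDIS_HOST', '127.0.0.1');
define('WP_REDIS_PORT', 6379);

// Fail fast if the cache server is unreachable, so WordPress
// falls back to the non-persistent cache instead of hanging.
define('WP_REDIS_TIMEOUT', 1);
define('WP_REDIS_READ_TIMEOUT', 1);
```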
Common Configuration Pitfalls

These mistakes show up repeatedly when WordPress developers set up object caching for the first time:
- No maxmemory limit set. Without a `maxmemory` limit, Redis (and Valkey and DragonflyDB) will consume all available server RAM as the cache grows. On a server where WordPress and the database also reside, this will cause the OS to kill processes when memory is exhausted. Always set a `maxmemory` limit, typically 25–30% of total server RAM.
- Wrong eviction policy. The default eviction policy for Redis is `noeviction`: when memory is full, it returns errors rather than evicting old data. For WordPress object caching, use `allkeys-lru` (evict the least recently used keys regardless of whether they have an expiry set). This lets the cache self-manage under memory pressure without errors.
- Not setting connection timeouts. If your cache server crashes or becomes unreachable, PHP will wait for the full connection timeout (default: several seconds) before proceeding without the cache. Set short timeouts (1 second) so WordPress degrades gracefully rather than hanging.
- Exposing the cache port publicly. Redis and compatible servers have no authentication by default. If port 6379 is publicly accessible, your cache is accessible to anyone. Always bind to 127.0.0.1 or a private network interface only.
- Running the object cache on the same server under heavy load. Object caching competes with PHP-FPM, MySQL, and nginx for CPU and memory. On a very resource-constrained server (512MB RAM or 1 vCPU), the overhead of running a cache server can outweigh its benefits. The minimum practical setup is 2GB RAM with the cache limited to 256MB.
Monitoring Cache Performance in WordPress
Enabling object caching is only the beginning; you also need to verify it is actually helping. The WordPress Redis Object Cache plugin’s admin panel shows real-time hit rate, total keys, and memory usage. Target a cache hit rate above 90% for steady-state traffic. A hit rate below 70% suggests your cache is either too small (evicting keys before they are requested again) or your TTLs are too short.
The Query Monitor plugin complements the cache diagnostics by showing which WordPress functions are making cache calls and whether those calls are hitting or missing. This is invaluable for identifying plugins that are bypassing the cache or setting very short TTLs that prevent cache reuse across requests.
For server-level monitoring, the `redis-cli info stats` command (which works identically for Valkey; DragonflyDB has a compatible INFO command) gives you the keyspace hit rate, evicted key count, and connected clients. A high eviction rate means your `maxmemory` limit is too small for your workload. Gradually increase the `maxmemory` allocation and watch whether the hit rate improves and evictions drop.
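The hit rate is not reported as a single field; it has to be derived from the `keyspace_hits` and `keyspace_misses` counters in the INFO output. A small sketch that computes it from the raw text (for example, the output of `redis-cli info stats`, which has the same shape for all three backends):

```python
def cache_hit_rate(info_text: str) -> float:
    """Compute the keyspace hit rate (%) from `redis-cli info stats` output."""
    stats = {}
    for line in info_text.splitlines():
        # INFO lines look like "keyspace_hits:9500"; "#" lines are headers
        if ":" in line and not line.startswith("#"):
            key, _, value = line.partition(":")
            stats[key.strip()] = value.strip()
    hits = int(stats.get("keyspace_hits", 0))
    misses = int(stats.get("keyspace_misses", 0))
    total = hits + misses
    return 100.0 * hits / total if total else 0.0

sample = "# Stats\nkeyspace_hits:9500\nkeyspace_misses:500\nevicted_keys:0\n"
print(f"{cache_hit_rate(sample):.1f}%")  # prints "95.0%"
```

Run it periodically and alert when the rate drops below your steady-state baseline, not just when the site goes down.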
One often-overlooked metric is the number of keys in your keyspace. A WordPress site with 500 products typically maintains 2,000–5,000 cache keys during normal operation. A site with hundreds of thousands of keys may be storing data it never retrieves, often caused by plugins that generate unique cache keys per-user but with long TTLs. The WordPress admin performance guide covers how to audit plugin overhead more broadly, which applies equally to object cache key proliferation caused by poorly coded plugins.
Automated monitoring with UptimeRobot or Healthchecks.io should include a check that verifies the cache server is running and accepting connections, not just that the WordPress site is responding. A cache server that crashes silently will degrade performance significantly without triggering an obvious “site is down” alert: WordPress will continue functioning, but noticeably slower, which is often interpreted as a hosting issue rather than a cache issue.
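Such a check can be as small as a PING over the Redis protocol, which all three backends answer with `+PONG`. The sketch below opens a TCP socket, sends the inline PING command, and inspects the reply; the host and port defaults are assumptions for a typical localhost setup.

```python
import socket

def cache_is_healthy(host: str = "127.0.0.1", port: int = 6379,
                     timeout: float = 1.0) -> bool:
    """Return True if the cache server answers PING with +PONG."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"PING\r\n")        # inline command form of PING
            reply = sock.recv(64)
            return reply.startswith(b"+PONG")
    except OSError:                           # refused, unreachable, or timed out
        return False
```

Wire this into a cron job or a Healthchecks.io ping so a silently crashed cache server raises an alert instead of just slowing the site down.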
Frequently Asked Questions
Can I migrate from Redis to Valkey without changing my WordPress configuration?
Yes. Valkey is fully protocol-compatible with Redis. Your existing object-cache.php drop-in, phpredis extension, and wp-config.php connection settings work without modification. The migration is: stop Redis, start Valkey on the same port, restart PHP-FPM. The cache will be cold on first boot (no data migrated) but will warm up within a few minutes of normal traffic. There is no WordPress-side configuration change required.
Does object caching work on shared hosting?
Only if your shared host explicitly offers Redis or Memcached as an add-on. Most basic shared hosting plans do not, because persistent memory processes require server-level access. SiteGround, Kinsta, WP Engine, and Cloudways all offer Redis object caching on their plans. Generic shared hosting from providers like Bluehost or GoDaddy typically does not. If your host does not offer it, you cannot use persistent object caching regardless of which backend you prefer.
Is DragonflyDB production-stable for WordPress in 2026?
For standard WordPress and WooCommerce workloads, yes. DragonflyDB 1.x has been production-stable since mid-2024 for the Redis command subset that WordPress uses. The known differences (Lua scripting edge cases, some RESP3 protocol behaviors) are unlikely to affect typical WordPress plugin usage. For standard caching workloads (GET, SET, DEL, EXPIRE, HGETALL, and the transient-related commands), DragonflyDB is stable. The caveat is maturity: Redis has a decade of production hardening that Dragonfly does not yet have. For critical e-commerce infrastructure, test thoroughly before replacing Redis.
How much memory should I allocate to my object cache?
A typical WordPress site with WooCommerce needs 128–256MB of object cache memory to achieve high hit rates. A site with a large product catalog (10,000+ products) or many concurrent user sessions may need 512MB to 1GB. All three backends support `maxmemory` configuration and LRU eviction policies; set a `maxmemory` limit and use the `allkeys-lru` eviction policy so the cache self-manages rather than running out of memory and refusing writes. The WordPress Redis Object Cache plugin’s admin panel shows your current hit rate and memory usage, which tells you whether your allocation is appropriate.
Transients vs Object Cache: Understanding the Overlap
WordPress developers often confuse transients with the object cache, and the distinction matters for understanding what a persistent backend actually changes.
Transients (set_transient(), get_transient()) are WordPress’s API for storing temporary data with an expiry time. By default, transients are stored in the wp_options database table. When you install a persistent object cache, WordPress automatically routes transient storage through the cache backend instead of the database, so transients that previously required a database write and read are now in-memory operations. This is one of the biggest practical benefits of enabling object caching: every plugin that uses transients gets faster without any code changes.
The object cache API (`wp_cache_set()`, `wp_cache_get()`) is a lower-level cache that, by default, works only within a single PHP request. A persistent backend makes it cross-request, meaning data cached by `wp_cache_set()` in one request is available to the next. WordPress core uses the object cache for database query results, term lookups, post metadata, and user data; all of these benefit immediately from a persistent backend.
The practical implication: when you install Redis, Valkey, or DragonflyDB as your object cache backend, you are simultaneously improving both transient performance (database writes become memory writes) and cross-request query caching (repeated lookups return in-memory results). Together, these two improvements explain why the performance gain from enabling object caching is often larger than expected: it is not one type of cache but two integrated systems becoming faster at once.
The Bottom Line
For most WordPress sites in 2026, the difference between Redis, Valkey, and DragonflyDB is not in page load times but in operational considerations. Redis is the safe, universally supported choice with a decade of community knowledge behind it. Valkey is the forward-looking open-source choice for new deployments, with identical WordPress compatibility and a trajectory that managed hosting providers are following. DragonflyDB is the high-concurrency specialist, with a real throughput advantage under load spikes that standard WordPress sites will never hit.
Enable an object cache if you do not have one: the improvement over no persistent caching is dramatic regardless of which backend you choose. If you are starting fresh, use Valkey. If you are running Redis and it is working, keep it. And if you are handling sustained high-concurrency WooCommerce checkout load, DragonflyDB deserves a production test. The right object cache configuration, combined with full-page caching and a CDN, is the complete performance stack for any serious WordPress deployment in 2026, and it starts with choosing the right persistent backend for your specific workload and scale.
Last modified: March 26, 2026