Yeah, you know, requests per second. It's just interesting how many requests Redis handles. In my project it's about 6000 now and Redis simply doesn't give a damn and performs as usual. In your case Redis is also sharded, so I'm also curious whether there are any bottlenecks in that mechanism. Our admins mentioned something about disk I/O when explaining why we won't shard and should wait for a stable release of Redis Cluster.
Redis can pretty easily sustain 100k+ requests per second. Sharding doesn't impact that one way or the other, since we shard from the application layer; each Redis shard doesn't know or care about any others. Since we direct reads to slaves and only writes to the master, we can push many hundreds of thousands or even millions of reads per shard.

As for disk I/O with sharding: that doesn't make any sense. Redis is a memory-based store, and disk I/O only matters for persistence, which is not at all related to sharding.

As for waiting for Redis Cluster: the cluster solution loses all the multikey operations, which in our case means losing many of the best things about Redis. In our opinion the sharding strategy is an application layer problem.
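Application-layer sharding of the kind described above can be as simple as hashing the key to pick a shard. A minimal sketch (the `SHARDS` list and `shard_for` helper are illustrative names, not from the article):

```python
import zlib

# Hypothetical shard addresses; each Redis instance is independent
# and unaware of the others.
SHARDS = ["redis-a:6379", "redis-b:6379", "redis-c:6379"]

def shard_for(key: str) -> str:
    """Route a key to a shard deterministically by hashing it."""
    return SHARDS[zlib.crc32(key.encode()) % len(SHARDS)]

# Note the trade-off mentioned above: multikey operations (e.g. MGET)
# only work when all the keys happen to land on the same shard.
print(shard_for("user:1001"))
```

The application would then open a connection to whichever address `shard_for` returns for the key it is reading or writing.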
Good article, thanks! I just have one Redis instance, without sharding or master/slave. Is there any way to withstand server failures when I have just one instance?
Well, if you use the built-in persistence, a failure won't lose all your data, but you will lose some. It depends in any case on what "withstand" means in your situation, i.e. how much data is too much to lose in a failure condition.
> @courtneycouch
> built in persistence

How do I achieve "built-in persistence"? Using AOF persistence (I think that makes Redis reads/writes slow)? Thanks
Well, a few options:

- use an SSD (or better, RAID'd SSDs)
- use a master that does no persistence and a slave handling the AOF

We actually use both of these: RAID'd SSDs, and persisting only on a slave server. This is already described in the article as well :)
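The "persist only on a slave" option above boils down to a couple of standard redis.conf directives. A sketch, assuming a master reachable at a hypothetical hostname `redis-master`:

```conf
# master.conf — master does no persistence at all
save ""              # disable RDB snapshots
appendonly no        # disable the AOF

# slave.conf — the slave takes the AOF write load instead
slaveof redis-master 6379
appendonly yes
appendfsync everysec # fsync once per second, the usual latency/durability trade-off
```

With this layout the master serves writes at full memory speed, while the slave absorbs the disk I/O of the AOF (which is what the commenter was worried about slowing Redis down).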